Privacy in ML and AI

Under construction.

Algorithmic Fairness and Inclusive Datasets

  • https://techcrunch.com/2020/07/24/four-steps-for-an-ethical-data-practices-blueprint/ - a few good examples

Facial Recognition

  • https://www.lawfareblog.com/facial-recognition-less-bad-option

Generative AI

  • LLMs hallucinate: there are documented examples of models falsely claiming that a person is dead, or worse. https://points.datasociety.net/what-we-can-learn-from-how-chatgpt-handles-data-densities-and-treats-users-f62b2cb222c6 - on data densities and the lack of training data from the Global South
  • https://web.archive.org/web/20230325030619/https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
  • What does a right to be forgotten look like if it’s technically infeasible to remove someone from the model, e.g. because it isn’t understood how that person is encoded within it? You could retrain the model, but the original training set may no longer exist (it scooped up the whole internet as it was at the time) - and even if it does exist, how do you find all potential references to that person? They may be indirect. The current approach would likely be naive input filtering, which we know can be defeated by attacks
  • How do you prevent content, e.g. this course, from being included in an LLM training set? robots.txt is somewhat effective, and you can filter by user agent. There is no standardized way of saying “all ML datasets”, though - you have to remove yourself from each dataset of the internet individually and hope that, e.g., nobody linked to your site from Reddit. Maybe we should agree on an aibots.txt. A list of the datasets ChatGPT was trained on is in the paper.
  • the question of consciousness - https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/
  • Bing override switch, and its rules: https://twitter.com/marvinvonhagen/status/1623658144349011971
  • maybe https://betterwithout.ai/AI-is-public-relations
  • https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned
  • Standards and security measures corps should take: https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/
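The robots.txt approach mentioned above can be sketched concretely. This is a minimal, illustrative example only - it assumes the crawler honors robots.txt, which is voluntary; any crawler can simply ignore the file, and it does nothing about copies of your content already captured in existing datasets or mirrored elsewhere. CCBot (Common Crawl, a major source of LLM training data) and GPTBot (OpenAI) are real published crawler tokens:

```
# Opt out of Common Crawl, a major source of LLM training sets
User-agent: CCBot
Disallow: /

# Opt out of OpenAI's crawler
User-agent: GPTBot
Disallow: /

# All other crawlers (e.g. search engines) may still index the site
User-agent: *
Allow: /
```

Note that this is per-crawler opt-out, which is exactly the problem described above: there is no single token meaning “all ML datasets”, so you must track down and list each crawler individually.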

What does privacy mean now in a world where anyone can easily produce realistic fake photos, videos, writing, or audio supposedly by or of “you”? We’re a long way from the world of gentlemen opening each other’s mail.

AI Policy Frameworks

  • Universal Guidelines for AI
  • OECD AI Principles / G20 AI Guidelines
  • UNESCO Recommendations on AI Ethics
  • OSTP AI Bill of Rights
  • EU AI Act
  • Council of Europe AI Treaty

For further reading, the Center for AI and Digital Policy offers a range of resources, along with internships for US-based students and free part-time AI research and policy seminars (starting with the CAIDP AI Policy Certificate).

The Multistakeholder Approach: How Can I Contribute to Policy?

“Decisions at ICANN are made by people who show up. People who scream most loudly.” - Kathy Kleiman

As discussed in Ethical Dilemmas in Privacy, ‘universal’ ethical guidelines need some basic ideological foundation to be meaningful: you can’t simply tell people “do the right thing” if we’re not in agreement about what the right thing is. The ideological basis for the frameworks above, and for the Center for AI and Digital Policy, is the UN Universal Declaration of Human Rights, democracy, and the rule of law. This is my own ideological background, and I am a moral universalist - I struggle to find any ‘room to disagree’ when it comes to these ethical fundamentals. Capitalism is also part of this basis: it isn’t philosophically inherent in any of the three pillars above, but when they’re espoused by the West, capitalism tends to come along as part of the package deal. This is the ideological basis that international law on AI is being built upon. If you have different cultural goggles and see room for disagreement here - or you share these values, but think that other important values such as Ubuntu are being neglected in discussions dominated by the US and EU - then your research and policy input is needed ASAP.

So how can one get involved in policy debates on AI and on internet governance and technology policy more broadly? You don’t need to become an academic, policy analyst, or politician to take part.

Make your voice heard in policy consultations. One of the simplest ways is to respond to requests for comment. For example, as I write this (April 2023), consultation is open for the United Nations Global Digital Compact, which is expected to “outline shared principles for an open, free and secure digital future for all”. Informal consultations are being led by the country co-facilitators Rwanda and Sweden. You can sign up for online meetings discussing specific themes (e.g. data protection, human rights online) or submit a written statement as an individual or on behalf of an organization. This is open to all. The hardest thing is keeping track of which policies and guidelines are currently being negotiated; this consultation process, for example, is open for just two months.

Get involved with internet governance organizations. There are too many free opportunities here to list, but here are a few pointers to get you started:

  • Diplo offers a free Introduction to Internet Governance ebook
  • The Geneva Internet Platform Digital Watch Observatory has digital policy guides and weekly and monthly policy newsletters
  • Membership of the Internet Society is free and unlocks access to their free courses and participation in your local chapter, their Special Interest Groups (SIGs), and their fellowships and grant programs. (I created this course as part of my Early Career Fellowship at the Internet Society, and would highly recommend it!)
  • Membership of ICANN is also free - check out their beginner’s guide to learn about what they do and how you can participate.
  • Take part in the Internet Governance Forum
  • Participate in the development of open standards

Career change. If you genuinely would consider a career change to get involved here - or you’re still a student and are trying to chart a course for your career - I’d highly recommend the career advice at 80,000 Hours. A related resource is the Effective Altruism Opportunity Board, which includes various free courses and paid internships to break into the policy world, both for AI and for other existential risk topics such as biosecurity and nuclear weapons control.

AI Safety

  • https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4370566 - Algorithmic Black Swans, EU AI Act, NIST AI Risk Management Framework, AI Bill of Rights
  • https://twitter.com/lilianedwards/status/1639566493754228738 “The EU AI Act is an opportunity to apply international human rights principles… (to) non-consensual human experimentation to AI systems”
  • https://www.unesco.org/en/articles/unesco-member-states-adopt-first-ever-global-agreement-ethics-artificial-intelligence

Resources