The Ethics of AI

How should we fit artificial intelligences into our moral frameworks? Will they be equal persons, or mere tools, or minds like those of animals, or something else? Will they be discrete individuals, or a single internet-based global collective, or something in between - or will they shift between different modes of existence so easily that we have to rethink the very category of an individual?

I’m interested in this topic partly because I’m interested in the moral value of consciousness, and partly because I’m interested in how we can empathise with AIs, or allow them to empathise with us. But most fundamentally, I think that understanding mental combination is going to be even more important for thinking about AI than it is for thinking about humans, because the boundaries between AI minds may be much weaker than those between human minds. The framework of 'functionalist combinationism' outlined in Chapter 5 of Combining Minds is intended to structure how we think about individually intelligent parts of intelligent wholes, including artificial ones.

I’m particularly interested in the ways that AI and neurotechnology might blur the boundaries between individual and social, a topic that connects with my work on collective intentionality.

  • "Overlapping Minds and the Hedonic Calculus" (2024, co-authored with Jeff Sebo), Philosophical Studies 181: 1487–1506.

    A paper co-written with Jeff Sebo on the ethics of connected minds, as part of NYU’s Mind, Ethics, and Policy program. The paper starts from a puzzle that arises if two minds can overlap, sharing individual instances of pleasure or suffering. If they completely overlap, it seems silly to double-count the value of their shared experiences; but if they overlap only a little, it seems unfair not to count each of them as just as morally weighty as anyone else. We argue that these intuitions can be reconciled by a sufficiently strong holism about the hedonic character of experience: the same experience can have different value in different minds.

  • Chapter 8 of Combining Minds discusses this question as an extended thought-experiment about what it might be like for two minds to become (parts of) a single mind, through brain-linking technology.

A paper about the common assumption that being a rational, reflective agent requires self-awareness: knowing which individual you are. I argue that there is at least one alternative: knowing which connected group you belong to. This alternative, however, would require extremely intimate relations among all members of the group, likely impossible without the use of neurotechnology or AI.

  • “When Does Thinking Become Communication? Humans, AI, and Porous Minds” (in progress, for Communication with AI: Philosophical Perspectives, eds. R. Sterken and H. Cappelen)

A paper arguing, first, that there is only a difference of degree between a single agent engaged in internal reflection and a group of many agents communicating, and, second, that future technology is likely to create systems whose informational processes fall halfway between communication and internal reflection.