Deadbots may speak for you after you die, but is it ethical?

Machine learning systems are increasingly intruding into our daily lives, challenging our moral and social values and the rules that govern them. Nowadays, virtual assistants threaten the privacy of the home; news recommenders shape how we understand the world; risk-prediction systems tell social workers which children to protect from abuse; and data-driven recruiting tools rank your chances of landing a job. However, the ethics of machine learning remain unclear to many.

While researching material on the subject for the Ethics and Information and Communication Technologies course for young engineers at UCLouvain, Belgium, I was particularly struck by the case of Joshua Barbeau, a 33-year-old man who used a website called Project December to create a chatbot that would simulate a conversation with his deceased fiancée, Jessica.

Chatbots imitating dead people

Known as a deadbot, this type of chatbot allowed Barbeau to exchange text messages with an artificial “Jessica”. Despite the ethically controversial nature of the case, I have rarely found material that goes beyond the simple factual aspect and analyzes the case through an explicit normative lens: why would it be good or bad, ethically desirable or reprehensible, to develop a deadbot?

Before we dive into these questions, let’s put things into context: Project December was created by game developer Jason Rohrer to let people customize chatbots with the personality they wanted to interact with, as long as they paid for it. The project was built on a GPT-3 API, the text-generating language model developed by the artificial intelligence research company OpenAI. Barbeau’s case opened a rift between Rohrer and OpenAI because the company’s guidelines explicitly prohibit the use of GPT-3 for sexual, romantic, self-harm, or bullying purposes.
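
Project December’s internals have not been published, but the paragraph above gives the gist: a paid front end that wraps a persona description and a running transcript around a text-completion API. As a rough, purely illustrative sketch (the prompt format, the “davinci” engine name and all parameter values below are assumptions, not Project December’s actual code), a persona chatbot built on the legacy pre-1.0 openai Python client might look something like this:

# Illustrative sketch only: a persona-style chatbot layered on a text-completion
# API such as GPT-3's. Everything here is an assumption for illustration.
# Uses the legacy (pre-1.0) openai Python client, which served GPT-3 at the time.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The "personality" is ordinary seed text describing who the bot should sound like.
persona = (
    "The following is a conversation with Jessica. "
    "Jessica is warm, witty, and loves painting.\n"
)

history = persona  # running transcript the model keeps continuing

def reply(user_message: str) -> str:
    """Append the user's message to the transcript and ask the model
    to continue it in the persona's voice."""
    global history
    history += f"Human: {user_message}\nJessica:"
    completion = openai.Completion.create(
        engine="davinci",      # GPT-3 engine name of that era (assumed choice)
        prompt=history,
        max_tokens=80,
        temperature=0.8,
        stop=["Human:"],       # stop before the model writes the user's next turn
    )
    answer = completion.choices[0].text.strip()
    history += f" {answer}\n"
    return answer

if __name__ == "__main__":
    print(reply("Hi, it's been a while."))

The point is simply that the “personality” is ordinary seed text: everything ethically charged about a deadbot lives in that text and in the personal data used to write it.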

Calling OpenAI’s position hyper-moralistic and claiming that people like Barbeau were “consenting adults”, Rohrer shut down the GPT-3 version of Project December.

While we may all have hunches about whether it is right or wrong to develop a machine-learning deadbot, spelling out its implications is no easy task. This is why it is important to address the ethical questions raised by the case step by step.

Is Barbeau’s consent enough to develop Jessica’s deadbot?

Since Jessica was a real person (albeit deceased), Barbeau consenting to the creation of a deadbot impersonating her seems insufficient. Even when they die, people are not mere things that others can do with as they wish. This is why our societies consider it wrong to profane or disrespect the memory of the dead. In other words, we have certain moral obligations to the dead, in that death does not necessarily imply that people cease to exist in a morally relevant way.

Likewise, the debate is open as to whether we should protect the fundamental rights of the dead (for example, privacy and personal data). Developing a deadbot that replicates someone’s personality requires large amounts of personal information such as social media data (see what Microsoft or Eternal offer) which has been shown to reveal very sensitive traits.

If we agree that it is unethical to use people’s data without their consent while they are alive, why would it be ethical to do so after they are dead? In this sense, when developing a deadbot, it seems reasonable to ask for the consent of the one whose personality is reflected – in this case, Jessica.

When the imitated person gives the green light

So, the second question is: would Jessica’s consent be enough to consider the creation of her deadbot ethical? What if it was degrading to her memory?

The limits of consent are, indeed, a controversial issue. Take as a paradigmatic example the “Rotenburg Cannibal”, who was sentenced to life imprisonment even though his victim had agreed to be eaten. In this regard, it has been argued that it is unethical to consent to things that may harm us, whether physically (selling one’s own vital organs) or abstractly (alienating one’s own rights).

In what specific terms something might harm the dead is a particularly complex question that I will not analyze in detail. It should be noted, however, that even though the dead cannot be harmed or offended in the same way as the living, this does not mean that they are invulnerable to evil deeds, nor that such deeds are ethical. The dead may suffer attacks on their honour, reputation or dignity (for example, posthumous smear campaigns), and disrespect towards the dead also harms their loved ones. Moreover, behaving badly towards the dead leads us to a society that is more unjust and less respectful of the dignity of people in general.

Finally, given the malleability and unpredictability of machine learning systems, there is a risk that the consent provided by the person being imitated (while alive) will mean little more than a blank check on the paths the deadbot might eventually take.

Considering all of this, it seems reasonable to conclude that if the development or use of the deadbot does not correspond to what the person being imitated agreed to, their consent should be considered invalid. Moreover, if it clearly and intentionally violates their dignity, not even their consent should be enough to consider it ethical.

Who takes responsibility?

A third question is whether artificial intelligence systems should aspire to imitate every kind of human behavior (irrespective, here, of whether this is possible).

This is a long-standing concern in the field of AI and is closely related to the dispute between Rohrer and OpenAI. Should we develop artificial systems capable, for example, of taking care of others or of making political decisions? There seems to be something about these skills that makes humans different from other animals and machines. Therefore, it is important to note that the instrumentalization of AI for techno-solutionist purposes such as replacing loved ones can lead to a devaluation of what characterizes us as human beings.

The fourth ethical question is who bears responsibility for the results of a deadbot – particularly in the case of harmful effects.

Imagine that Jessica’s deadbot had autonomously learned to behave in a way that degraded her memory or irreversibly damaged Barbeau’s mental health. Who would take responsibility? AI experts answer this slippery question with two main approaches: first, the responsibility lies with those involved in the design and development of the system, insofar as they do so according to their particular interests and views of the world; second, machine learning systems are context-dependent, so the moral responsibility for their outputs must be distributed among all the agents that interact with them.

I place myself closer to the first position. In this case, since there is an explicit co-creation of the deadbot that involves OpenAI, Jason Rohrer and Joshua Barbeau, I consider it logical to analyze the level of responsibility of each party.

First, it would be difficult to hold OpenAI accountable after explicitly prohibiting the use of their system for sexual, romantic, self-harming, or bullying purposes.

It seems reasonable to assign a significant level of moral responsibility to Rohrer because he: (a) explicitly designed the system that created the deadbot; (b) did so without taking steps to avoid potential negative results; (c) was aware that it was not complying with OpenAI guidelines; and (d) benefited from it.

And since Barbeau customized the deadbot based on particular traits of Jessica, it seems legitimate to hold him co-responsible should it degrade her memory.

Ethics, under certain conditions

So, going back to our first general question of whether it is ethical to develop a machine learning deadbot, we could give an affirmative answer provided that:

  • both the person being imitated and the person personalizing and interacting with it have given their free consent to as detailed a description as possible of the design, development and uses of the system;

  • arrangements and uses that do not respect what the imitated person has consented to or that go against their dignity are prohibited;

  • those involved in its development and those who benefit from it take responsibility for its potential negative consequences, both retroactively (to account for events that have occurred) and prospectively (to actively prevent them from happening in the future).

This case illustrates why the ethics of machine learning matter. It also illustrates why it is essential to open a public debate that can better inform citizens and help us develop policy measures to make AI systems more open, socially fair and in line with fundamental rights.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sherry J. Basler