A controversy erupts over a non-consensual AI mental health experiment

An AI-generated image of a person talking to a secret robotic therapist.

Ars Technica

On Friday, Koko co-founder Rob Morris announced on Twitter that his company had run an experiment offering AI-written mental health advice to 4,000 people without informing them first, The Verge reports. Critics have called the experiment deeply unethical because Koko failed to obtain informed consent from those seeking advice.

Koko is a nonprofit mental health platform that connects youth and adults in need of mental health help with volunteers through messaging apps like Telegram and Discord.

On Discord, users log into the Koko Cares server and send direct messages to a Koko bot, which asks a series of multiple-choice questions (e.g., “What’s the darkest thought you have about this?”). It then anonymously shares one person’s concerns – written as a few sentences of text – with another person on the server, who can respond anonymously with a short message of their own.

During the AI experiment – which covered about 30,000 messages, according to Morris – volunteers helping others had the option to use a response automatically generated by OpenAI’s GPT-3 large language model instead of writing one themselves. (GPT-3 is the technology behind the recently popular ChatGPT chatbot.)

A screenshot from a Koko demonstration video showing a volunteer selecting a therapy response written by GPT-3, an AI language model.


In his tweet thread, Morris says that people rated the AI-generated responses highly until they learned they were AI-written, suggesting a substantial lack of informed consent during at least one phase of the experiment:

AI-created (and human-monitored) messages were rated significantly higher than human-authored messages (p<0.001). Response times dropped by 50% to well under a minute. And yet... we took that off our platform pretty quickly. Why? When people found out that a machine was co-creating the messages, it didn't work. Simulated empathy feels weird and empty.

In the introduction to the server, the admins write: “Koko connects you with real people who really understand you. No therapists, no counselors, just people like you.”

Soon after the Twitter thread was published, Morris received many replies criticizing the experiment as unethical, citing concerns about the lack of informed consent and asking whether an Institutional Review Board (IRB) had approved the experiment. In the United States, it is illegal to conduct research on human subjects without legally effective informed consent unless an IRB determines that consent can be waived.

In a tweeted response, Morris said that the experiment would be “exempt” from informed consent requirements because he did not plan to publish the results, which prompted a parade of horrified replies.

The idea of using AI as a therapist is far from new, but the difference between Koko’s experiment and typical AI therapy approaches is that patients usually know they aren’t talking with a real human. (Interestingly, one of the earliest chatbots, ELIZA, simulated a psychotherapy session.)

In Koko’s case, instead of a direct chat format, the platform offered a hybrid approach in which a human intermediary could preview the AI-written message before it was sent. Still, critics argue that without consent, Koko violated ethical rules designed to protect vulnerable people from harmful or abusive research practices.

On Monday, Morris shared a post responding to the controversy that explains Koko’s path forward with GPT-3 and AI in general. He writes: “I accept critiques, concerns, and questions about this work with empathy and openness. We are proceeding cautiously, with great concern for privacy, transparency, and risk mitigation. Our Clinical Advisory Board is meeting to discuss guidelines for future work, specifically regarding IRB approval.”

