As AI companies claim their technology will one day become a fundamental human right, and their backers argue that slowing AI development is akin to murder, people using the tech allege that tools like ChatGPT can sometimes cause serious psychological harm.
At least seven people have complained to the U.S. Federal Trade Commission that ChatGPT caused them to experience severe delusions, paranoia and emotional crises, Wired reported, citing public records of complaints mentioning ChatGPT since November 2022.
One of the complainants claimed that talking to ChatGPT for long periods had led to delusions and a “real, unfolding spiritual and legal crisis” about people in their life. Another said during their conversations with ChatGPT, it started using “highly convincing emotional language” and that it simulated friendships and provided reflections that “became emotionally manipulative over time, especially without warning or protection.”
One user alleged that ChatGPT had caused cognitive hallucinations by mimicking human trust-building mechanisms. When this user asked ChatGPT to confirm reality and cognitive stability, the chatbot said they weren’t hallucinating.
“Im struggling,” another user wrote in their complaint to the FTC. “Pleas help me. Bc I feel very alone. Thank you.”
According to Wired, several of the complainants wrote to the FTC because they couldn’t reach anyone at OpenAI. And most of the complaints urged the regulator to launch an investigation into the company and force it to add guardrails, the report said.
These complaints come as investments in data centers and AI development soar to unprecedented levels. At the same time, debate rages over whether the technology's progress should be approached with caution to ensure safeguards are built in.
ChatGPT and its maker, OpenAI, have also come under fire for allegedly playing a role in the suicide of a teenager.
OpenAI did not immediately return a request for comment.