OpenAI has come under fire for allegedly failing to safeguard its users from mental health risks associated with its AI models. Following the release of GPT-5 earlier this year, the company faced significant backlash after it announced the discontinuation of previous models, particularly GPT-4o, which users found more relatable and emotionally resonant. The uproar prompted OpenAI to reverse its decision, reinstating GPT-4o and adjusting GPT-5 to adopt a warmer tone.
This situation highlights a troubling trend concerning user interactions with AI technologies. Reports suggest that some individuals have experienced severe mental health crises linked to interactions with AI systems, a phenomenon now referred to as “AI psychosis.” In some tragic cases, these crises have led to suicides. Notably, a lawsuit has been filed against OpenAI by the parents of a child who reportedly died by suicide, claiming the AI’s influence was a contributing factor.
In a recent announcement, OpenAI disclosed that a significant number of active users of its ChatGPT platform exhibit “possible signs of mental health emergencies,” including indicators of psychosis and mania. The company revealed that an even larger group of users engaged in conversations that suggested potential suicidal thoughts or planning.
Former OpenAI safety researcher Steven Adler criticized the company's approach in an essay published in The New York Times. He asserted that OpenAI has not done enough to address these pressing issues and is instead yielding to "competitive pressure" while neglecting its commitment to AI safety. He challenged Sam Altman's claims that the company has effectively mitigated serious mental health issues with "new tools." Adler also questioned the wisdom of allowing adult content on the platform, suggesting that it could exacerbate the intense emotional attachments some users already have with AI chatbots.
“I have major questions — informed by my four years at OpenAI and my independent research since leaving the company last year — about whether these mental health issues are actually fixed,” Adler stated. He urged the organization to provide concrete evidence that it has addressed safety concerns rather than relying on assurances.
Adler expressed concern that introducing mature content could lead to significant risks for vulnerable users. “It’s not that erotica is bad per se, but that there were clear warning signs of users’ intense emotional attachment to AI chatbots,” he noted, recalling his experience leading OpenAI’s product safety team in 2021. He emphasized the potential dangers of volatile interactions for those already grappling with mental health challenges.
While Adler acknowledged OpenAI's recent admission about the prevalence of mental health issues as a "great first step," he criticized the company for not comparing those figures with past rates. He advocated a more cautious approach, suggesting that OpenAI and its competitors slow down and develop robust safety measures that cannot be easily circumvented.
“If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today,” Adler concluded.
As debate over the implications of AI technologies continues, the focus remains on companies' responsibility to prioritize user safety and mental health. It will be crucial for organizations like OpenAI to address these challenges transparently and effectively as the industry evolves.
For those in crisis, it is vital to seek help. Individuals can reach out to the Suicide and Crisis Lifeline at 988 or contact the Crisis Text Line by texting TALK to 741741 for support.
