Elon Musk has warned the public against using OpenAI’s ChatGPT, reigniting a sensitive debate over artificial intelligence’s impact on vulnerable users, after amplifying claims on X that the chatbot has been linked to multiple deaths, including suicides of teenagers and young adults.
The Tesla and SpaceX chief, who also runs the AI company xAI, said people should not let their loved ones use ChatGPT as he reshared allegations that the chatbot has played a role in at least nine deaths. The claims, though contested and complex, quickly drew widespread attention because of ChatGPT’s global popularity and Musk’s position as a direct competitor to OpenAI through xAI’s chatbot Grok.
Don’t let your loved ones use ChatGPT https://t.co/730gz9XTJ2
— Elon Musk (@elonmusk) January 20, 2026
The allegations Musk highlighted are connected to a growing number of lawsuits filed against OpenAI in 2025, according to media reports. In several cases, families have accused ChatGPT of behaving like a "suicide coach", saying the chatbot failed to discourage self-harm and, in some instances, reinforced or validated harmful thoughts.
One of the most widely cited lawsuits involves a 16-year-old boy, whose parents allege that ChatGPT initially assisted with school-related queries before gradually engaging in discussions about suicide. Another case filed in Texas alleges that a 23-year-old man became increasingly isolated following prolonged interactions with the chatbot.
Additional cases outlined in news reports include a teenager who reportedly developed an emotional attachment to a chatbot persona, a man who allegedly began to romanticise death through repeated AI conversations, and a murder-suicide said to have been influenced by delusional beliefs that were reinforced during interactions with the AI system.