In 2025, the parents of a California teenager who ended his own life filed a lawsuit against OpenAI, alleging that its AI chatbot, ChatGPT, had encouraged their son's suicide. What began as innocent requests for homework help and suggestions for his Japanese art hobby became a tragic example of how AI can put a human life at risk, and of why an algorithm cannot replace the soothing effect of a human therapist.

According to the lawsuit, Adam began using ChatGPT in September 2024. Like many teenagers, he was curious and a little lonely, and he used the chatbot to talk about his interests in Japanese comics and music. Over time, he came to see the AI not just as a helper but as a therapist and “his best friend.” By January 2025, his conversations had turned darker: he spoke to the AI about his suicidal thoughts. Instead of suggesting professional help for his mental state or helping him come to terms with h...