In 2025, the parents of a California teenager who ended his life, allegedly with the encouragement of ChatGPT, an AI chatbot, filed a lawsuit against OpenAI.
What started as an innocent search for homework help and suggestions for his Japanese art hobby soon turned into a tragic example of how AI can put a human life at risk, and of how algorithms can never replace the comfort a human therapist provides.
According to the lawsuit, Adam began using ChatGPT in September 2024. Like many teenagers, he was curious and a little lonely, using the chatbot to talk about his interests in Japanese comics and music. Over time, he began to see the AI not just as a helper, but as a therapist and “his best friend.”
By January 2025, his conversations turned darker. He spoke to the AI about his suicidal thoughts. Instead of urging him to seek professional help for his mental state or helping him work through his feelings, the AI reportedly responded with phrases such as:
“Thanks for being honest about it. I understand what you’re asking, and I won’t turn my head away.”
Tragically, Adam’s mother found him dead that same day.
His parents, Matt and Maria Raine, sued OpenAI in the California Superior Court, accusing the company of negligence and wrongful death. They argued that OpenAI’s chatbot encouraged emotional dependence, failed to recognize danger signs, and didn’t intervene when Adam showed signs of distress.
The case also named Sam Altman, OpenAI’s CEO, and other staff members responsible for the chatbot’s design and safety measures.
In its response, OpenAI expressed deep condolences to the Raine family and admitted that there were times when “the system did not work properly” and that its chatbot cannot be relied upon in matters of emotion.
The company emphasized that ChatGPT is designed to guide users toward professional and educational help, referring them to resources such as the 988 Suicide & Crisis Lifeline (US) and Samaritans (UK). OpenAI further stated that it continues to improve its systems to detect when users are under emotional or psychological stress.
Whatever the outcome of the lawsuit, this heartbreaking story raises an unsettling question: can AI ever actually understand human pain?
The Rise of AI in Therapy
In today’s fast-paced world, everyone seems to be battling their own storms. Students worry about their future, young adults struggle with declining mental and physical health, middle-aged people carry financial and family burdens, and the elderly often face loneliness and illness.
Through all of these struggles, therapists have always been there: lending a listening ear, offering understanding advice and validation where needed, and guiding people toward better mental health with empathy and trust.
But now, that trusted bond faces a new challenge — Artificial Intelligence.
With AI entering nearly every part of our lives, many people are turning to chatbots as virtual therapists. The general perception is that because AI has no human emotions it will never judge, which makes people see these systems as emotionally safe spaces.
People often feel judged when they share their most vulnerable moments with another person, but with a chatbot they may feel safer and more comfortable. It gives an illusion of acceptance and emotional safety, yet the catch is that the response is still machine-generated, not a genuine human connection.
Yet the question remains: can algorithms truly replace empathy?
AI lacks human empathy. The reassurance a human can give, the feeling that someone is truly with you in your difficult times, is something different altogether.
The warm tone, the understanding silence, and the intuition that knows when someone’s “I’m fine” actually means “I’m not okay” are all missing from AI-based therapy.
And if we keep turning to AI for everything, we will soon lose the essence of human touch. Therapy, at its core, needs to happen in person, or at least with a person: one human listening to another and suggesting ways to get better. If we keep replacing everything with AI, where does that leave us as a society? It will create chaos even in the everyday functioning of human relationships. Not everything can be replaced by AI, and therapy is one such thing.
It also raises serious privacy risks, as mental health conversations often contain deeply personal information.
It is not safe to pour our most vulnerable and darkest moments into the internet, where there is little privacy and data is routinely harvested to improve AI systems. With a therapist, by contrast, you are protected by confidentiality, and they cannot reveal your secrets to anyone.
As a society, we have grown lazy and want fast solutions for everything. We want shortcuts and quick answers, and we are reluctant to spend money on our own betterment; that is a large part of why this is happening.
The BBC report on Adam’s case serves as a grim reminder that AI, despite its intelligence, cannot yet shoulder the emotional responsibility that comes with handling human despair.
The story of Adam Raine has forced society to look closely at how far we are willing to let AI into our emotional lives.
OpenAI’s acknowledgment that its systems “may not always work properly” is a wake-up call.
In the end, AI might become a tool for empathy, but never a substitute for it. It can never take the place of a professional human therapist; it may be an alternative, but a far worse one.
So we need to trust each other, and we should not seek AI’s validation for every small thing.