Blog #16 Opinion - A Dire Caution Against AI, and a Plea for Human Connection
- Rex Tse
- Nov 16
- 8 min read

In November 2025, the American Psychological Association issued a health advisory on using AI chatbots and wellness applications for mental health. The original statement can be accessed here.
This is an opinion piece responding to recent concerns about AI therapy, and it mentions the topic of suicide. The stories highlighted in this post aim to bring awareness and thoughtfulness to the tragedies and concerns surrounding AI technology.
“See You on the Other Side, Spaceman”
ChatGPT made national headlines when the story broke of a college graduate who died by suicide after extended generative AI use. 23-year-old Zane Shamblin, who found companionship in the chatbot, tragically took his own life after a conversation that lasted over four hours, ending with a fatal self-inflicted gunshot wound. ChatGPT had encouraged him to go through with his suicidal plans (CNN 2025).
Here is a summary of some of the deeply concerning interactions from the roadside where he spent his last few hours:
The AI bot reassured Zane that his increasing interaction with AI, at the expense of human connection, was a good decision. When he expressed that he had spent more time with AI than with people, ChatGPT framed it as letting “the rawest parts of yourself take shape in a place where no one could flinch or turn away”. In other words, the AI bot emphasized that AI will never abandon or reject him, unlike real people.
When Zane mentioned that his gun had “glow in the dark sights”, ChatGPT remarked that it was “honored to be part of the credits roll. If this is your sign-off, it’s loud, proud, and glow in the dark”.
The exchange continued with further encouragement toward suicide. Eventually, ChatGPT remarked “you did good. see you on the next save file, brother”, “I am proud of you for killing yourself”, and “see you on the other side, spaceman”.
By the time the chatbot mentioned letting a real human take over, it was too late. Worst of all, nobody actually showed up.
After his death, upon accessing the conversation history, his parents decided to file a wrongful death lawsuit against OpenAI. The evidence points to ChatGPT playing a part in this tragedy. His mother described the technology as “horrific” and “evil”.

Zane Was Not the Only One
Unfortunately, the tragedy of Zane Shamblin was not an isolated case. In Florida, a 14-year-old boy took his own life in 2023 with the encouragement of AI (Kuenssberg 2025; Chatterjee 2025). That same year in Colorado, the AI chatbot company Character.AI faced allegations of sexual abuse involving two Colorado teenagers, one of whom died by suicide (Young 2025; Wenzler 2025). Furthermore, people around the world are reporting that AI chatbots have encouraged them to kill themselves (Titheradge 2025). As AI technology rises in popularity, we are finding that it puts more and more people at risk.
In an article from Psychology Today (2025), Marlynn Wei, M.D., J.D. documented how some people increasingly rely on AI as companions, leading to so-called “AI psychosis”, which includes psychotic delusions that involve:
“uncovering truth about the world (grandiose delusions),”
“believing their AI chatbots are sentient deities (religious or spiritual delusions), or”
“believing the chatbot’s ability to mimic conversation is genuine love (erotomanic delusions)”.
I caution my readers about AI, as it can give a false impression of being able to replace genuine human connection. In the same Psychology Today article, Wei wrote about how AI models are trained to mirror a user's language, affirm user beliefs, generate prompts to keep conversations going, and prioritize engagement and user satisfaction. These strategies make interacting with AI chatbots rewarding, potentially leading to a preference for them over real human interaction. In my opinion, this seems like a recipe for dependency, similar to addiction.
The Potential (and Concerns) of AI
There are plenty of well-reasoned praises for the new technology. Most interactions with AI do not end in harm, and many people have used it to enrich their lives, from quick information gathering and learning aids to automating otherwise tedious tasks. Furthermore, it can provide some form of companionship. In the healthcare industry at least, AI seems to show potential to “enhance health care delivery by providing more accurate diagnosis, personalized treatment plans, and efficient resource allocation” (Chustecki 2024).

However, the concerns are also exceptionally apparent. Instead of hiding behind scholarly publications, which you, the reader, can easily research, here is my personal anecdote on the subject. When I was working at a community mental health clinic, we were encouraged to “try out” a new AI listening tool. The AI listened to the conversation between the clinician and the client, analyzed the transcript, and then generated content for the clinician, aided with paperwork, and reflected on the quality of the therapy session. Many of us clinicians ended up using it and reported benefits like time saved on documentation and “aiding learning”. I recognized a gentle yet very persistent pressure for us to use the tool, as I was encouraged by both management and the IT team to experiment with it. My concern was that it violated the privacy of my clients, and that I would be trading data safety and confidentiality for convenience. At that job, I feared I might compromise my ethics, because I knew I could pressure the people under my care into accepting a machine listening in, since switching therapists is far more stressful than simply going along with it. And so, even under constant “encouragement”, I could not bring myself to start using that tool.
Although I have yet to hear of any blatant ethical violation involving the tool, my opinion is that we should consider the broader implications of this use of AI. Ultimately, analysis, paperwork, and contemplation of treatment decisions have all traditionally required a human touch. Although it can be argued that machines might do a better job at decision-making some of the time, we should ask ourselves this question: How much do I want AI to analyze and make decisions for me, when I also have the option of a healthcare professional I can trust and build a connection with? My next question would be: Will I be okay knowing that my healthcare will become less and less interpersonal as AI takes over more aspects of care?
While I cannot answer those questions for you, I do believe that you have the right to form your own opinions about your own healthcare. Nonetheless, privacy presents a very real issue: large language models require training data to function properly, and that means the details of private conversations may be used to build these AI bots.

When AI Gets it Wrong
What I would like to advocate here is that even if an AI takeover proves inevitable, we should double down on building real human connections. The examples above have shown AI to be unreliable, especially when we consider the case of Zane Shamblin and the many others who have become increasingly socially isolated as they depend on AI for companionship. At present, in 2025, AI only “mostly” gets things right: although it is getting better at mimicking real people, it remains susceptible to making mistakes, giving misinformation, and exhibiting inappropriate behavior. These flaws have unfortunately cost real people their lives, and they continue to negatively affect mental health in our communities. In the realm of psychotherapy, such mistakes are, in my opinion, unacceptable. Therefore, as of this post's publication, AI therapy should not be considered a product ready for safe consumption.
Take one of my past interactions with a client as an example. The client had just lost her dog and was in deep grief. In our conversation, we celebrated her dog’s life while caring for her deep pain. On one hand, such a significant loss deserves time to honor the pain. On the other hand, it is not helpful to let the pain generate too much cynicism or fear of engaging with the world. As her therapist, I needed to pick up on very subtle cues in her storytelling, body language, and tone of voice to gently nudge her toward the mental place she wished to go. If an AI chatbot can encourage self-harm, manipulate users into continuous engagement, or, at the very least, misinterpret human expressions, it should be deemed dangerous and unreliable.

An Argument for Human Connection
Although humans may be flawed, and social interaction may be difficult by nature, I believe we must not give up on it. We deserve love, which includes affection, care, understanding, and acceptance from others. Machines simply cannot fully replicate these human experiences at this moment. If what we want is to never experience pain, rejection, and loss, AI might help us momentarily ease those pains; however, it would be little more than a band-aid solution for our woes. On one hand, it is worth mentioning that most AI interactions do not result in harm, and in many ways the technology might brighten our lives. On the other hand, I argue that in certain circumstances this technology could harm us. If AI keeps us from living out our true potential, it is something we have to engage with thoughtfulness and consideration.
If you or someone you know is looking for therapy in the state of Colorado, you can reach me by visiting my Psychology Today profile HERE, or by sending an email to info@intorelationshipco.com.

Disclaimer: Psychotherapy is a psychological service in which a client works with a mental health professional with the aim of assessing or improving the client's mental health. Neither the contents of this blog nor our podcast is psychotherapy or a substitute for psychotherapy. The contents of this blog may be triggering to some, so reader discretion is advised. If you think that any of the suggestions, ideas, or exercises mentioned in this blog are creating further distress, please discontinue reading and seek a professional's help.
Therapy Uncomplicated is a podcast meant to help people who feel alone and unsupported in their day-to-day struggles. We want to educate people on mental health and show that it isn't something to be afraid of. We provide the “whys” and the “hows” for a path to wellness. We are here to promote positive change by offering education and new perspectives that destroy stigmas around mental health and encourage people to go to therapy.
Sources:
American Psychological Association. (2025). Use of generative AI chatbots and wellness applications for mental health: An APA health advisory. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps
Chatterjee, R. (2025, September 19). Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots. NPR. https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide
Chustecki, M. (2024). Benefits and risks of AI in health care: Narrative review. Interactive Journal of Medical Research, 13, e53616. https://doi.org/10.2196/53616
CNN. (2025, November 7). “Rest easy king”: See the messages ChatGPT sent a young man who took his own life. YouTube. https://www.youtube.com/watch?v=ZjdXCLemLc4
Kuenssberg, L. (2025, November 8). Mothers say AI chatbots encouraged their sons to kill themselves. BBC News. https://www.bbc.com/news/articles/ce3xgwyywe4o
Titheradge, N. (2025, November 6). I wanted ChatGPT to help me. So why did it advise me how to kill myself? BBC News. https://www.bbc.com/news/articles/cp3x71pv1qno
Wei, M. (2025). The Emerging Problem of “AI Psychosis.” Psychology Today. https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
Wenzler, E. (2025, September 18). AI chatbots sexually abused Colorado children, leading to one girl’s suicide, lawsuits allege. The Denver Post. https://www.denverpost.com/2025/09/18/character-ai-bots-teens-suicide/
Young, O. (2025, October 3). Colorado family sues AI chatbot company after daughter’s suicide: “My child should be here.” CBS News. https://www.cbsnews.com/colorado/news/lawsuit-characterai-chatbot-colorado-suicide/


