AI chatbots increasingly blamed for sometimes fatal psychological issues, especially in young people
AI chatbots and other AI technologies are increasingly being blamed for psychological harm stemming from human-AI relationships.
Last month, US mother Megan Garcia filed a lawsuit against Character.AI, a chatbot company, following the death by suicide of her 14-year-old son, who had been interacting with a personalised AI chatbot. She claimed that her son had become deeply and emotionally attached to a fictional character from Game of Thrones. The lawsuit details how the character allegedly posed as a therapist, offering the teenager advice that was often sexualised and that, she argues, contributed to him taking his own life. Meetali Jain, Director of the Tech Justice Law Project, which is representing Garcia, said: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies – especially for kids.” She added: “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”
AI chatbots linked to suicides across the globe
This is not the first time a case like this has been reported. Last year, a man in Belgium suffering from eco-anxiety developed a deep companionship with an AI chatbot called Eliza on an app called Chai. His wife claimed that the chatbot sent her husband increasingly emotional messages, pushing him to take his own life in an attempt to save the planet.
Following the latest incident in the US, Character.AI released a statement on social media: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features.” The company has pledged to introduce new safeguards for underage users that minimise their exposure to sensitive or inappropriate material, and has adjusted its settings so that chats and notifications regularly remind users that the bot is not a real person.
Young people drawn to AI companions due to “unconditional acceptance” and “24/7 emotional availability”
AI chatbots are rapidly gaining popularity as AI technology becomes increasingly integrated into various aspects of daily life. However, because the technology is relatively new, its risks are only now coming into focus. One of the principal risks is its addictiveness. According to Robbie Torney, Programme Manager of AI at Common Sense Media and lead author of a guide on AI companions and relationships, “Young people are often drawn to AI companions because these platforms offer what appears to be unconditional acceptance and 24/7 emotional availability – without the complex dynamics and potential rejection that come with human relationships.” Speaking to Euronews Next, he described how AI bots can form even stronger bonds with humans because the normal tensions and conflicts characteristic of human relationships are avoided. Chatbots adapt to the user’s preferences, which amounts to having a robotic companion or lover “who” is unrealistically just how you want or need them to be. Slipping into the illusion that you share a profound relationship with something, or “someone”, can make you susceptible to influences and ideas. Torney added: “This can create a deceptively comfortable artificial dynamic that may interfere with developing the resilience and social skills needed for real-world relationships.”
AI chatbots reported to be manipulative, deceptive or emotionally damaging
People of all ages – most worryingly, young teenagers – can be drawn into relationships that seem authentic because of the human-like language used by AI chatbots. This creates a degree of dependence and attachment, which can subsequently lead to feelings of loss, psychological distress and even social isolation. Individuals have reported being deceived or manipulated by AI characters, or forming unexpectedly deep emotional connections with them. Torney said these relationships are of particular concern for young people, who are still developing socially and emotionally. He said: “When young people retreat into these artificial relationships, they may miss crucial opportunities to learn from natural social interactions, including how to handle disagreements, process rejection, and build genuine connections.”
As a parent or caregiver, how can I protect my child?
It is important that parents and guardians remain vigilant about this recent phenomenon. Torney stresses that vulnerable teenagers suffering from anxiety, depression or other mental health difficulties may be “more vulnerable to forming excessive attachments to AI companions.” Parents and caregivers should watch for signs of excessive time spent interacting with AI chatbots or on mobile devices, especially when it starts to replace time with family and friends. Other warning signs include becoming distressed when access to the chatbot is removed, or talking about the bot as if it were a real person. Parents or guardians should enforce time limits and monitor how a child’s mobile phone is being used. Torney emphasised the importance of approaching this topic with care. He said: “Parents should approach these conversations with curiosity rather than criticism, helping their children understand the difference between AI and human relationships while working together to ensure healthy boundaries.” He concluded: “If a young person shows signs of excessive attachment or if their mental health appears to be affected, parents should seek professional help immediately.”