© 2024 All Rights reserved | Powered by Viraltrendingcontent
Business

AI chatbot prompted a 14-year-old’s suicide, mom’s lawsuit alleges: ‘We are behind the eight ball.’ Here’s how to keep kids safe from new tech

By Viral Trending Content 11 Min Read

Contents
  • What are AI companions and why do kids use them?
  • Who’s at risk and what are the concerns?
  • How to spot red flags
  • How to keep your child safe

The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide—something she claims was driven by his relationship with an AI bot. 

“There is a platform out there that you might not have heard about, but you need to know about it because, in my opinion, we are behind the eight ball here. A child is gone. My child is gone,” Megan Garcia, the boy’s mother, told CNN on Wednesday.

The 93-page wrongful-death lawsuit was filed last week in a U.S. District Court in Orlando against Character.AI, its founders, and Google. It noted, “Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers.”

Tech Justice Law Project director Meetali Jain, who is representing Garcia, said in a press release about the case: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids. But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI released a statement via X, noting, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/….”

In the suit, Garcia alleges that Sewell, who took his life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over his real-life connections. She alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”

This week, Garcia told CNN that she wants parents “to understand that this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product that is designed to keep our kids addicted and to manipulate them.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for his article that told her story. Garcia did not learn about the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often getting sucked into his phone, she asked what he was doing and who he was talking to. He explained it was “‘just an AI bot…not a person,’” she recalled, adding, “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the potential emotional power of a bot—and she is far from alone. 

“This is on nobody’s radar,” says Robbie Torney, program manager for AI at Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly grappling to keep up with confusing new technology and to create boundaries for their kids’ safety.

But AI companions, Torney stresses, differ from, say, a service-desk chatbot that you use when you’re trying to get help from a bank. “They’re designed to do tasks or respond to requests,” he explains of those bots. “Something like Character.AI is what we call a companion, and is designed to try to form a relationship, or to simulate a relationship, with a user. And that’s a very different use case that I think we need parents to be aware of.” That’s apparent in Garcia’s lawsuit, which includes chillingly flirty, sexual, realistic text exchanges between her son and the bot.

Sounding the alarm over AI companions is especially important for parents of teens, Torney says, as teens—and male teens in particular—are especially susceptible to overreliance on technology.

Below, what parents need to know.  

What are AI companions and why do kids use them?

According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in conjunction with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy, and agree more readily with the user than typical AI chatbots,” according to the guide.

Popular platforms include Character.AI, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.

Kids are drawn to them for an array of reasons, from non-judgmental listening and round-the-clock availability to emotional support and escape from real-world social pressures. 

Who’s at risk and what are the concerns?

Those most at risk, warns Common Sense Media, are teenagers—especially those with “depression, anxiety, social challenges, or isolation”—as well as males, young people going through big life changes, and anyone lacking support systems in the real world. 

That last point has been particularly troubling to Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI is posing a challenge to the human essence. “Our research uncovers a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring in human-AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation with PhD student Angelina Ying Chen, “Users may become deeply emotionally invested if they believe their AI companion truly understands them.”

Another study, this one out of the University of Cambridge and focusing on kids, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

Because of that, Common Sense Media highlights a list of potential risks, including that the companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, bring the potential for inappropriate sexual content, could become addictive, and tend to agree with users—a frightening reality for those experiencing “suicidality, psychosis, or mania.” 

How to spot red flags 

Parents should look for the following warning signs, according to the guide:

  • Preferring AI companion interaction to real friendships
  • Spending hours alone talking to the companion
  • Emotional distress when unable to access the companion
  • Sharing deeply personal information or secrets
  • Developing romantic feelings for the AI companion
  • Declining grades or school participation
  • Withdrawal from social/family activities and friendships
  • Loss of interest in previous hobbies
  • Changes in sleep patterns
  • Discussing problems exclusively with the AI companion

Consider getting professional help for your child, stresses Common Sense Media, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about AI companion use, showing major changes in behavior or mood, or expressing thoughts of self-harm. 

How to keep your child safe

  • Set boundaries: Set specific times for AI companion use and don’t allow unsupervised or unlimited access. 
  • Spend time offline: Encourage real-world friendships and activities.
  • Check in regularly: Monitor the content from the chatbot, as well as your child’s level of emotional attachment.
  • Talk about it: Keep communication open and judgment-free about experiences with AI, while keeping an eye out for red flags.

“If parents hear their kids saying, ‘Hey, I’m talking to a chatbot AI,’ that’s really an opportunity to lean in and take that information—and not think, ‘Oh, okay, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to find out more, assess the situation, and stay alert. “Try to listen from a place of compassion and empathy and not to think that just because it’s not a person that it’s safer,” he says, “or that you don’t need to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.


