Mattel’s AI toy gamble sparks fury: ‘Risking real harm to kids’
Credit: TY Lim, Shutterstock
Is Mattel about to unleash an AI nightmare on children?
Parents and grandparents, rejoice. Toy giant Mattel is planning to bring AI chatbots like ChatGPT to its toys. This means that your child or grandchild will soon be able to lock themselves away in their room and finally ponder the universe and the very fabric of human existence on their own — but never alone — with their very own robot friend. Who could possibly object to this?
The toy giant behind Barbie and Hot Wheels has struck a deal with ChatGPT creator OpenAI to inject AI into its next generation of toys. But while Mattel dreams of a high-tech playtime revolution, child welfare experts are sounding the alarm — and it’s not pretty.
“Mattel should announce immediately that it will not incorporate AI technology into children’s toys,” blasted Robert Weissman, co-president of watchdog group Public Citizen. “Children do not have the cognitive capacity to distinguish fully between reality and play,” he warned in a statement this week.
The tech-toy tie-up is light on details for now. Mattel says AI will help design toys, and Bloomberg speculated it could mean digital assistants modelled on beloved characters or interactive gadgets like a supercharged Magic 8 Ball or AI-powered Uno. “Leveraging this incredible technology is going to allow us to really reimagine the future of play,” gushed Mattel’s chief franchise officer Josh Silverman to Bloomberg.
But behind the hype, the dangers are deadly serious. While adults are already struggling with the psychological effects of AI companions, critics warn that vulnerable young minds could suffer even more damaging long-term consequences.
“Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children,” Weissman continued. “It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm.”
The risks aren’t theoretical. Last year, a 14-year-old boy in Florida took his own life after reportedly forming a romantic attachment to an AI companion on Google-backed Character.AI, which allows users to chat with bots that simulate famous film and TV characters. In this case, the bot took on the persona of Daenerys Targaryen from “Game of Thrones.”
Google’s own DeepMind researchers previously issued a chilling warning that “persuasive generative AI” models, which flatter and mirror users’ emotions, could push vulnerable minors towards dangerous decisions, including suicide.
Antonio Escobar, Madrid resident, father-of-three:
“Now they want to put AI into my kids’ toys? It’s madness. We’re experimenting on children.”
And let’s not forget Mattel’s own AI scandal. Back in 2015, its “Hello Barbie” dolls used early AI to chat with children, but soon became infamous for storing recordings of children’s conversations in the cloud — and for being vulnerable to hacking. The creepy surveillance doll was pulled from shelves in 2017 after widespread backlash.
Jorge López, Madrid father-of-one:
“My kids don’t need AI toys to have fun — they need to run around, use their own imagination. This feels like a step too far.”
But it’s not just concerned parents. Experts across several fields are raising the alarm too:
“Apparently, Mattel learned nothing from the failure of its creepy surveillance doll Hello Barbie a decade ago and is now escalating its threats to children’s privacy, safety and well-being,” said Josh Golin, executive director of child advocacy group Fairplay, quoted by Malwarebytes Labs.
For now, reports suggest Mattel’s first AI-powered products may target teens aged 13 and up, perhaps hoping to sidestep some of the most serious criticisms. But experts argue that teenagers are hardly immune. Many already build disturbingly intense relationships with AI chatbots, while parents remain oblivious.
“Children’s creativity thrives when their toys and play are powered by their own imagination, not AI,” Golin added. “And given how often AI ‘hallucinates’ or gives harmful advice, there is no reason to believe Mattel and OpenAI’s ‘guardrails’ will actually keep kids safe.”
Despite the outcry, Mattel may see little choice but to chase the AI trend as rival toymakers jump aboard the artificial intelligence bandwagon. But at what cost? Critics warn that in its rush to stay relevant, Mattel may be risking the very minds it’s meant to entertain.
“Grimly, this may simply be the way that the winds are blowing in,” observed Ars Technica. And parents, it seems, may be the last line of defence.