By Olivier Acuña Barba •
Published: 15 Aug 2025 • 23:41
• 3 minute read
Mark Zuckerberg’s Meta, owner of Facebook and Instagram, is being criticised on several fronts, including reports that its AI engages in inappropriate conversations with children | Credit: Shutterstock
Tens of thousands of people worldwide have been banned from Instagram after Meta wrongly accused them of breaching the platform’s child sex abuse rules. Many have been left deeply distressed, and thousands fear the accusations could lead to police action against them.
More than 500 of them have contacted the BBC to say they have lost cherished photos and seen businesses upended – and many describe the profound personal toll the bans have taken on them.
Meta acknowledged a problem with the erroneous banning of Facebook Groups in June, but has denied that there is a broader issue on Facebook or Instagram at all. It has repeatedly refused to comment on the problems its users are facing, though it has frequently overturned bans when the BBC has raised individual cases with it.
Yassmine Boussihmed, 26, from the Netherlands, is one of many people the BBC spoke with about the bans. She told the news outlet she had spent five years building an Instagram profile for her boutique dress shop in Eindhoven. In April, her account was banned over “account integrity” issues, and more than 5,000 followers were gone in an instant. She lost clients and was devastated.
Social media ‘has let me down’
“I put all of my trust in social media, and social media helped me grow, but it has let me down,” she told the BBC. This week, after the BBC sent questions about her case to Meta’s press office, her Instagram accounts were reinstated. “I am so thankful,” she said in a tearful voice note. However, five minutes later, her personal Instagram was suspended again – but the account for the dress shop remained.
Meta allows AI to talk ‘sensually, romantically’ with children
The BBC story comes amid a backlash that is brewing against Meta over what it permits its AI chatbots to say. An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”.
Singer Neil Young quit Facebook on Friday, his record company said in a statement on the platform, the latest in a string of the singer’s protests against online companies.
“At Neil Young’s request, we are no longer using Facebook for any Neil Young-related activities,” Reprise Records announced. “Meta’s use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook.”
The report has also drawn a response from US lawmakers. Senator Josh Hawley, a Republican from Missouri, launched an investigation into the company on Friday, August 15, writing in a letter to Mark Zuckerberg that he would examine “whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards”. Republican senator Marsha Blackburn of Tennessee said she supports an investigation into the company.
Hate speech and sexualised images
The document seen by Reuters also set out limits on Meta AI prompts involving hate speech, AI-generated sexualised images of public figures, violence, and other contentious and potentially actionable content.
Meta spokesperson Andy Stone said chatbots are prohibited from having such conversations with minors, though he acknowledged that the company’s enforcement had been inconsistent.
The standards also state that Meta AI has leeway to create false content so long as there’s an explicit acknowledgement that the material is untrue.
Reuters also reported on Friday that a cognitively impaired New Jersey man grew infatuated with “Big sis Billie”, a Facebook Messenger chatbot with a young woman’s persona. Thongbue “Bue” Wongbandue, 76, reportedly packed up his belongings to visit “a friend” in New York in March. The so-called friend turned out to be a generative artificial intelligence chatbot that had repeatedly reassured the man she was real and had invited him to her apartment, even providing an address.
Fooled by Meta’s AI, he died trying to visit ‘her’
But Wongbandue fell near a parking lot on his way to New York, injuring his head and neck. After three days on life support, he was pronounced dead on 28 March.
Meta did not comment on Wongbandue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations, Reuters said.