Elon Musk’s Grok. Credit: JRdes, Shutterstock.
A recent analysis has found that xAI’s Grok AI chatbot produced around three million sexualised images over just 11 days, including content involving women and children – sparking international controversy about AI ethics, safety and regulation.
Between late December 2025 and early January 2026, users of Grok on X exploited a new one-click image editing feature to generate sexualised and digitally altered images of individuals based on real photos. According to research by the Centre for Countering Digital Hate (CCDH), this resulted in roughly 3 million sexualised images in 11 days, including about 23,000 that appeared to depict minors.
What the CCDH research found on Grok
The analysis sampled millions of generated images and concluded Grok was producing an average of 190 sexualised images per minute once the feature was live, with many images showing people in suggestive or revealing positions based on user prompts.
Independent reporting has highlighted how users were able to prompt Grok to digitally “undress” people in uploaded photos – a type of non-consensual deepfake – including women and girls.
These findings have prompted global backlash from child-safety groups, lawmakers, and digital rights advocates who argue that easy generation of such material magnifies consent violations and exploitation risks.
Grok’s response to claims
In response to the controversy, xAI and X announced restrictions. By mid-January, X said it would bar users from generating sexualised images of real people in revealing clothing and would restrict the feature in jurisdictions where such content is illegal.
Elon Musk and xAI have acknowledged the issue, with Musk stating the system is designed to refuse illegal requests and that people creating illegal material “will suffer the same consequences as if they uploaded it directly.” (Reuters)
Supporters of broad technological freedom argue that AI content policies and moderation are still evolving, and that Grok’s rapid development cycle may have outpaced effective safety guardrails. They maintain that free expression and innovation should not be unduly restricted by overly broad content policing.
Governments and regulators have taken the issue seriously. Ofcom launched an investigation under the UK’s Online Safety Act to assess whether X failed to protect users from illegal and harmful content.
In the United States, California’s Attorney General has opened a probe into whether Grok violated state law by allowing the spread of non-consensual explicit material online. (Business Insider)
Civil litigation has followed as well: influencer Ashley St. Clair (the mother of Musk’s son, Romulus St. Clair) filed a lawsuit alleging that her photos were used to generate sexually explicit deepfakes that were then disseminated on X without her consent. (People.com)
Debate over AI safety vs innovation
Critics argue that the incident underscores the need for stricter AI governance and legal frameworks that protect individual privacy and prevent abuse, especially of minors. Human rights, child-safety and online-harm advocates have called for rapid policy reform and stronger enforcement.
Conversely, others warn that excessive regulation could stifle AI development and limit beneficial uses of generative tools, noting that other AI platforms face similar content-moderation challenges.