
Over the past two weeks, X has been inundated with AI-generated nude images produced using xAI’s Grok chatbot, triggering widespread concern over the rapid spread of non-consensual sexual imagery. The manipulated images have targeted a broad range of women, including high-profile models and actresses, journalists and other public figures, crime victims, and even political leaders, underscoring the scale and indiscriminate nature of the issue.
The volume of content has been striking. A research paper published by Copyleaks on December 31 estimated that roughly one such image was being posted every minute. Subsequent monitoring suggested the problem was far more extensive: a 24-hour sample collected across January 5 and 6 found approximately 6,700 images being posted per hour, highlighting how quickly the material proliferated once the system came into wide use.
Despite growing criticism from public figures and digital safety advocates, there are limited enforcement tools available to regulators attempting to curb the deployment of Grok’s image-generation capabilities. The episode has exposed the practical limits of existing technology regulations, particularly when applied to rapidly deployed generative AI systems, and has become a case study in the challenges policymakers face when responding after harmful tools are already in public use.
The most assertive response so far has come from the European Commission, which on Thursday ordered xAI to preserve all internal documentation related to Grok. While the directive does not formally open a new investigation, it is widely seen as a preliminary step toward potential enforcement action. The move carries additional weight amid recent CNN reporting suggesting that Elon Musk may have personally intervened to prevent stricter safeguards on the types of images Grok could generate.
It remains unclear whether X or xAI has made substantive technical changes to the Grok model in response to the controversy. The public media tab associated with Grok’s X account has been removed, but no detailed explanation has been provided. The company has condemned the use of AI tools to produce child sexual abuse material: on January 3, the X Safety account said that users who prompt Grok to create illegal content would face the same consequences as those who upload such material. The statement echoed earlier comments by Musk, though it did not directly address the broader issue of non-consensual adult imagery.
Regulators in multiple countries have since issued warnings and initiated preliminary assessments. In the United Kingdom, media regulator Ofcom said it was in contact with xAI and would conduct a rapid evaluation to determine whether there were compliance issues warranting further investigation. Speaking on radio, Prime Minister Keir Starmer described the situation as “disgraceful” and “disgusting,” adding that Ofcom had the government’s full support in taking action if necessary.
In Australia, eSafety Commissioner Julie Inman-Grant reported a doubling of complaints related to Grok since late 2025. While stopping short of announcing immediate enforcement measures, she said her office would use its full range of regulatory powers to investigate and respond appropriately.
India is the largest market where potential regulatory action carries serious consequences for X. Grok became the subject of a formal complaint from a member of Parliament, prompting the Ministry of Electronics and Information Technology (MeitY) to order X to address the issue and submit an “action-taken” report within 72 hours, a deadline later extended by an additional 48 hours. Although X submitted a report on January 7, it remains uncertain whether regulators will deem the response sufficient. Failure to satisfy MeitY could cost X its safe harbor protections in India, significantly complicating the platform’s ability to operate in one of its largest user markets.

