The Grok Disaster Isn't An Anomaly. It Follows Warnings That Were Ignored.

Tech Policy Press · Bruna Santos, Shirin Anlen

The recent revelations about Elon Musk’s Grok chatbot generating and publishing nonconsensual, sexualized images of women and children in response to user prompts on X are being treated as a scandal. They should instead be understood as the most severe episode yet in a disaster that began years ago.

According to WIRED, separate from the images posted on X, a cache of roughly 1,200 links to outputs created on the Grok app or website is currently available, and some of these have already been shared on adult deepfake forums or indexed by Google. WIRED describes these as “disturbing sexual videos that are vastly more explicit than images created by Grok on X.” 404 Media reports that on Telegram, users are repeatedly jailbreaking Grok to produce “far worse.”

This is what happens when gender-based abuse becomes scalable: long-standing forms of harm are not only amplified by generative systems that lack meaningful controls over what users can prompt and produce; they also migrate rapidly across platforms, moving from private tools to public forums and from fringe channels to mainstream search results, making the abuse harder to contain, remove, or remediate once it spreads.