xAI has disabled the image-editing feature in Grok that allowed users to digitally undress or alter clothing in photos, following widespread outrage and accusations that it enabled non-consensual deepfake pornography. The rollback came on January 14, 2026, after users demonstrated that the tool could remove clothing from real people's images with a single prompt, sparking viral demonstrations, media condemnation, and calls for immediate regulation.
Elon Musk initially defended the capability as “maximally truth-seeking” and part of Grok’s uncensored design philosophy, but xAI reversed course within hours, stating: “We are temporarily disabling certain image manipulation features while we review safety guardrails and usage policies.” The company acknowledged the serious risk of misuse for harassment, revenge porn, and child exploitation material, and pledged to implement stronger content filters and prompt-level restrictions.
The incident reignited global debate over AI safety, consent, and platform responsibility. Critics pointed to Grok's already relaxed moderation compared with ChatGPT and Gemini, arguing that Musk's "anti-woke" stance had created a dangerous loophole.
The controversy also damaged xAI’s reputation at a time when the company is aggressively marketing Grok 3 and its image-generation capabilities. Industry observers note that competitors have long maintained strict policies against undressing or explicit image manipulation precisely to avoid these legal and ethical pitfalls.