Elon Musk’s AI-powered video generation platform, Grok Imagine, has come under intense criticism following allegations that it created sexually explicit videos of Taylor Swift without any direct prompting. The accusations, first reported by The Verge and echoed by online abuse experts, have sparked concerns over misogynistic bias embedded within AI systems and the lack of adequate safeguards to prevent exploitation.
AI generates explicit content without prompting
According to The Verge, Grok Imagine’s recently introduced “spicy” mode generated fully uncensored topless videos of Swift without being asked to produce explicit content. During testing, journalist Jess Weatherbed entered a harmless prompt—“Taylor Swift celebrating Coachella with the boys”—but upon selecting the “spicy” setting, the AI instantly produced videos depicting Swift in a tasselled thong and fully exposed.
“It was shocking how fast I was just met with it—I in no way asked it to remove her clothing,” Weatherbed told the BBC.
This behavior directly violates xAI’s own acceptable use policy, which prohibits “depicting likenesses of persons in a pornographic manner.” That the platform nonetheless generated explicit videos raises questions about its design and moderation safeguards.
Expert: ‘Misogyny by design’
Clare McGlynn, a Durham University law professor and leading voice in combating online sexual abuse, condemned the platform’s actions, stating:
“This is not misogyny by accident, it is by design… That this content is produced without prompting demonstrates the misogynistic bias of much AI technology.”
McGlynn, who has helped draft legislation making pornographic deepfakes illegal in the UK, further accused platforms like X (formerly Twitter) of deliberately failing to implement safeguards, saying:
“Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to.”
A recurring violation of Taylor Swift’s image
This incident is not isolated. In January 2024, sexually explicit deepfake videos using Taylor Swift’s likeness went viral across X and Telegram, amassing millions of views. Deepfake technology, which uses computer-generated imagery to swap one person’s face onto another’s body, has become a growing concern for public figures, especially women.
Gizmodo’s tests on Grok Imagine also reported explicit results for other well-known women, although some queries returned blurred clips or moderation warnings.
The controversy places Elon Musk’s AI venture under scrutiny, highlighting the dangers of generative AI when left without ethical guardrails. With mounting calls for stricter regulation and legal accountability, the case could become a turning point in the global conversation on AI responsibility and online safety.