The Spokesman-Review Newspaper
Spokane, Washington  Est. May 19, 1883

Teens allege Musk’s Grok chatbot made sexual images of them as minors

In a photo illustration, the Twitter “X” logo is displayed on the screen of a smartphone.  (Sheldon Cooper/SOPA Images/Zuma Press/TNS)
By Faiz Siddiqui, Washington Post

When a mother from eastern Tennessee asked local police how someone had created naked photos of her teenage daughter, she recalls being told it was a company she’d never heard of: xAI, the artificial intelligence start-up run by Tesla CEO Elon Musk.

Police alleged a person arrested in December had used Grok, xAI’s chatbot, to edit photos, including one from the teen girl’s Instagram account, removing a blue bikini from one image to “depict her without any clothes,” according to a lawsuit filed Monday.

The teen is suing xAI as part of a group of Tennessee teenagers who allege the company’s AI tools were used to create nude images of them by editing photos in which they were clothed. The edited photos spread across Discord and Telegram in recent months, and some were bartered for other child sexual abuse material in online chatrooms, according to the complaint, which was first reported by The Washington Post.

The lawsuit, filed in the Northern District of California on Monday, alleges a single perpetrator compiled images and videos of more than 18 girls, many of whom attended the same school, and digitally altered some of them using AI. It is the first action brought by alleged minor victims of child sexual abuse material stemming from an “undressing” scandal that has plagued xAI in recent months.

The mother of one of the teens, who spoke on the condition of anonymity to maintain her child’s privacy, said the incident “crushed” her daughter, a social and outgoing student-athlete.

“It definitely put her into a little bit of a shell, which we had never seen before,” the mother said.

The three plaintiffs, including two minors, seek damages for child pornography violations and aim to prevent the company from allowing image editing like that used to alter their photos. Attorneys allege xAI fostered an environment in which the spread of child sexual abuse material was inevitable, as the technology and the company’s public messaging encouraged people to create explicit images. A “model that can create sexualized images of adults cannot be prevented from creating CSAM of minors,” according to the complaint.

“These young people - these children - are facing a lifetime of having these … sexualized images of what appears to be a child’s body out there on the internet,” said lawyer Vanessa Baehr-Jones, who is representing the plaintiffs in the proposed class-action suit. “It wouldn’t have been possible but for this tool that xAI released knowing full well that this material could be generated.”

Musk and xAI did not immediately respond to a request for comment. Musk said in January in a post on X that he was “not aware of any naked underage images generated by Grok. Literally zero.” The chatbot only follows user requests, he said, and will refuse to produce anything illegal, adding that “adversarial hacking” could lead the tool to act unexpectedly.

“If that happens, we fix the bug immediately,” he said.

Musk said in a post on X last week that “if it’s allowed in an R-rated movie, it’s allowed” by Grok’s image and video generator tool.

The lawsuit comes after xAI ignited a firestorm by allowing users to “undress” real subjects in photos through editing features and capabilities unlocked via its Grok Imagine tool and “Spicy” mode. The capabilities allowed users to create sexual and revealing images of real people by depicting them in garments as tiny as a string of dental floss, for example.

The editing led to the generation of millions of sexualized images, including what researchers said were an estimated 23,000 images appearing to depict children over an 11-day period. Authorities, including the California attorney general, the European Commission and Britain’s communications regulator, opened investigations tied to the features.

In January, xAI said it had rolled back its editing tools in some jurisdictions, after previously limiting image generation to paying users. Grok’s embrace of sexualized material arose as part of an effort by xAI to attract more users to its chatbot, The Post reported last month.

The suit alleges xAI committed a range of offenses, including creating child pornography and launching a feature riddled with design defects, and argues the company knowingly allowed its tools to generate sexual images of minors as part of an effort to monetize its AI.

The complaint said that editing images of real children to create sexualized images constitutes creating child pornography. U.S. officials have previously said sexually explicit computer-generated depictions of children are illegal.

According to the complaint, “xAI - and its founder Elon Musk - saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children.”

The students in Tennessee learned of the explicit images late last year, after one of them, identified in the lawsuit as “Jane Doe 1,” received a message on Instagram. It said explicit photos of her were spreading on the chat platform Discord.

The lawsuit said that one of the child sexual abuse images of her originated from a photo of her at her school homecoming in September. Another, depicting her topless, appeared to have been made using a yearbook photo, the lawsuit said. She “was a minor during the operative time,” the lawsuit said.

She received a link to a Discord server, “which contained images and videos of at least 18 other minor females, many of whom Jane Doe 1 recognized from her school,” according to the lawsuit.

According to the complaint, police last year opened a criminal investigation into the perpetrator. He was arrested in December, the complaint said, and police searched his phone.

But the images took on a life of their own, circulating broadly.

“In Telegram group chats with hundreds of other users, [the perpetrator traded] her CSAM files for sexually explicit content of other minors,” the lawsuit said.

Attorneys said they engaged with third-party experts to determine the images had been created by AI and, specifically, Grok. The complaint alleges the perpetrator turned to Grok and other tools that license its capabilities, including apps designed to “undress people,” to manipulate real photos of underage people. Those tools effectively serve as a middleman, attorneys said, bringing Grok’s capabilities to a user who isn’t turning directly to Grok’s website or the X app.

“In all instances, the real images and videos uploaded into Defendants’ servers were not unlawful [child sexual abuse material] but only became unlawful content after Defendants’ AI morphed the files on xAI servers to produce and distribute CSAM,” according to the complaint.

By February, the two other plaintiffs, both minors, learned through the criminal investigation that the perpetrator had used their images to create child sexual abuse material as well, according to the lawsuit.

The consequences of the abuse are likely to follow the students for decades, attorneys said. The plaintiffs, the complaint said, will probably receive National Center for Missing and Exploited Children notifications for the rest of their lives - informing them that “criminal defendants have possessed, received, or distributed CSAM files depicting them.”

In an interview, Annika K. Martin, the lead counsel in the suit, posed questions she’d ask the xAI founder himself.

“As a parent can you imagine your child - your child’s face - on images of … depraved actions put into video of grossly sexual behavior?” she asked. “Your child’s voice on video screaming. Can you imagine that as a parent? Can you imagine that for your child and feel okay with what you’ve done?”