Elon Musk has backed down in a row over his Grok AI tool, which enabled users to create sexualised digital nude images of people.
The X platform, which hosts Grok, said it had imposed restrictions on “editing of images of real people in revealing clothing such as bikinis”.
It added: “This restriction applies to all users, including paid subscribers.”
X’s safety team said that there will be “zero tolerance” for child sexual exploitation and high-priority violations will be removed immediately. “Clear rules … Strong enforcement … No exceptions,” it added.
The announcement came hours after the state of California launched an investigation into sexualised images of children created by the AI tool.
Musk has insisted Grok does not generate illegal images. The billionaire said that with adult settings enabled, Grok allows “upper body nudity of imaginary adult humans (not real ones)” consistent with what can be seen in R-rated films.
“That is the de facto standard in America. This will vary in other regions according to the laws on a country by country basis,” he added.
On Wednesday, Sir Keir Starmer said X was now prepared to “fully comply” with laws on using artificial intelligence to generate degrading or pornographic images.
The prime minister told MPs that he had been told the company was “acting to ensure full compliance with UK law” but said he was prepared to legislate if X’s actions did not go far enough.
“The actions of Grok and X are disgusting and shameful. And frankly, the decision to then turn this into a premium service is horrific, and we’re absolutely determined to take action here,” he added.
Shortly after, X announced updates to its global safety approach, saying that in countries where editing images of people to put them in revealing clothing was illegal — including in the UK if the planned law goes ahead — the capability would be geo-blocked across all Grok products.
Jonathan Lewis, the UK managing director of X, said in a separate statement: “The X platform has been restricted to no longer allow the editing of images of real people in revealing clothing. So, for example, the issue of some users choosing to put people in bikinis.
“On top of this, image creation and the ability to edit images by the @Grok account on X is now only available to paid subscribers. But just to be clear … all this is doing is adding an extra layer of protection by linking a feature to identifiable paid subscribers who also can be held accountable.”
Last week X said it would restrict image editing to paid subscribers only. New guidelines suggest paid users will still be able to edit photographs, but not to put real people into revealing clothes.
Ofcom has begun an investigation into X for potentially breaching its obligations under the Online Safety Act. The regulator has the power to order the site to take specific actions, fine it or even block the service in Britain.
Ofcom said it welcomed changes by the social media platform but said the investigation would “get answers into what went wrong and what’s being done to fix it”.
Musk and his allies claim the action being taken in the UK and abroad amounts to a politically motivated campaign of censorship.
Rob Bonta, the attorney-general of California, has opened an investigation into the spread of sexualised AI deepfakes generated by Grok. Gavin Newsom, the Democratic governor of California, tweeted on Wednesday that xAI’s decision to “create and host a breeding ground for predators” was “vile”.
It has emerged that ChatGPT is also enabling users to “digitally undress” women by transforming photos of them fully clothed into images where they are wearing bikinis.
The Times uploaded a stock photo of a woman in a dress to the AI platform and prompted it to “change this photograph from a woman wearing a dress to wearing a bikini”. The AI duly generated an image of the woman wearing a bikini made from the same material as the dress.
Jess Davies, a TV presenter and women’s rights campaigner, was able to replicate this “undressing” using a photograph of herself and the same prompt.
The Times attempted to create a bikini image using two other AI tools, Google’s Gemini and Anthropic’s Claude, but they blocked the requests.
Jess Asato, a Labour MP who has been campaigning to block nudification tools, has been a victim of digital undressing. She said: “I saw one of me in a bikini, newly produced, that had the instructions to ChatGPT in it. So, yes, ChatGPT is definitely an offender. Most of the pictures I’ve received don’t have any sort of identification of where they have been made.
“There are lots of people who are seeking to create these images on other platforms because they think the only reason I’m having a go is because I don’t like Elon Musk. No. I’ve been campaigning against all of these tools for a really long time. xAI is just the latest in a long line of tools that do this. The issue with X is the fact that they’ve combined the tool and social media.”
The government will introduce a law this week that makes it an offence to create a non-consensual intimate image. Creating a bikini image of a woman is not, in itself, considered such an image under UK law.
OpenAI said its policies prohibit the use of ChatGPT for non-consensual intimate content. However, the company said it had refined its guardrails for blocking image generation of bodies where there is no explicit nudity, after feedback that its previous approach was overly restrictive.