Grok controversy: Everything you need to know about X’s sexual AI image scandal

Social media platform X says it has made major changes to its AI chatbot Grok after users started creating sexualised images of real women and children.

Grok has been used to manipulate photos of real people, often removing their clothes or depicting them in suggestive poses.

Elon Musk’s platform has come under heavy scrutiny worldwide, including from the UK government, which called X’s initial response to the problem “insulting” to victims.

Here’s what you need to know about the controversy and how X has responded.

How did the controversy begin?

A significant number of X users started reporting examples of Grok altering images to sexualise real women and children towards the end of December and into the new year.

On public X posts that include photos, users could reply asking Grok to edit the image in whatever way they wanted.

Grok can also be used to create images privately. Last summer, a so-called “spicy mode” was introduced, specifically aimed at helping users generate sexually explicit images.

AI chatbots have safety features designed to reject inappropriate prompts, but reports suggest Grok had been failing to refuse requests that breached its own rules.

It is not known for how long Grok had allowed real photos of people to be sexualised, but the problem became widespread by early January, with users generating images by using requests such as: “Put her in a transparent bikini.”

An investigation by the Reuters news agency found that, over a single 10-minute period on 2 January, X users asked Grok at least 102 times to digitally edit photographs of people so that they appeared to be wearing bikinis.

It said the majority of those targeted were young women; in a few cases they were men, including celebrities and politicians.

On the same day, X boss Elon Musk posted laugh-cry emojis in response to AI edits of famous people – including himself – in bikinis. He responded with the same emoji when one X user said their social media feed resembled a bar packed with bikini-clad women.

How has the UK government reacted?

Prime Minister Sir Keir Starmer has been critical of X over the images, calling the exploitation of Grok “absolutely disgusting and shameful”.

“If X cannot control Grok, we will – and we’ll do it fast because if you profit from harm and abuse, you lose the right to self-regulate,” he told a meeting of the Parliamentary Labour Party on 12 January.

His technology secretary, Liz Kendall, brought forward legislation making it a criminal offence to create, or request the creation of, non-consensual intimate images with AI.

The Crime and Policing Bill, which is going through parliament, will also make it a criminal offence for companies to supply tools designed to create non-consensual intimate images.

Ms Kendall said this would be “targeting the problem at its source”.

Additionally, media watchdog Ofcom launched a formal investigation into Grok, including whether X has “failed to comply with its legal obligations under the Online Safety Act”.

In parliament on Wednesday, Sir Keir told MPs that X had insisted it was complying with UK law, but he also said his government was “absolutely determined to take action” and that if X didn’t act, “Ofcom has our full backing”.

How has X responded?

xAI, the developer of Grok and X’s parent company, initially put restrictions in place meaning only paid subscribers could use image generation and editing features on the platform.

The UK government criticised the move, with Ms Kendall saying it was merely “monetising abuse”.

On Thursday, X said it had introduced measures to “prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis”.

“This restriction applies to all users, including paid subscribers,” a company statement said.

X had insisted it was already taking action against illegal content on the platform, including child sexual abuse material, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.


Mr Musk had said he was “not aware of any naked underage images generated by Grok”.

“Obviously, Grok does not spontaneously generate images, it does so only according to user requests,” he said.

“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state.

“There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

In response to ministers’ threats that X could be banned in the UK if it did not act on concerns about its AI chatbot, Mr Musk accused the UK government of being “fascist” and trying to curb free speech.

Why has X been singled out?

Mr Musk has hit back at critics of Grok, saying they “want any excuse for censorship” and sharing a post which suggested “millions” of other apps can make sexualised images of people.

AI tools that can digitally undress people have been around for years, but until recently they were less accessible and typically required a certain level of effort or payment.

The three laws you need to know about

X is facing scrutiny in the UK over a number of existing and incoming regulations…

The Data (Use and Access) Act:

Passed last year, the Data (Use and Access) Act is gradually being implemented in the UK.

One of its most prominent provisions is the criminalisation of creating non-consensual intimate images with AI, which Technology Secretary Liz Kendall announced on 12 January was being brought forward.

Overall, the act aims to update the UK’s data protection and privacy legislation: simplifying the rules for organisations, encouraging innovation, helping law enforcement agencies tackle crime, and allowing responsible data-sharing while maintaining high data protection standards.

The Crime and Policing Bill:

This bill, which is currently going through parliament, will introduce a range of measures aimed at addressing anti-social behaviour, sexual offences and knife crime, among other things.

Under the bill, it will become illegal for companies to supply tools designed to create non-consensual intimate images.

The Online Safety Act:

Hours after the government announced plans to criminalise the creation of AI sexualised images, media watchdog Ofcom said it has launched an investigation into whether X has “failed to comply with its legal obligations under the Online Safety Act”.

The act, which came into full force in July last year, requires online platforms to ensure they are not hosting illegal content.

It aims to hold platforms responsible for illegal content such as self-harm material, child sexual abuse images and non-consensual explicit images.

Announcing its investigation into Grok, the regulator said: “There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material.”

If X is found not to comply with the Online Safety Act, Ofcom can issue a fine of up to 10% of its worldwide revenue or £18m, and if that is not enough, can seek court approval to block the site.

Experts say Grok’s imaging technology and easy-to-use interface lowered the barrier to entry, and many of the images it generates are instantly made public.

Three experts who have followed the development of X’s policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups, including a letter sent last year warning that xAI was only one small step away from unleashing “a torrent of obviously nonconsensual deepfakes.”

Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter’s signatories, said: “In August, we warned that xAI’s image generation was essentially a nudification tool waiting to be weaponised.

“That’s basically what’s played out.”

Dani Pinter, the chief legal officer at the US-based National Center on Sexual Exploitation, said X failed to pull abusive images from its AI training material and should have banned users who requested illegal content.

“This was an entirely predictable and avoidable atrocity,” Ms Pinter said.

Source: https://news.sky.com/story/grok-controversy-everything-you-need-to-know-about-xs-sexual-ai-image-scandal-13493882