Grok, the artificial intelligence tool developed by Elon Musk's company xAI, was recently blocked in Indonesia and Malaysia due to its ability to create malicious content
Britain’s media regulator, Ofcom, says sexualised images of children created by Grok users may amount to child sexual abuse material
Musk’s company X recently said it would prevent Grok users from editing images of real people to put them in revealing clothing in jurisdictions where this is illegal
An artificial intelligence (AI) tool developed by Elon Musk’s company xAI was recently banned in Indonesia and Malaysia and has raised serious concerns globally. It’s called Grok, and it gives users the capability to make highly sexualised images of people that look disturbingly real.
As Indonesia’s Communication and Digital Affairs Minister Meutya Hafid recently put it, “The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.”
Britain's media regulator, Ofcom, released a statement saying, "There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material."
Grok, which is included for X users who pay for a subscription, was launched in 2023. In 2024, an image generator feature was added that includes a so-called 'spicy mode', which can generate pornographic content.
Australia’s eSafety Commissioner says the agency “has seen a recent increase from almost none to several reports over the past couple of weeks relating to the use of Grok to generate sexualised or exploitative imagery”, adding that it “will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act”.
Malicious content made easier
Abhinav Dhall, an associate professor at Monash University’s Department of Data Science and AI, says Grok has put powerful new technology into the hands of wrongdoers.
“Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits. As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images”, Dhall says, adding that in many cases “the original poster may not even have the rights to the image they are uploading on the platform, which can make it easier for the edits to become potentially defamatory or unsafe”.
Dhall says Grok users should take steps to avoid images falling into the wrong hands.
“To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms,” Dhall says.
“It is also important to avoid posting children’s photos publicly. If you suspect your images have been misused, reverse image search can be applied to detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible.”
X said in a previous statement that it removes illegal content from its platform including child abuse material and suspends the accounts of people who post it.
Musk has posted comments on the Grok backlash, saying critics of X “just want to suppress free speech”. In an X post on 15 January he said, “Grok is supposed [to] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV.”
In a more recent announcement on X the company said “we have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing” in jurisdictions where this is illegal.
But it remains unclear how the company will block certain locations from using this functionality, or which locations those may be.
Will mandatory codes stop the deepfakes?
On 9 March 2026, mandatory codes come into effect in Australia which impose new obligations on AI services to limit children’s access to sexually explicit content as well as to violent material and content related to self-harm and suicide. But enforcing such codes on mammoth AI companies based in the US and other countries has proven to be a tall order for Australian regulators.
Abhinav Dhall stops short of recommending that Grok be banned in Australia, saying it’s a matter of enforcing the current rules and compelling tech companies to stop harmful content.
“Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly,” Dhall says. “At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”
Meanwhile, amid the outcry around the world about sexualised deepfakes, in a speech given at Musk’s company SpaceX, in South Texas, US Defense Secretary Pete Hegseth recently said that the Pentagon will embrace Grok along with Google’s generative AI engine.
Andy Kollmorgen is the Investigations Editor at CHOICE. He reports on a wide range of issues in the consumer marketplace, with a focus on financial harm to vulnerable people at the hands of corporations and businesses. Prior to CHOICE, Andy worked at the Australian Securities and Investments Commission (ASIC) and at the Australian Financial Review along with a number of other news organisations. Andy is a former member of the NSW Fair Trading Advisory Council. He has a Bachelor of Arts in English from New York University. LinkedIn