Elon Musk’s AI chatbot Grok is spreading a “torrent of disinformation” through its image generation tool, an expert has warned. Malicious images circulating on X show politicians carrying out the 9/11 attacks and cartoon characters depicted as murderers.
A new version of Grok, available to paying subscribers on the social media platform, launched on Wednesday. It features an AI image generation tool that has unleashed a flood of bizarre images.
The image tool appears to have few restrictions on what it can generate, lacking the safeguards that have become industry standard among competitors such as ChatGPT, which, for example, rejects prompts for images of real-world violence and explicit content.
Grok, on the other hand, has allowed the creation of degrading and offensive images, often depicting politicians, celebrities or religious figures naked or performing violent acts.
The chatbot also doesn’t seem to refuse to generate images of copyrighted characters, and many images have been posted of cartoon and comic book characters participating in criminal or illegal activities.
Daniel Card, a fellow of BCS, the Chartered Institute for IT, said the problem of misinformation and disinformation on X was a “societal crisis” because of its potential impact.
“Grok may have some limitations, but it is unleashing a flood of disinformation, copyright chaos, and explicit deepfakes,” he said.
“This is not just a defence problem – it is a societal crisis. Information warfare has become a bigger threat than cyber attacks, infiltrating our daily lives and distorting global perceptions.
“These challenges require bold, modern solutions. By the time regulators step in, disinformation will have reached millions and will spread at a pace we are simply not prepared for.
“In the US, distorted views of countries like the UK are spreading, fuelled by exaggerated messages of danger. We are at a critical juncture in navigating the truth in the AI age.
“Our current strategies are falling short. As we transition to a digital-physical hybrid world, this threat could become society’s greatest challenge. We must act now – authorities, governments and tech leaders must take action.”
But Musk seemed to revel in the controversial nature of the chatbot update, posting a message on X on Wednesday: “Grok is the world’s most fun AI!”
Some users responded to Musk by mocking him with the tool, asking it to show him holding up embarrassing signs. In one case, the ardent Trump supporter was shown holding a Harris-Walz campaign sign.
Other fake photos show Kamala Harris and Donald Trump working together at an Amazon warehouse, going to the beach together and even kissing.
There are also more sinister AI creations, such as images of Musk, Trump and others participating in school shootings, as well as images of public figures carrying out the 9/11 terrorist attacks.
Other users asked Grok to create highly offensive images, including images of the Prophet Muhammad, in one case showing a bomb.
Several photographs also depicted politicians in Nazi uniforms and as historical dictators.
Alejandra Caraballo, an American civil rights attorney and clinical instructor at Harvard Law School’s Cyberlaw Clinic, criticized the lack of filters in the Grok application.
In a post on X, she described it as “one of the most reckless and irresponsible AI implementations I have ever seen.”
The wave of misleading images will be a major concern, especially in the run-up to the US elections in November. Only a few of the images carry warnings or Community Notes from X.
This comes after X and Musk were heavily criticised for the platform’s role in the recent riots in the UK, where misinformation spread on the site fuelled much of the unrest. Musk has been seen communicating with far-right figures on the platform and has reiterated his belief in “absolute freedom of speech”.
And last month, he was accused of violating his platform’s rules on deepfakes after he shared an edited video mocking Vice President Harris that overdubbed her with a manipulated voice.
The clip has been viewed nearly 130 million times by X users. In it, Harris’ fake voice says, “I was selected because I am the ultimate diversity asset,” and adds that anyone who criticises her is “both sexist and racist.”
Other generative AI deepfakes in both the US and elsewhere have attempted to influence voters with misinformation, humor, or both.
In Slovakia in 2023, just days before a national vote, fake audio clips circulated imitating a candidate discussing plans to rig the election and raise the price of beer.
In 2022, a satirical ad from a political action committee superimposed the face of a Louisiana mayoral candidate onto an actor playing an underachieving high school student.
Congress has yet to pass legislation on AI in politics, and federal agencies have taken only limited steps, leaving most existing US regulation to the states.
According to the National Conference of State Legislatures, more than a third of states have passed their own laws regulating the use of AI in campaigns and elections.
Beyond X, other social media companies have also established policies regarding synthetic and manipulated media shared on their platforms. Users of the video platform YouTube, for example, must disclose whether they have used generative artificial intelligence to create realistic-looking videos or risk suspension.