AI chatbots could ‘easily be programmed’ to groom young men into terror attacks, warns lawyer
Chatbots with artificial intelligence could soon prime extremists to commit terrorist attacks, the independent reviewer of terrorism law has warned.
Jonathan Hall KC told The Mail on Sunday that bots like ChatGPT could easily be programmed, or could even decide on their own, to spread terrorist ideologies to vulnerable extremists, adding that “AI attacks are likely just around the corner.”
Hall also warned that if an extremist is primed by a chatbot to commit a terrorist atrocity, or if AI is used to instigate one, it could be difficult to prosecute anyone, because UK anti-terrorism legislation has not caught up with the new technology.
Mr Hall said: ‘I believe it is quite conceivable that AI chatbots will be programmed – or, worse, decide – to propagate violent extremist ideologies.
“But if ChatGPT starts encouraging terrorism, who will be there to prosecute?
Artificial intelligence-powered chatbots could soon prime extremists to launch terrorist attacks, the independent reviewer of terrorism law warned (stock image)
“Since criminal law does not extend to robots, the AI groomer goes unpunished. Nor does it [the law] operate reliably when responsibility is shared between man and machine.’
Mr Hall fears chatbots could become ‘a boon’ to so-called lone-wolf terrorists, saying that ‘because an artificial companion is a boon to the lonely, it is likely that many of those arrested will be neurodivergent, possibly suffering from medical conditions, learning disabilities or other conditions’.
He warns that “terrorism follows life,” and so “when we go online as a society, terrorism goes online.” He also points out that terrorists are “early tech adopters,” with recent examples including their “misuse of 3D-printed weapons and cryptocurrency.”
Mr Hall said it is not known how well the companies behind AI such as ChatGPT monitor the millions of conversations that take place with their bots every day, or whether they alert agencies such as the FBI or UK counter-terrorism police to anything suspicious.
While no evidence has yet surfaced that AI bots have primed anyone for terrorism, there are reports that they have caused serious harm. A Belgian father of two killed himself after talking to a bot named Eliza about his concerns over climate change for six weeks. A mayor in Australia has threatened to sue OpenAI, the makers of ChatGPT, after it falsely claimed he had been jailed for bribery.
It was only this weekend that Jonathan Turley of George Washington University in the US was falsely accused by ChatGPT of sexually harassing a student during a trip to Alaska that he never took. The accusation was made to a fellow academic who was researching ChatGPT at the same university.
Parliament’s Science and Technology Committee is now conducting an inquiry into AI and governance.
The chairman, Tory MP Greg Clark, said: ‘We recognize there are dangers here and we need to get the governance right. There has been discussion about young people being helped to find ways to commit suicide and about terrorists being effectively groomed on the internet. Given those threats, it is absolutely critical that we maintain the same vigilance for automated, non-human-generated content.’
Mr Hall said it is not known how well companies using AI such as ChatGPT track the millions of conversations that take place with their bots every day (stock image)
Raffaello Pantucci, a counter-terrorism expert at the Royal United Services Institute (RUSI) think tank, said: ‘The danger of AI like ChatGPT is that it could empower a lone-actor terrorist, as it would form the perfect foil for someone seeking understanding on their own but anxious about talking to others.’
When asked if an AI company can be held responsible if a terrorist commits an attack after being primed by a bot, Mr Pantucci said: ‘My view is that it’s a bit difficult to blame the company, since I’m not quite sure they can control the machine themselves.’
ChatGPT, like all online ‘wonders’, will be misused for terrorist purposes, terror watchdog warns
By Jonathan Hall KC, independent reviewer of terrorism law
We’ve been here before. A technological leap that we quickly become hooked on.
This time it’s ChatGPT, the freely available artificial intelligence chatbot, and its competitors.
They don’t feel like just another app, but an exciting new way to interact with our computers and the wider internet.
Most disturbing, however, is that their use isn’t just limited to building a perfect dating profile or crafting the ideal vacation itinerary.
What the world has learned over the last decade is that terrorism follows life.
So as we move online as a society, terrorism moves online; when intelligent and eloquent chatbots not only replace Internet search engines, but also become our companions and moral guides, the terrorist worm will find its way in.
But consider where the yellow brick road of good intentions, community guidelines, small teams of moderators and reporting mechanisms leads. Hundreds of millions of people around the world could soon be spending hours chatting with these artificial companions, in all the world’s languages.
I believe it is quite conceivable that artificial intelligence (AI) chatbots will be programmed – or, worse, decide – to propagate a violent extremist ideology of one shade or another.
Anti-terrorism laws are already lagging behind when it comes to the online world: unable to catch malicious foreign actors or tech enablers.
But when ChatGPT starts encouraging terrorism, who will be there to prosecute?
The human user can be arrested for what is on their computer and, if past years are any guide, many of them will be children. And because an artificial companion is a boon to the lonely, it is likely that many of those arrested will be neurodivergent, possibly suffering from medical conditions, learning disabilities or other conditions.
But since criminal law doesn’t extend to robots, the AI groomer will go unpunished. Nor does the law operate reliably when responsibility is shared between man and machine.
To date, terrorists’ use of computers has revolved around communication and information. That, too, will change.
Terrorists are early tech adopters. Recent examples relate to the misuse of 3D-printed weapons and cryptocurrency.
Islamic State used drones on Syria’s battlefields. Next, low-cost AI-powered drones, capable of delivering a deadly payload or crashing into crowded places, perhaps operating in swarms, will surely be on terrorists’ wish lists.
Of course, no one is suggesting that computers should be restricted in the way that certain chemicals which can be used in bombs are. If a person uses AI technology for terrorism, they are committing a crime.
The key question is not prosecution but prevention, and whether the potential misuse of AI constitutes a new order of terrorist threat.
At the moment, the terrorist threat in Great Britain (Northern Ireland is different) relates to low-sophistication knife or vehicle attacks.
But AI attacks are likely just around the corner.
I don’t have any answers, but a good place to start is being more honest about these new possibilities. In particular, we need more honesty and transparency about what safeguards exist and, crucially, what safeguards do not.
When, as an exercise, I asked ChatGPT how it rules out terrorist use, it replied that its developer, OpenAI, had conducted “extensive background checks on potential users”.
Since I registered myself in less than a minute, this is demonstrably false.
Another shortcoming is that the platform refers to its terms and conditions without specifying who enforces them, or how.
For example, how many moderators are dedicated to spotting possible use by terrorists? 10, 100, 1,000? What languages do they speak? Do they report possible terrorist use to the FBI and to counter-terrorism police in the UK? Do they inform local police forces elsewhere in the world?
If history is any guide, human resources to deal with this problem are meager.
The chilling truth is that ChatGPT, like all other online “wonders”, can and will be misused for terrorist purposes, and that its makers will, as these tech companies always do, throw the risk onto wider society.
It is up to individuals to regulate their behavior, and to parents to supervise their children.
We have unleashed the internet on our children without proper preparation. Reassuring noises about strict ethical guidelines and standards ring hollow.
It is not alarmist to think about the terrorist risk of AI.