Technology expert warns there will be an 'explosion of AI-powered cybercrime' by 2024 – and 27 US government agencies are already using these systems in place of humans

A technology expert has warned that new developments in AI-powered technology will lead to an 'explosion' of cybercrime by 2024.

Shawn Henry, CrowdStrike's chief security officer, recently shared how cybercriminals can use AI to sneak through individuals' cybersecurity defenses, spread disinformation, or infiltrate corporate networks.

Cybercriminals can use AI to trick people into believing false stories during election season and into divulging sensitive information, said the retired executive assistant director of the Federal Bureau of Investigation (FBI).

The cybersecurity veteran's warning comes as AI has taken on more jobs than ever, including in US federal and state governments.

Twenty-seven departments of the US federal government have deployed AI in some way, and many states have done so as well.

In Texas, for example, more than a third of government agencies have delegated essential tasks to AI, including answering people's questions about unemployment benefits. Ohio, Utah and other states are also deploying AI technology.

As AI becomes more powerful, cybercriminals have more tools to breach security measures or deceive the public

With AI being deployed in so many government and public service sectors, experts fear that these technologies could fall victim to bias, loss of control over the technology or breaches of privacy.

"This is a major concern for everyone," Henry told CBS Mornings. "AI has really put this enormously powerful tool in the hands of the average person, and it has made them incredibly more capable."

In October, FBI Director Christopher Wray warned that AI is currently most dangerous in how it can take low-level cybercriminals to the next level.

But soon, he predicted, it will give those who are already experts an unprecedented boost, making them more dangerous than ever.

One example, according to Henry, is the creation of AI-generated audio and video "that are incredibly believable, where people look at something, see something, believe it to be true, when in fact it has been manufactured – often by a foreign government."

Rival governments could use these AI tools to spread disinformation to undermine democratic institutions and achieve other foreign policy objectives, cybersecurity experts argue.

Deepfake videos can create convincing copies of public figures, including celebrities and politicians, but the casual viewer may not be able to tell

It's important to look closely when confronted with information from unknown sources on the internet, because it could be someone trying to trick you or steal your personal information, Henry said.

"You have to verify where it came from," he said. "Who is telling the story, what is their motivation, and can you verify this through multiple sources?"

"It's incredibly difficult because people – when they view video – have 15 or 20 seconds; they don't have time, or often don't put in the effort, to collect that data, and that's a problem."

The threat is not always foreign.

One in three Texas government agencies used some form of AI in 2022, the most recent year for which this data is available, according to the Texas Tribune.

Ohio labor officials have deployed AI to predict fraud in unemployment insurance claims, and Utah uses AI to track livestock.

At the national level, 27 federal departments are already using AI.

According to its AI webpage, the U.S. Department of Education uses a chatbot to answer financial aid questions and a workflow bot to manage back-office administrative schedules.

At the Department of Commerce, even more processes have been automated: fisheries monitoring, export market research and business-to-business matchmaking are just some of the jobs that have been partly assigned to AI.

The State Department lists 37 current AI use cases on its website, including deepfake detection, behavioral analytics for online surveys and automated damage assessments.

According to business consultancy Deloitte, government and public services are key growth areas for AI.

However, a major obstacle for the technology is that government agencies must meet high standards in terms of technology security.

"Given their responsibility to support the public in an equitable manner, public service providers tend to have high standards when responding to fundamental AI issues such as trust, security, morality and fairness," the company said.

"In light of these challenges, many government agencies are making a strong effort to harness the power of AI while cautiously navigating this maze of legal and ethical considerations."