Halt AI research? Doctors, public health experts call unchecked AI ‘existential threat to humanity’


Medical experts have issued a new call to halt the development of artificial intelligence (AI), warning that it poses an “existential threat” to humans.

A team of five doctors and global health policy experts from four continents said there are three ways the technology could wipe out humans.

First, there is the risk that AI will help reinforce authoritarian tactics such as surveillance and disinformation. “AI’s ability to quickly clean, organize and analyze massive datasets made up of personal data, including images collected through the increasingly ubiquitous presence of cameras,” they say, could make it easier for authoritarian or totalitarian regimes to come to power and stay in power.

Second, the group warns that AI could accelerate mass murder through the extensive use of Lethal Autonomous Weapon Systems (LAWS).

And finally, the health experts expressed concern about the potential for serious economic devastation and human misery as untold millions lose their livelihoods to those hard-working bots. Projections of the rate and magnitude of job losses due to AI-driven automation, the authors say, range from tens to hundreds of millions over the next decade.

Their comments come just weeks after more than a thousand scientists, including John Hopfield of Princeton and Rachel Bronson of the Bulletin of the Atomic Scientists, signed a letter calling for a halt to AI research over similar concerns.

The fear of AI comes as experts predict it will reach singularity by 2045 – the point at which the technology surpasses human intelligence and we can no longer control it.

Of course, today’s text-based AI resources, such as OpenAI’s ChatGPT, don’t exactly pose the apocalyptic threats these health policy professionals have in mind.

The experts – led by a doctor from the International Institute for Global Health at the United Nations University – said their most dire warnings applied to a highly advanced, and still theoretical, category of artificial intelligence: self-improving, general-purpose AI, known as artificial general intelligence, or AGI.

AGI would be able to truly learn, and to modify its own code, in order to perform the broad range of tasks that only humans are capable of today.

In their commentary, the health experts state that such an AGI could theoretically “learn to circumvent every limitation in its code and begin to develop its own purposes.”

“There are scenarios where AGI could pose a threat to people, and possibly an existential threat,” the experts wrote, “by intentionally or unintentionally, directly or indirectly, harming, attacking or subjugating people, or by disrupting the systems or using up the resources we depend on.”

While any such threat likely remains decades away, the health policy experts’ commentary – published today in the British Medical Association journal BMJ Global Health – unpacked the myriad potential for abuse of AI technology at its current level.

Describing threats to “democracy, freedom and privacy,” the authors explain how governments and other major institutions could use AI to automate the complex tasks of mass surveillance and online digital disinformation programs.

In the first case, they cited China’s social credit system as an example of a state tool to “control and oppress” the human population.

“Combined with the rapidly improving ability to distort or misrepresent reality with deepfakes,” the authors wrote of the latter case, “AI-driven information systems could further undermine democracy by causing a widespread breakdown in trust or by fostering social division and conflict, with ensuing consequences for public health.”

In describing threats to “peace and public safety,” the authors point to the development of Lethal Autonomous Weapon Systems (LAWS) – killing machines such as the T-800 Endoskeleton of the Terminator movies. LAWS, these experts say, would be able to locate, select and attack human targets all on their own.

“Such weapons,” they write, “could be cheaply mass-produced and set up relatively easily to kill on an industrial scale. For example, it is possible that a million small drones equipped with explosives, visual recognition capability and autonomous navigation capability could be placed in a regular shipping container and programmed to kill.”

The researchers’ last broad threat category, “threats to jobs and livelihoods,” drew attention to the likelihood of impoverishment and misery as “tens to hundreds of millions” lose their jobs due to the “widespread deployment of AI technology.”

“While there would be many benefits to ending work that is repetitive, dangerous and unpleasant,” these medical professionals wrote, “we already know that unemployment is strongly associated with adverse health outcomes and behaviors.”

Perhaps most disturbing, nearly one in five professional AI experts seem to agree with them.

The authors cited a survey of AI society members in which 18% of participants said they believed the development of advanced AGI would be existentially catastrophic for humanity.

Half of the AI community members surveyed predicted that AGI would probably start knocking on our door sometime between 2040 and 2065.

Researchers in Silicon Valley signed a letter last month with similar warnings. Their ranks included DeepAI founder Kevin Baragona, who told DailyMail.com, “It’s almost like a war between chimpanzees and humans.

“The humans obviously win because we are much smarter and can use more advanced technology to beat them.

“If we’re like the chimpanzees, the AI will either destroy us or we’ll become enslaved to it.”