Stanford University’s report says that “incidents and controversies” related to AI have increased 26-fold in a decade.
More than a third of researchers believe artificial intelligence (AI) could lead to a “nuclear catastrophe,” according to a study from Stanford University, highlighting industry concerns about the risks posed by the rapidly advancing technology.
The research is one of the findings highlighted in the AI Index Report 2023, released by the Stanford Institute for Human-Centered Artificial Intelligence, which examines the latest developments, risks and opportunities in the fast-growing field of AI.
“These systems demonstrate capabilities in answering questions and generating text, graphics and code that were unimaginable a decade ago, and they outperform the state of the art on many old and new benchmarks,” the report’s authors said.
“However, they are prone to hallucinations, are routinely biased, and can be tricked into serving nefarious ends, highlighting the complicated ethical challenges associated with their deployment.”
The report, released earlier this month, comes amid growing calls for regulation of AI following controversies ranging from a chatbot-linked suicide to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to invading Russian forces.
Last month, Elon Musk and Apple co-founder Steve Wozniak were among the 1,300 signatories of an open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s chatbot GPT-4, arguing that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.
In the research highlighted in the 2023 AI Index Report, 36 percent of researchers said decisions made by AI could lead to nuclear-level catastrophe, while 73 percent said they could soon lead to “revolutionary societal change”.
The survey of 327 experts in natural language processing, a branch of computer science key to the development of chatbots like GPT-4, was carried out between May and June last year, before the release of OpenAI’s ChatGPT in November took the tech world by storm.
In an IPSOS survey of the general public, also highlighted in the index, Americans seemed particularly wary of AI, with only 35 percent agreeing that “products and services leveraging AI had more benefits than drawbacks”, compared with 78 percent of Chinese respondents, 76 percent of Saudi Arabian respondents, and 71 percent of Indian respondents.
The Stanford report also noted that the number of “incidents and controversies” related to AI had increased 26-fold over the past decade.
Government initiatives to regulate and control AI are gaining ground.
China’s Cyberspace Administration this week announced draft regulations for generative AI, the technology behind GPT-4 and domestic rivals like Alibaba’s Tongyi Qianwen and Baidu’s ERNIE, to ensure the technology adheres to the “core value of socialism” and does not undermine the government.
The European Union has proposed the “Artificial Intelligence Act” to determine which types of AI are acceptable for use and which should be banned.
US public wariness about AI has yet to translate into federal regulation, but the Biden administration this week announced the launch of public consultations on how to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy”.