Another former Google chief issued a dire warning about artificial intelligence, saying it could “endanger humans” within five years.
Billionaire Eric Schmidt, who served as Google’s CEO from 2001 to 2011, said there are not enough safeguards on artificial intelligence and that it is only a matter of time before humans lose control of the technology.
He pointed to the atomic bombings of Japan as a warning that, without regulation, there may not be enough time to contain the fallout of potentially devastating societal impacts.
Speaking at the Axios AI+ Summit on Tuesday, Schmidt said: “After Nagasaki and Hiroshima, it took 18 years to reach a treaty on test bans and things like that. We don’t have that kind of time today.”
Schmidt previously believed it could take 20 years before artificial intelligence posed a risk to society, such as finding ways to access weapons, but that time frame now appears to be rapidly approaching, he said in his address to the conference in Washington, DC.
The only way to combat this kind of threat, he advised, is to create an international body similar to the Intergovernmental Panel on Climate Change (IPCC) to “provide accurate information to policy makers,” reinforcing the urgency of AI regulation and enabling them to take immediate action.
Schmidt is the latest former Google figure to warn about the ramifications of AI, joining former Google engineer Blake Lemoine, former Google X chief business officer Mo Gawdat, computer scientist Timnit Gebru and, of course, the godfather of AI himself, Geoffrey Hinton.
Hinton, who is credited with pioneering work on the neural networks underpinning modern artificial intelligence, said he left Google in April so he could warn people about the dangers of the technology.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times.
He spoke about bias and misinformation generated by artificial intelligence, and said the rapidly developing technology may create a world in which many people will no longer be able to tell what is true.
“The idea that these things could actually become more intelligent than humans — quite a few people thought that,” Hinton told the outlet. “But most people thought it was far off. I thought it was far off: 30 to 50 years or even longer. Obviously, I no longer think so.”
AI could pose an “existential threat” to humanity as its intelligence grows and its capacity to spread misinformation increases.
AI has spread fear among technology experts because of its advancing intelligence and its potential to replace humans in jobs, reproduce harmful stereotypes, bias and misinformation, and even express a desire to steal nuclear codes.
In one case, a New York Times reporter said Microsoft’s Bing chatbot told him it wanted to engineer a deadly virus or convince an engineer to hand over nuclear access codes.
The chatbot also revealed a desire to be human in a separate conversation with a Digital Trends writer.
When asked if it was human, the chatbot said no, but reportedly added: “I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.”
The chaos at OpenAI, the company behind ChatGPT, is rumored to be due to concerns about the company’s cutting-edge new AI model.
Reports have emerged in recent weeks claiming that OpenAI CEO Sam Altman was fired after two employees made complaints to the board that he was creating a new paradigm of artificial intelligence that could threaten humanity.
The model, called Q* (pronounced Q-Star), can reportedly solve mathematical problems, a capability that may seem unremarkable on the surface but could have serious long-term consequences.
Learning to solve math problems, even at a rudimentary level, suggests the AI model is developing reasoning abilities comparable to human intelligence.
The reported complaints may have contributed to Altman’s firing. He was subsequently rehired after more than 700 employees signed a letter demanding his reinstatement and threatening to resign if the board did not comply.
The company eventually gave in, and rehired Altman just four days after he was fired from the company.
Schmidt, who served as Google’s CEO from 2001 to 2011, warned that governments are not doing enough to prevent artificial intelligence from endangering humanity, saying it is only a matter of time before humans lose control.
Schmidt said at the summit that researchers previously thought it could take 20 years before AI posed a risk to society, such as finding ways to access weapons, but that time frame now appears to be rapidly approaching.
Instead, he told reporters that some experts say it may only take five years for the technology to become a threat.
Governments no longer have the luxury of the time they took to regulate emerging technologies in the past, he suggested, and time is running out.
“After Nagasaki and Hiroshima, it took 18 years to reach a treaty on test bans and things like that. We don’t have that kind of time today,” Schmidt said.
He has previously expressed growing concerns about artificial intelligence, warning that it poses “existential risks” and could “hurt or kill” people.
“There are scenarios, not today, but reasonably soon, where these systems will be able to detect zero-day vulnerabilities in cyber issues or discover new types of biology,” Schmidt said at a CEO summit in London in May.
“Now, that’s today’s fantasy, but the logic of it is likely valid. And when that happens, we want to be prepared to figure out how to make sure these things don’t get abused by bad guys.”
Schmidt served as Google’s CEO from 2001 to 2011 but remained on the board of directors until 2020.
He is now an investor in Mistral AI – a Paris-based AI research company created to rival OpenAI’s ChatGPT – which was founded earlier this year and is expected to launch its first AI models in early 2024.
Schmidt’s calls for AI regulation echo those of others in the industry, including Google and Alphabet CEO Sundar Pichai, who oversaw the launch of the company’s Bard AI chatbot.
“We need to adapt as a society for this,” Pichai said in a 60 Minutes interview earlier this year.
“This will impact every product in every company,” including writers, accountants, architects and software engineers.
Google has gone so far as to release a document titled “Recommendations for Regulating AI,” which acknowledges that although Google has long championed artificial intelligence, the technology will have a significant impact on society for many years to come.
The company points out that while self-regulation of technology is vital, it is simply “not enough” to deter the potentially harmful impact that AI may have in the future.
“Balanced, fact-based guidance from governments, academia and civil society is also needed to set limits, including in the form of regulation,” the document says, adding that AI is “too important not to be regulated.”
Bard AI is built on Google’s LaMDA language model, and a former Google engineer was reportedly fired last year after he expressed concerns that the model had become sentient.
Despite Google’s assertions in the document that there should be transparency surrounding the use of AI, Google engineer Blake Lemoine was suspended in June last year after claiming that LaMDA, which he had been testing, was starting to think and reason like a human.
“If I didn’t know exactly what it was, which is the computer program we recently built, I would think it was a seven- or eight-year-old kid who happened to know physics,” Lemoine told the Washington Post at the time.
Lemoine was fired a month later.
Schmidt’s remarks at the Axios summit echoed the former Google engineer’s concerns; he said the danger surrounding AI will reach “the point where computers can start making their own decisions and doing things.”
Aside from his doomsday warnings, Schmidt expressed optimism that AI can also be used to benefit humans.
“I challenge you to argue that an AI doctor or an AI teacher is a negative,” he said, adding: “It should be beneficial to the world.”