White House lays out its AI damage control plan – and KAMALA HARRIS will be program’s czar

The White House has revealed its plan to rein in the AI race amid growing concerns that the technology could upend life as we know it.

The Biden administration said the technology was “one of the most powerful” of our time, adding, “but to seize the opportunities it presents, we must first mitigate its risks.”

The plan includes launching seven new AI research institutes in the US, bringing the total to 25, and securing commitments from four companies, including Google, Microsoft and ChatGPT’s maker OpenAI, which will “participate in a public evaluation.”

Many of the world’s brightest minds have warned of the dangers of AI, particularly that it could destroy humanity if a risk assessment is not done now.

Tech moguls like Elon Musk fear that AI will soon surpass human intelligence and think independently. At that point, they warn, AI would no longer need or listen to humans, allowing it to steal nuclear codes, cause pandemics and start world wars.

Vice President Kamala Harris – with the lowest approval rating of any VP – will lead the containment effort as “AI czar” with a budget of just $140 million. By comparison, Space Force has a budget of $30 billion.

Vice President Kamala Harris will lead the effort as “AI czar” as the White House shares plans to mitigate risks associated with the technology

Harris will meet with executives from Google, Microsoft and OpenAI on Thursday to discuss how to mitigate such potential risks.

Thursday’s meeting will cover the companies’ role in mitigating risk, how they can work with the government, and plans to set up a public review of AI systems.

Each company’s AI will be tested this summer at a hacker convention to see if the technology aligns with the administration’s “AI Bill of Rights.”

The release of the ChatGPT chatbot in November 2022 sparked more discussion about AI and the government’s role in overseeing the technology.

Since AI can generate human-like writing and fake images, it raises ethical and societal concerns.

These include spreading harmful content, data privacy violations, reinforcing existing biases, and Elon Musk’s favorite, the destruction of humanity.

“President Biden has made it clear that when it comes to AI, we must put people and communities at the center by supporting responsible innovation that serves the public interest while protecting our society, security and economy,” the White House announcement reads.

“Importantly, this means companies have a fundamental responsibility to ensure their products are safe before they are deployed or made public.”

The public review will be conducted by thousands of community partners and AI experts, according to the White House.

There is a big AI gap in Silicon Valley. Brilliant minds are divided on the progress of the systems – some say it will improve humanity and others fear technology will destroy it

The White House plan

VP Kamala Harris is now AI czar and will oversee an effort to ensure the technology is developed responsibly.

Harris will work with Microsoft, OpenAI and Google on how to mitigate risk and work with the government.

The companies will “cooperate in a public evaluation.”

The public review will be conducted by thousands of community partners and AI experts, according to the White House.

Testing by industry professionals will see how the models align with the principles and practices outlined in the AI Bill of Rights and the AI Risk Management Framework.

The White House is investing $140 million to open seven more AI research institutes, bringing the total to 25.

Biden’s AI Bill of Rights, announced in October 2022, provides a framework for how government, tech companies and citizens can work together to drive more responsible AI.

The bill includes five principles: safe and effective systems; protection against algorithmic discrimination; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

“This framework applies to (1) automated systems that (2) have the potential to meaningfully affect the American public’s rights, opportunities, or access to critical resources or services,” the White House said in a statement in October.

The White House action plan follows an open letter signed by Musk and 1,000 other technology leaders, including Apple co-founder Steve Wozniak, in March.

The tech tycoons called for a pause in the “dangerous race” to advance AI, saying more risk assessment needs to be done before humans lose control and it becomes a sentient man-hating species.

At that point, AI would have reached the singularity, meaning it had surpassed human intelligence and could think independently.

AI would no longer need humans or listen to humans, allowing it to steal nuclear codes, create pandemics and start world wars.

They have asked all AI labs to halt development of their products for at least six months while more risk assessment is done.

The fear of AI comes as experts predict it will reach the singularity by 2045, the point at which the technology surpasses human intelligence and we can no longer control it

If labs refuse, they want governments to “step in.”

Musk fears that the technology will become so advanced that human intervention will no longer be needed – or listened to.

It’s a fear that’s widespread, even acknowledged by the CEO of OpenAI, the company behind ChatGPT, who said earlier this month that the technology could be developed and used to launch “widespread” cyber-attacks.

DeepAI founder Kevin Baragona, who was one of the signatories, explained why the rapidly advancing field of AI was so dangerous.

“It’s almost like a war between chimpanzees and humans,” he told DailyMail.com.

“The humans win, of course, because we are much smarter and can use more advanced technology to beat them.

“If we’re like the chimpanzees, the AI will either destroy us or we’ll become addicted to it.”

This month, AI’s “godfather” Geoffrey Hinton, 75, threw a grenade into the furious debate over the dangers of the technology after sensationally quitting his job at Google and saying he regretted his life’s work.

Hinton said chatbots can already hold more general knowledge than a human brain.

He added that it was only a matter of time before AI overshadows us in terms of reasoning as well.

At this point, he said, “bad actors” like Russian President Vladimir Putin could use AI for “bad things” by programming robots to “get more power.”

In 1970, Marvin Minsky, co-founder of MIT’s AI lab, told Life magazine: “In from three to eight years, we will have a machine with the general intelligence of an average human being.”

And while the timing of the prediction was off, the idea that AI could reach human intelligence was not.

ChatGPT is proof of how fast the technology is growing.

In just a few months, it passed the bar exam with a score higher than 90 percent of the people who took it, and achieved an accuracy rate of 60 percent on the US Medical Licensing Exam.

“Large language models as they exist today are revolutionary enough to sustain ten years of monumental growth, even without further technological advances. They’re already incredibly disruptive,” Baragona told DailyMail.com.

“To claim that we need more technological advances now is irresponsible and could have devastating consequences for the economy and the quality of life.”