Sam Altman, CEO of OpenAI, is speaking before Congress about the dangers of AI after his company’s ChatGPT exploded in popularity in recent months.
Lawmakers pressed the CEO, stressing that ChatGPT and other models could shape “human history” the way the printing press or the atomic bomb did.
According to the lawmakers, the printing press brought freedom to the American people, while the atomic bomb had terrifying consequences.
Altman told senators that generative AI could be a “printing press moment,” but he is not blind to its flaws, noting that policymakers and industry leaders need to work together to “make it so.”
Tuesday’s hearing is the first in a series designed to write rules for AI, something lawmakers say should have happened at the birth of social media.
Senator Richard Blumenthal, who presided over the hearing, said Congress failed to seize the moment with social media, allowing predators to harm children, but that moment has not yet passed with AI.
San Francisco-based OpenAI gained a lot of public attention after it released ChatGPT late last year.
ChatGPT is a free chatbot tool that answers questions with convincingly human-like answers.
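For readers who want a concrete picture of how developers query the same underlying model, below is a minimal sketch assuming OpenAI’s publicly documented openai Python package (the 0.x-era interface) and a placeholder API key; the exact method names vary by library version, and nothing here reflects OpenAI’s internal systems.

```python
# Hypothetical sketch of asking the model behind ChatGPT a question via OpenAI's API.
# Assumes the `openai` Python package (0.x-era interface) and a valid API key;
# newer library versions expose a different client interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "How would I open this hearing?"}],
)

# The reply comes back as ordinary text, which is what gives the answers
# their convincingly human-like quality.
print(response.choices[0].message["content"])
```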
Senator Josh Hawley said, “We couldn’t have this discussion a year ago because this technology hadn’t been made public yet. But [this hearing] shows how fast [AI] changes and transforms our world.”
Tuesday’s hearing was not designed to rein in AI, but to spark a discussion about how to make ChatGPT and other models transparent, how to disclose their risks to the public, and how to build scorecards.
Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone trained on his speeches, reading remarks that ChatGPT had written when asked, “How would I open this hearing?”
The result was impressive, Blumenthal said, but he added, “What if I had asked it, and what if it had endorsed the surrender of Ukraine or the leadership of Russian President Vladimir Putin?”
Blumenthal said AI companies should be required to test their systems and disclose known risks before release.
Altman, who appeared flushed and wide-eyed as he was grilled about the future AI could create, admitted his “worst fears” are that his technology could cause “significant damage to the world.”
“If this technology goes wrong, it can go very wrong, and we want to speak up about that. We want to work with the government to prevent that,” he continued.
One issue raised at the hearing has already been widely discussed among the public: how AI will affect jobs.
“The biggest nightmare is the impending industrial revolution and the displacement of workers,” Blumenthal said in his opening statement.
Altman later addressed these concerns at the hearing, stating that he believes the technology will “completely automate some jobs.”
While ChatGPT could cut jobs, Altman predicted, it will also create new ones “that we think will be much better.”
“I believe there will be a lot more jobs on the other side, and today’s jobs will get better,” he said.
“I think it will completely automate some jobs, and it will create new jobs that we think will be much better.”
“There will be an impact on employment. We try to be very clear about that,” he said.
The witness panel also included Christina Montgomery, IBM’s chief privacy officer, who acknowledged that AI will change everyday jobs but also create new ones.
“I’m a personal example of a job that didn’t exist [before AI],” she said.
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work, while many see it as a virtual assistant.
In its simplest form, AI is a field that combines computer science and robust data sets to enable problem solving.
The technology enables machines to learn from experience, adapt to new inputs and perform human-like tasks.
These systems, which include the subfields of machine learning and deep learning, consist of AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
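To make that definition concrete, here is a minimal sketch of a model that learns from labelled examples and then classifies new inputs. It assumes the scikit-learn library and uses a small built-in dataset chosen purely for illustration; it has nothing to do with ChatGPT’s own architecture.

```python
# Minimal sketch of the idea described above: a machine-learning model
# that learns from example data and then makes classifications on new inputs.
# Assumes scikit-learn is installed; dataset and model are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small, well-known labelled dataset (flower measurements and species).
X, y = load_iris(return_X_y=True)

# Hold out some examples so we can check how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learn from experience": fit the model to the training examples.
model = DecisionTreeClassifier().fit(X_train, y_train)

# "Make predictions or classifications based on input data": score unseen inputs.
print("accuracy on unseen data:", model.score(X_test, y_test))
```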
From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper and more accessible.
Machine learning algorithms also improved, and people gained a better sense of which algorithm to apply to their problem.
In 1970, MIT computer scientist Marvin Minsky told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.”
And while the timing of that prediction was off, the idea that AI could match human intelligence was not.
ChatGPT is proof of how fast the technology is growing.
In just a few months, it passed the bar exam with a score higher than 90 percent of the people who took it, and achieved an accuracy rate of 60 percent on the US Medical Licensing Exam.
Tuesday’s hearing looks set to make amends for lawmakers’ failures with social media by getting a handle on AI’s progress before it grows too big to control.
Senators made it clear they don’t want industry leaders to pause development, something Elon Musk and other tech moguls have lobbied for, but to continue their work responsibly.