A congressman wanted to understand AI. So he went back to a college classroom to learn
WASHINGTON — Don Beyer’s car dealerships were among the first in the U.S. to establish a website. As a lawmaker, the Virginia Democrat leads a bipartisan group focused on advancing fusion energy. He reads books on geometry for fun.
So when questions arose about regulating artificial intelligence, the 73-year-old Beyer took what seemed to him like an obvious step: He enrolled at George Mason University to earn a master’s degree in machine learning. In an era when lawmakers and Supreme Court justices sometimes admit they don’t understand emerging technology, Beyer’s journey is an outlier, but it highlights a broader effort by members of Congress to educate themselves about artificial intelligence as they consider legislation that would shape its development.
Frightening to some, exciting to others, baffling to many, artificial intelligence has been called a transformative technology, a threat to democracy or even an existential risk to humanity. It will be up to members of Congress to figure out how to regulate the industry in a way that promotes its potential benefits while reducing its worst risks.
But first they need to understand what AI is and what it is not.
“I’m generally an AI optimist,” Beyer told The Associated Press after a recent afternoon class on George Mason’s campus in suburban Virginia. “We can’t even imagine how different our lives will be in five, ten, twenty years because of AI. There won’t be any red-eyed robots coming after us anytime soon. But there are other, deeper existential risks that we need to pay attention to.”
Risks such as massive job losses in industries made obsolete by AI, programs that return biased or inaccurate results, or deepfake images, video and audio that could be used for political disinformation, scams or sexual exploitation. On the other side of the equation, burdensome regulations could hinder innovation, putting the U.S. at a disadvantage as other countries try to harness the power of AI.
Finding the right balance will require input not only from technology companies, but also from the industry’s critics and from the industries that AI could transform. While many Americans may have formed their ideas about AI from science fiction films like “The Terminator” or “The Matrix,” it’s important that lawmakers have a clear understanding of the technology, said Rep. Jay Obernolte, R-Calif., the chairman of the House’s AI task force.
When lawmakers have questions about AI, Obernolte is one of the people they look to. He studied engineering and applied sciences at the California Institute of Technology and received an MS in artificial intelligence from UCLA. The California Republican also started his own video game company. Obernolte said he was “very pleasantly impressed” by how seriously his colleagues on both sides of the aisle are taking their responsibility to understand AI.
That shouldn’t be a surprise, Obernolte said. After all, legislators regularly vote on bills that touch on complex legal, financial, health and scientific topics. If you think computers are complicated, check out the rules for Medicaid and Medicare.
Keeping up with the pace of technology has been a challenge for Congress since the steam engine and the cotton gin transformed the nation’s industrial and agricultural sectors. Nuclear energy and weapons are another example of a highly technical issue that lawmakers have had to deal with in recent decades, said Kenneth Lowande, a political scientist at the University of Michigan who has studied expertise and how it relates to policymaking in Congress.
Federal lawmakers have established several offices – the Library of Congress, the Congressional Budget Office, etc. – to provide resources and specialized input as needed. They also rely on staff with specific subject matter expertise, including technology.
Then there is another, more informal form of education that many members of Congress receive.
“They have advocacy groups and lobbyists banging on the door to give them briefings,” Lowande said.
Beyer said he has been interested in computers all his life, and when AI emerged as a topic of public interest, he wanted to learn more. Much more. Nearly all of his fellow students are decades younger; most don’t seem all that surprised when they discover their classmate is a congressman, Beyer said.
He said the classes, which he is fitting into his busy congressional schedule, are already paying off. He has learned about the development of AI and the challenges facing the field, and said the coursework has helped him understand those challenges, such as bias and unreliable data, as well as the opportunities, such as improved cancer diagnoses and more efficient supply chains.
Beyer is also learning to write computer code.
“I find that learning to code — which is thinking step by step in these kinds of mathematical, algorithmic steps — helps me think differently about a lot of other things: how to put an office together, how to work on a piece of legislation,” Beyer said.
While a degree in computer science is not required, it is imperative that lawmakers understand the implications of AI for the economy, national defense, healthcare, education, personal privacy and intellectual property rights, said Chris Pierson, CEO of the cybersecurity firm BlackCloak.
“AI is neither good nor bad,” said Pierson, who previously worked in Washington for the Department of Homeland Security. “It just depends on how you use it.”
The work of regulating AI has already begun, although the executive branch has taken the lead so far. Last month, the White House unveiled new rules requiring federal agencies to show that their use of AI does not harm the public. An executive order issued last year requires AI developers to provide information about the safety of their products.
When it comes to more substantive action, America is playing catch-up to the European Union, which recently adopted the world’s first major rules governing the development and use of AI. The rules ban some uses — for example, routine AI-based facial recognition by law enforcement — while other programs are required to provide information about safety and public risks. The groundbreaking law is expected to serve as a blueprint for other countries considering their own AI laws.
As Congress begins that process, the focus should be on “mitigating potential harm,” said Obernolte, who said he is optimistic that lawmakers from both parties can find common ground on ways to prevent the worst AI risks.
“Nothing substantive will be done that is not bipartisan,” he said.
To guide the conversation, lawmakers have created a new AI task force (Obernolte is co-chair), as well as an AI caucus made up of lawmakers with particular expertise or interest in the topic. They have invited experts to brief lawmakers on the technology and its impact – and not just computer scientists and technology gurus, but also representatives from various sectors who see their own risks and benefits in AI.
Rep. Anna Eshoo, the caucus’s Democratic chair, represents part of California’s Silicon Valley. She recently introduced legislation that would require tech companies and social media platforms such as Meta, Google or TikTok to identify and label AI-generated deepfakes to ensure the public is not misled. She said the caucus has already proven its value as a “safe place” where lawmakers can ask questions, share resources and build consensus.
“There is no bad or foolish question,” she said. “You have to understand something before you can accept or reject it.”