First major attempts to regulate AI face headwinds from all sides
DENVER — Artificial intelligence is helping decide which Americans get the job interview, the apartment and even medical care, but the first major proposals to control bias in AI decision-making are facing pushback from all sides.
Lawmakers working on these bills in states including Colorado, Connecticut and Texas are holding a press conference Thursday to make the case for their proposals, as civil rights groups and industry wrestle over the legislation’s core components.
Organizations including labor unions and consumer advocacy groups are pushing for more transparency from companies and more legal avenues for citizens to sue over AI discrimination. The industry offers cautious support but resists liability measures.
Bipartisan lawmakers in the middle — including those from Alaska, Georgia and Virginia — have worked together on AI legislation despite federal inaction. The aim of the press conference is to bring their work to the attention of states and stakeholders, reinforcing the importance of cooperation and compromise in this first step in regulation.
The lawmakers include Democratic Sen. James Maroney of Connecticut, Democratic Senate Majority Leader Robert Rodriguez of Colorado and Republican Sen. Shelley Hughes of Alaska.
“At this point, we don’t have confidence that the federal government will approve anything anytime soon. And we see that there is a need for regulation,” Maroney said. “It is important that industry, government and academia advocates work together to get the best possible regulation and legislation.”
Lawmakers argue that the bills are a first step that can be built upon.
While more than 400 AI-related bills are being debated in statehouses across the country this year, most focus on one sector or just part of the technology – such as deepfakes used in elections or to create pornographic images.
The biggest bills introduced by this team of lawmakers provide a broad framework for oversight, especially around one of the technology’s most perverse dilemmas: AI discrimination. Examples include an AI system that inaccurately assessed Black medical patients and another that downgraded women’s resumes while screening job applications.
Yet up to 83% of employers use algorithms to help with hiring, according to estimates from the Equal Employment Opportunity Commission.
Left unchecked, these AI systems will almost always exhibit bias, said Suresh Venkatasubramanian, a computer and data science professor at Brown University who teaches a class on reducing bias in the design of these algorithms.
“You have to do something explicit to not be biased,” he said.
These proposals, mainly in Colorado and Connecticut, are complex, but the gist is that companies would be required to conduct “impact assessments” for certain AI systems. Those reports would include descriptions of how AI makes a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards.
What matters most is who gets to see those reports. Greater access to information about the AI systems, such as the impact assessments, means greater accountability and safety for the public. But companies fear that this also brings the risk of lawsuits and the revelation of trade secrets.
Under bills in Colorado, Connecticut and California, the company would not have to routinely submit impact assessments to the government. The responsibility would also largely fall on companies to disclose to the attorney general if they find discrimination – a government or independent organization would not test these AI systems for bias.
Unions and academics worry that an overreliance on corporate self-reporting will endanger the public’s or government’s ability to detect AI discrimination before it causes harm.
“It’s already difficult when you have such big companies with billions of dollars,” said Kjersten Forseth, who represents the AFL-CIO of Colorado, a federation of labor unions that opposes the Colorado bill. “Essentially you’re giving them an extra boot to pressure an employee or consumer.”
Technology companies say greater transparency will reveal trade secrets in a now hyper-competitive market. David Edmonson of TechNet, a bipartisan network of technology CEOs and senior executives that lobbies on AI bills, said in a statement that the organization is working with lawmakers to “ensure any legislation addresses the risks of AI while allowing innovation to flourish.”
The California Chamber of Commerce opposes that state’s bill, citing concerns that impact assessments could be made public in lawsuits.
Another contentious question is who can sue under the legislation; the bills generally limit that right to attorneys general and other prosecutors, not citizens.
After a provision in the California bill that allowed citizens to file a lawsuit was dropped, Workday, a financial and HR software company, supported the proposal. Workday argues that civil actions by citizens would leave decisions up to judges, many of whom are not technology experts, and could result in an inconsistent approach to regulation.
“We can’t prevent AI from being woven into our daily lives, so it’s clear that the government will have to step in at some point, but it also makes sense that the industry itself will want a good environment to thrive,” says Chandler Morse, vice president of public policy and corporate affairs at Workday.
Sorelle Friedler, a professor who focuses on AI biases at Haverford College, is pushing back.
“That’s generally the way American society asserts our rights, which is by filing a lawsuit,” Friedler said.
Sen. Maroney of Connecticut said he has been criticized in articles claiming that he and Rep. Giovanni Capriglione, R-Texas, introduced “industry-written bills,” despite all the money the industry is spending to lobby against the legislation.
Maroney pointed out that one industry group, the Consumer Technology Association, has taken out ads and built a website urging lawmakers to reject the legislation.
“I believe we are on the right track. We have worked with people from industry, academia and civil society,” he said.
“Everyone wants to feel safe, and we are creating regulations that enable safe and reliable AI,” he added.
_____
Associated Press reporters Trân Nguyễn in Sacramento, California, Becky Bohrer in Anchorage, Alaska, and Susan Haigh in Connecticut contributed to this report.
___
Bedayn is a staff member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.