Republicans want changes from HHS on AI assurance labs

Some members of Congress are asking the U.S. Department of Health and Human Services to abandon a yearslong effort to set up government-backed artificial intelligence assurance labs and create an AI assurance lab model in partnership with industry.

“We write to express our deep concerns about the potential role of assurance laboratories in the regulatory oversight of artificial intelligence technologies, and how this will lead to regulation and stifle innovation,” Reps. Dan Crenshaw, R-Texas; Brett Guthrie, R-Kentucky; Jay Obernolte, R-California; and Dr. Mariannette Miller-Meeks, R-Iowa, said in a letter addressed to Micky Tripathi, acting chief AI officer at HHS.

WHY IT’S IMPORTANT

With deregulation a priority for the new Trump administration in 2025, Republicans say they are concerned about how oversight of AI in healthcare will take shape.

Writing to Tripathi, who also serves as assistant secretary for technology policy and national coordinator for health IT, the representatives sought clarification on the overarching objectives of the agency’s reorganization, according to a story in Politico on Monday.

As part of a larger technology restructuring at HHS, the new ASTP – formerly the Office of the National Coordinator for Health Information Technology – announced in July that it would take on increased responsibilities, including in healthcare AI, along with new staff and more funding.

The letter also questions ASTP/ONC’s regulatory authorities and its role in the overall healthcare system through the establishment of assurance laboratories to complement the U.S. Food & Drug Administration’s assessment of AI tools, and suggests that there is a significant conflict of interest.

“We are particularly concerned about the potential creation of fee-based assurance labs, which would consist of competing companies,” the representatives said, adding that larger, established technology companies could gain an unfair competitive advantage in the sector and could have a negative impact on innovation.

The representatives asked 11 questions and requested answers by December 20.

A spokesperson for ASTP told Healthcare IT News by email that the agency cannot comment on the letter at this time. CHAI did not respond to our request for comment, but this story will be updated if one is provided.

THE BIG TREND

One of the signatories of the letter, Representative Miller-Meeks, had previously asked the FDA’s then-director of the Center for Devices and Radiological Health about CHAI and its members.

At a House Energy and Commerce subcommittee hearing on the regulation of drugs, biologics and medical devices, Guthrie, the subcommittee’s chair, said during opening remarks that several regulatory missteps have created “uncertainty among innovators.”

Miller-Meeks specifically asked whether the FDA would outsource certification to the coalition. She noted that Google and Microsoft are founding members, while Mayo Clinic, which says it has more than 200 AI deployments, employs some of the coalition’s leaders.

“It does not pass the smell test,” she said at the hearing, adding that it shows “clear signs of attempts at regulatory control.”

CHAI, which unveiled standards for AI transparency in healthcare aligned with ASTP’s requirements for certifying health IT, said a highly anticipated AI “nutrition label” would be coming soon.

Dr. John Halamka, president of the Mayo Clinic Platform, discussed at HIMSS24 earlier this year the substantial potential benefits and real potential harms that could result from predictive and generative AI used in clinical settings.

“Mayo has an assurance lab and we test commercial algorithms and proprietary algorithms,” he said in March.

“And what you do is identify the biases and then mitigate them. These can be mitigated by retraining the algorithm on different types of data, or by simply understanding that the algorithm can’t be completely fair to all patients. You just have to be extremely careful where and how you use it.”

Since its founding in 2021, CHAI says it has worked to advance AI transparency, creating guidelines and guardrails to address algorithmic bias in healthcare, taking government concerns into account, building on the White House’s AI Bill of Rights and NIST’s AI Risk Management Framework, and supporting AI assurance as outlined in President Joe Biden’s executive order on AI, which directs HHS to establish a safeguards program.

ON THE RECORD

“The ongoing dialogue around AI in healthcare must take into account the differing powers and duties of different agencies and offices to avoid overlapping responsibilities, which could lead to confusion among regulated entities,” the four Republican members of Congress said in their letter.

Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.
