Can government and industry solve racial bias in AI?

The promise of artificial intelligence in healthcare is enormous – with algorithms that can find answers to big questions in big data, and automation helping doctors in so many other ways.

On the other hand, according to the HHS Office for Civil Rights, there is "example after example" of AI and machine learning models trained on bad or biased data, resulting in discrimination that can make them ineffective or even unsafe for patients.

The federal government and the healthcare IT industry are both motivated to solve AI's bias problem and prove that it can be used safely. But can they 'get it right'?

That's the question moderator Dan Gorenstein, host of the Tradeoffs podcast, posed Friday morning at the annual meeting of the Office of the National Coordinator for Health IT. Answering it, he said, is imperative.

While eradicating racial bias in algorithms remains unsettled territory, the administration is rolling out action after action on AI, from White House-orchestrated ethics pledges for healthcare AI to regulatory requirements such as ONC's new algorithm transparency rules.

Federal agencies are also actively participating in industry coalitions and forming task forces to study the use of analytics, clinical decision support, and machine learning in healthcare.

FDA driving 'rules of the road'

It takes a lot of time and money to demonstrate performance in multiple subgroups and get an AI product through the Food & Drug Administration, which can frustrate developers.

But just as every financial company must go through tightly controlled banking certification processes, said Troy Tazbaz, director of digital health at the FDA, the government must work with the healthcare industry to develop a similar approach to artificial intelligence.

“The government cannot regulate this alone because it is developing at a pace that requires very, very clear engagement between the public and private sectors,” he said.

Tazbaz said the government and industry are working to agree on a range of objectives, such as AI security controls and product lifecycle management.

When asked what the FDA could do better in bringing products to market, Suchi Saria – founder, CEO and chief scientific officer of Bayesian Health, and founder and director of research and technical strategy at the Malone Center for Engineering in Healthcare at Johns Hopkins University – said she values rigorous validation processes because they make AI products better.

However, she would like to see the FDA approval timeline shortened to two or three months, which she said she believes can be done without sacrificing quality.

Tazbaz acknowledged that while procedural improvements can be made – external auditors, for one, are a possible consideration – it's not really possible to define a set timeline.

“There is no one-size-fits-all process,” he said.

Tazbaz added that while the FDA is optimistic and excited about how AI can solve so many challenges in healthcare, the risks associated with integrating AI products into a hospital are far too great not to be addressed as pragmatically as possible.

Algorithms are subject to data drift, so if the production environment is a healthcare system, discipline must be maintained.
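To make that concrete: the kind of discipline Tazbaz describes often takes the form of drift monitoring, comparing the input distributions a deployed model sees in production against those it saw during development. Below is a minimal sketch of one common technique, the population stability index; the feature, values and 0.2 threshold are hypothetical illustrations, not an FDA-prescribed method.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's development-time distribution to its production
    distribution; larger values indicate more drift."""
    # Bin edges come from the development (expected) distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0)
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical example: a lab-value feature shifts after deployment
rng = np.random.default_rng(0)
training_values = rng.normal(100, 15, 5000)    # distribution at model development
production_values = rng.normal(110, 20, 5000)  # distribution after go-live
psi = population_stability_index(training_values, production_values)
if psi > 0.2:  # 0.2 is a common rule-of-thumb threshold, not a regulatory one
    print(f"Feature drift detected (PSI={psi:.2f}); trigger model review")
```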

“If you design something based on the criticality of the industry you're developing for, your processes, your development discipline, have to match that criticality,” he said.

Tazbaz said government and industry need to align based on the biggest needs: where technology can be used to solve problems, and from there “drive the discipline.”

“We need to be open and honest about where we start,” he said.

If the operational discipline is in place, “then you can prioritize where you want to integrate this technology and in what order,” he explained.

Saria noted that the AI blueprint created by the Coalition for Health AI has been followed by work to build assurance labs intended to accelerate the delivery of more products to the real world.

Knowing 'the full context'

Ricky Sahu, founder of GenHealth.ai and 1up.health, asked Tazbaz and Saria for their thoughts on how to be prescriptive about when an AI model is biased and when it is addressing a problem specific to a particular ethnicity.

“It's actually very difficult to separate racial biases from the underlying demographic characteristics and predispositions of different races and people,” he said.

What's needed, Saria responded, is “integrating a lot of knowledge and context that goes way beyond the data” – medical knowledge about a patient population, best practices, standard of care and so on.

“And this is another reason why, when we build solutions, any monitoring and any coordination – all of this reasoning – has to be really close to the solution,” she said.

“We need to know the full context to be able to reason about it.”

Statisticians translating for doctors

With 31 source attributes, ONC aims to capture categories of AI in a product label breakdown – despite the lack of industry consensus on the best way to represent those categories.

The functionality of an AI nutrition label “has to be such that the customer – let's say the provider organization, Oracle's customer – can fill that out,” explained Micky Tripathi, national coordinator for health IT.

With the labels, ONC is not advising whether an organization should use the AI or not, he said.

“We say give that information to the provider organization and let them decide,” Tripathi said, noting that the information should be available to the board of directors, but it is not required to be available to the frontline user.

“We start with a functional approach to a certification, and as the industry starts to move toward the more standardized way of doing this, then we convert that into a specific technical standard.”

Oracle, for example, is putting together an AI “nutrition label” and exploring how to demonstrate fairness as part of its development work toward ONC certification.
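As a rough illustration of what a machine-readable label entry might look like from the provider organization's side, here is a minimal sketch; the field names and example values are hypothetical and do not correspond to ONC's actual 31 source attributes.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmNutritionLabel:
    """Illustrative sketch of an AI 'nutrition label' entry.
    Field names are hypothetical, not ONC's defined source attributes."""
    name: str
    intended_use: str
    training_data_sources: list[str]
    demographics_represented: dict[str, float]  # share of training data by group
    external_validation_sites: int
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry a provider organization might review
sepsis_label = AlgorithmNutritionLabel(
    name="Example sepsis risk model",
    intended_use="Early warning for adult inpatients",
    training_data_sources=["EHR data, 2018-2022, three academic medical centers"],
    demographics_represented={"Black": 0.22, "Hispanic": 0.14, "White": 0.55, "Other": 0.09},
    external_validation_sites=4,
    known_limitations=["Not validated for pediatric patients"],
)
```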

By working with industry, ONC can reach a consensus that moves the AI sector forward.

“The best standards come from the bottom up,” said Tripathi.

Gorenstein asked Dr. James Ellzy, vice president, federal health, and market leader at Oracle Health, what doctors want from the nutrition label.

“Something I can digest in seconds,” he said.

Ellzy explained that with so little time with patients for discussion and a physical exam, “there may be only five minutes left to figure out what to do going forward.”

“I don't have time to dig in and read a long story about this population. I want you to actually tell me – based on the fact that you can see what patient I have, and that the 97% applies to that patient – this is what you need to do,” he said.

A reckoning for AI in healthcare?

The COVID-19 pandemic put a spotlight on crisis standards of care, said Jenny Ma, senior advisor at the HHS Office for Civil Rights.

“In particular, we saw an incredible increase in ageism and disability discrimination with very scarce resources being allocated unfairly and in a discriminatory manner,” she said.

“It was a very surprising experience to see firsthand how ill-equipped not only Duke, but many health systems across the country, were to care for low-income, marginalized populations,” added Dr. Mark Sendak of the Duke Institute for Health Innovation.

Although OCR is an enforcement agency, it did not take punitive action during the public health emergency, Ma noted.

“We worked with states to figure out how to develop fair policies that wouldn't discriminate, and then issued guidance accordingly,” she said.

At OCR, however, “we see all kinds of discrimination happening within the AI space and elsewhere,” she said.

She noted that Section 1557 of the Affordable Care Act, the nondiscrimination statute, is not intended to be set in stone; it allows additional regulations to be created as needed to address discrimination.

OCR has received 50,000 comments on proposed revisions to Section 1557, which are still under review, she noted.

Sendak said enforcement of non-discrimination in AI is reasonable.

“I'm actually very happy that this is happening and that there is enforcement,” he said.

As part of Duke's Health AI Partnership, Sendak said he personally conducted most of the 90 interviews with health systems.

“I asked people how do you judge bias or inequality? And everyone's answer was different,” he said.

When bias is discovered in an algorithm, “it forces a very uncomfortable internal dialogue with health system leaders to recognize what's in the data, and the reason it's in the data is because it happened in practice,” he said.

“In many ways, dealing with these questions compels a reckoning that I think has implications beyond just AI.”
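One common way such findings surface is a subgroup audit: computing the same performance metrics separately for each demographic group in a validation set and comparing them. The sketch below assumes a pandas DataFrame with hypothetical column names; it is an illustration, not the Health AI Partnership's method.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_audit(df, group_col, label_col, score_col):
    """Report discrimination (AUROC) and alert rate per demographic group
    so performance gaps between groups are visible side by side."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "auroc": roc_auc_score(sub[label_col], sub[score_col]),
            "alert_rate": (sub[score_col] >= 0.5).mean(),  # hypothetical alert threshold
        })
    return pd.DataFrame(rows)

# df would hold model scores, observed outcomes and a demographic column
# from a validation set, e.g.:
# print(subgroup_audit(df, "race_ethnicity", "outcome", "risk_score"))
```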

When the FDA looks at the developers' AI ingredients and ONC makes that ingredient list available to hospitals and healthcare providers, OCR is trying to say, “Hey, if you pick that product off the shelf and you look at that list, you're also an active participant,” said Ma.

Sendak said one of his biggest concerns is the need for technical assistance, noting that several organizations with fewer resources had to withdraw from the Health AI Partnership because they could not make time for interviews or participation in workshops.

“Like it or not, the health care systems that have the hardest time assessing the potential for bias or discrimination have the lowest resources,” he said.

“They are the most likely to rely on external procurement for AI adoption,” he added. “And they are the most likely to stumble onto a landmine they are not aware of.”

“These regulations must be accompanied by on-the-ground support for healthcare organizations,” Sendak said to applause.

“There are some providers who may use this technology without knowing what's in it and get caught with a complaint from their patients,” Ma acknowledged.

“We are absolutely willing to work with these providers,” Ma said, but OCR will look to ensure providers properly train their staff on AI bias, take an active role in AI implementation, and establish and maintain audit mechanisms.

The AI partnership could look different in the next year or two, Ma said.

“I think there is alignment across the ecosystem as regulators and the regulated parties continue to determine how we avoid bias and discrimination,” she said.

Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.
