A roadmap for designing more inclusive healthcare chatbots

Researchers from the University of Westminster, the Kinsey Institute at Indiana University and Positive East looked to resources from the UK National Health Service and the World Health Organization to develop their community-driven approach to increasing inclusivity, acceptability and engagement with artificial intelligence chatbots.

WHY IT MATTERS

With the goal of identifying activities that reduce bias in conversational AI and make its design and implementation fairer, researchers looked at several frameworks for evaluating and implementing new healthcare technologies, including the Consolidated Framework for Implementation Research, updated in 2022.

When they discovered that there were no guidelines for addressing the unique challenges associated with conversational AI technologies – data security and management, ethical concerns and the need for diverse training datasets – they followed their content analysis by building a conceptual framework and consulting stakeholders.

The researchers said they interviewed 33 key stakeholders from diverse backgrounds, including 10 community members as well as physicians, developers and mental health nurses with expertise in reproductive health, sexual health, AI and robotics, and clinical safety.

They used the framework method to analyze qualitative data from the interviews and develop their 10-step roadmap, "Achieving health equity through conversational AI: A roadmap for design and implementation of inclusive chatbots in healthcare," published Thursday in PLOS Digital Health.

The report walks through 10 phases of AI chatbot development, starting with concept and planning, continuing through safety measures, preliminary testing, governance for healthcare integration, and audits and maintenance, and ending with termination.

The inclusive approach is crucial for reducing bias, promoting trust and maximizing outcomes for marginalized populations, according to Dr Tomasz Nadarzynski, who led the research at the University of Westminster.

“The development of AI tools must go beyond simply ensuring effectiveness and safety standards,” he said in a statement.

“Conversational AI should be designed to address specific diseases or conditions that disproportionately affect minority groups due to factors such as age, ethnicity, religion, gender, gender identity, sexual orientation, socioeconomic status, or disability,” the researchers said.

Stakeholders emphasized the importance of identifying public health disparities that can be reduced using conversational AI. They said this should be done from the beginning, as part of initial needs assessments, before the tools are created.

“Designers must define and set behavioral and health outcomes that conversational AI aims to influence or change,” researchers said.

Stakeholders also said that conversational AI chatbots should be integrated into healthcare environments, designed with diverse input from the communities they aim to serve, and made highly visible. The tools must deliver accurate information with appropriate confidence, protect data security and be tested by patient groups and diverse communities.

Health AI chatbots should also be regularly updated with the latest clinical, medical and technical developments, monitored – including user feedback – and evaluated for their impact on healthcare services and staff workloads, according to the study.

Stakeholders also said that chatbots used to expand access to healthcare should be integrated into existing care pathways – and "not be designed to function as a standalone service" – and may require customization to align with local needs.

THE BIG TREND

It was predicted that rolling out money-saving AI chatbots in healthcare would be a crawl-walk-run endeavor, with simpler tasks moved to chatbots first until the technology is advanced enough to handle more complex ones.

Since ChatGPT made conversational AI available to every industry in late 2022, healthcare IT developers have ramped up testing to surface information, improve communications and streamline administrative tasks.

Last year, UNC Health tested an internal generative AI chatbot tool with a small group of physicians and administrators, aiming to help staff spend more time with patients and less time in front of a computer. Many other provider organizations are now using generative AI in their operations.

AI is also being used in patient discharge planning and post-discharge care to help reduce hospital readmissions and address social inequities in healthcare.

But trust is crucial for AI chatbots in healthcare, healthcare leaders say, and the tools must be rigorously developed.

“You’ve got to have a human somewhere,” said Kathleen Mazza, clinical informatics consultant at Northwell Health, during a panel session at the HIMSS24 Virtual Care Forum.

“You don’t sell shoes to people online. This is healthcare.”

ON THE RECORD

“We have a responsibility to harness the power of ‘AI for good’ and focus it on addressing pressing societal challenges such as health inequality,” Nadarzynski said in a statement.

“To do this, we need a paradigm shift in how AI is created – one that emphasizes co-production with diverse communities throughout its lifecycle, from design to implementation.”

Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.