Warning over UK use of unregulated AI chatbots to create social care plans

Britain’s struggling carers need all the help they can get. But that shouldn’t include the use of unregulated AI bots, according to researchers who say the AI revolution in social care needs a tough ethical edge.

A pilot study by academics at the University of Oxford found that some care providers were using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care.

That poses a potential risk to patient confidentiality, said Dr Caroline Green, an early-career researcher at the Institute for Ethics in AI in Oxford, who surveyed care organisations for the study.

“If you put any type of personal data into a generative AI chatbot, that data is used to train the language model,” Green said. “That personal data could then be regenerated and disclosed to someone else.”

She said care workers could act on incorrect or biased information and inadvertently cause harm, and that an AI-generated care plan could be substandard.

But there were also potential benefits of AI, Green added. “It could help ease the administrative burden and ensure that people can review their care plans more often. I wouldn’t encourage anyone to do that at this point, but there are organisations working on creating apps and websites that can do just that.”

Technology based on large language models is already being used by health and care organisations. PainChek is a phone app that uses AI-trained facial recognition to identify whether someone who cannot speak is in pain by detecting small muscle twitches. Oxevision, a system used by half of the NHS’s mental health trusts, uses infrared cameras installed in seclusion rooms – for potentially violent patients with severe dementia or acute psychiatric needs – to monitor whether they are at risk of falling, how much sleep they are getting and other activity levels.

Earlier-stage projects include Sentai, a care monitoring system that uses Amazon’s Alexa speakers to remind people without 24-hour care to take medications and allow family members elsewhere to check in on them.

The Bristol Robotics Lab is developing a device for people with memory problems that uses detectors to turn off the gas supply if a hob is left on, according to George MacGinnis, challenge director for healthy ageing at Innovate UK.

“Historically, that would mean a call from a gas engineer to make sure everything was safe,” MacGinnis said. “Bristol is working with disability charities to develop a system that allows people to do this safely themselves.

“We have also funded a circadian lighting system that adapts to people and helps them regain their circadian rhythm, which is one of the things that is lost in dementia.”

While people working in the creative industries worry about the possibility of being replaced by AI, social care has around 1.6 million workers and 152,000 vacancies, as well as 5.7 million unpaid carers looking after relatives, friends or neighbours.

“People see AI in a binary way: it replaces a worker, or you continue as we are now,” said Lionel Tarassenko, professor of engineering science and president of Reuben College, Oxford. “That is not the case at all. It is about using these tools to bring people with a lower level of experience up to the same level as someone with a great deal of expertise.

“I was involved in the care of my father, who passed away four months ago at the age of 88. We had a live-in carer. When we took over at the weekend, my sister and I were actually caring for someone we loved and knew well who had dementia, but we didn’t have the same skill level as live-in carers. So with these tools we could have reached a level comparable to that of a trained, experienced carer.”

However, some care managers fear that using AI technology could put them on the wrong side of the regulator. Mark Topps, who works in social care and co-hosts The Caring View podcast, said people working in social care were worried that by using the technology they could inadvertently break Care Quality Commission rules and lose their registration.

“Until the regulator releases guidance, many organizations will do nothing because of the backlash if they get it wrong,” he said.

Last month, 30 social care organisations, including the National Care Association, Skills for Care, Adass and Scottish Care, met at Reuben College to discuss how generative AI can be used responsibly. Green, who convened the meeting, said they planned to produce a good practice guide within six months and hoped to work with the CQC and the Department of Health and Social Care.

“We want to have guidelines enforceable by the DHSC that define what responsible use of generative AI in social care actually means,” she said.