GPs are already used to patients who turn to ‘Dr Google’ for a diagnosis.
But Google has now developed AI that could perform just as well as a doctor in answering questions about ailments.
The tech giant reports in the journal Nature that its latest model, which processes language in a similar way to ChatGPT, can answer a range of medical questions with 92.6 percent accuracy.
That matches the performance of nine doctors from the UK, US and India, who answered the same 80 questions.
Google researchers say the technology does not threaten GP jobs.
But it does provide detailed and accurate answers to questions such as ‘Can incontinence be cured?’ and which foods to avoid if you have rosacea.
That could lead to it being used for medical helplines such as NHS 111 in the future, the researchers suggest.
Dr Vivek Natarajan, senior author of a study on the AI program, called Med-PaLM, said: ‘This program is something we want doctors to trust.
‘When people turn to the internet for medical information they face information overload, so they may fixate on the worst-case scenario out of 10 possible diagnoses and end up with a lot of unnecessary stress.
‘This language model will instead provide a brief expert judgment that is unbiased, cites its sources and expresses any uncertainty.
‘It could be used for triage, to understand the urgency of people’s conditions and to prioritise them for medical treatment.
‘We need this to help when we are short of skilled doctors, and it will free them up to do their jobs.’
The artificial intelligence program, Med-PaLM, was adapted from a program called PaLM, which specialises in language processing but was not specifically trained on health.
Researchers carefully trained the AI to provide high-quality medical information and to communicate uncertainty where there are gaps in its knowledge.
The program was trained on doctors’ answers to questions, so that it could reason properly and avoid giving information that could harm a patient.
It had to meet a benchmark called MultiMedQA, which combines six datasets covering medical topics, scientific research and consumer health questions, as well as HealthSearchQA, a dataset of 3,173 medical questions people have searched for online.
Med-PaLM gave answers that could put a patient at risk in just 5.8 percent of cases, the study reports.
That is comparable to the proportion of potentially harmful answers from the nine doctors, which stood at 6.5 percent.
There is still a risk of ‘hallucinations’ within the AI, meaning it can make up answers without any data behind them, for reasons engineers do not fully understand, and the technology is still being tested.
But Dr Natarajan said: ‘This technology can answer the kind of questions doctors face in medical exams, which are very difficult.
‘It’s really exciting, and doctors do not have to worry about AI taking over their jobs because it just gives them more time to spend with patients instead.’
However, James Davenport, Hebron and Medlock Professor of Information Technology at the University of Bath, said: ‘The press release is accurate as far as it goes, and describes how this paper advances our understanding of how Large Language Models (LLMs) can be used to answer medical questions.
‘But there is an elephant in the room: the difference between “medical questions” and actual medicine.
‘Practising medicine is not about answering medical questions. If it were purely about answering medical questions, we wouldn’t need teaching hospitals and doctors wouldn’t need years of training after their academic studies.’