One in five GPs uses AI such as ChatGPT for daily tasks, research shows

A fifth of GPs are using artificial intelligence (AI) tools such as ChatGPT to help with tasks including writing letters for their patients after appointments, a survey has found.

The survey, published in the journal BMJ Health and Care Informatics, polled 1,006 GPs. They were asked whether they had ever used any form of AI chatbot in their clinical practice, such as ChatGPT, Bing AI or Google’s Gemini, and then what they used these tools for.

One in five respondents said they had used generative AI tools in their clinical practice. Nearly a third (29%) of them said they had used the tools to generate documentation following patient appointments, while 28% said they had used the tools to suggest an alternative diagnosis.

A quarter of respondents said they had used the AI tools to suggest treatment options for their patients. These AI tools, such as ChatGPT, work by generating a written response to a question posed to the software.

The researchers said the findings showed that “GPs can derive value from these tools, particularly in administrative tasks and to support clinical reasoning”.

However, the researchers questioned whether the use of these AI tools could cause harm and undermine patient privacy, “as it is not clear how the internet companies behind generative AI use the information they collect”.

They added: “While these chatbots are increasingly the target of regulatory efforts, it remains unclear how legislation will practically interact with these tools in clinical practice.”

According to Dr Ellie Mein, medico-legal adviser at the Medical Defence Union, the use of AI by GPs could lead to problems such as inaccuracy and breaches of patient confidentiality.

“This is an interesting piece of research and resonates with our own experience of advising MDU members,” Mein said. “It’s natural that healthcare professionals want to find ways to be smarter about the pressures they face. In addition to the applications mentioned in the BMJ article, we’ve found that some doctors are turning to AI programs to draft complaint responses for them. We’ve alerted MDU members to the issues this raises, including inaccuracy and patient confidentiality. There are also data protection considerations.”

She added: “When dealing with patient complaints, AI responses may sound plausible, but they can contain inaccuracies and refer to incorrect guidelines that are difficult to spot when embedded in highly eloquent passages of text. It is vital that clinicians use AI ethically and adhere to relevant guidelines and regulations. This is clearly an evolving area and we agree with the authors that current and future clinicians need to be more aware of the benefits and risks of using AI in the workplace.”