ChatGPT makes up fake data about cancer, doctors warn


Doctors warn against using ChatGPT for medical advice after a study found that it fabricated health data when asked for information about cancer.

The AI chatbot answered one in 10 breast cancer screening questions incorrectly, and the correct answers weren’t as “elaborate” as those found through a simple Google search.

Researchers said the AI chatbot even used bogus journal articles to support its claims in some cases.

It comes amid warnings that users should be cautious with the software, as it has a tendency to ‘hallucinate’ – in other words, to make things up.


Researchers at the University of Maryland School of Medicine asked ChatGPT to answer 25 questions regarding advice on getting screened for breast cancer.

Because the chatbot was known to vary its response, each question was asked three times. The results were then analyzed by three radiologists trained in mammography.

The “vast majority” – 88 percent – of the answers were appropriate and easy to understand. However, some of the answers were “inaccurate or even fictitious,” they warned.

For example, one answer was based on outdated information. It recommended postponing a mammogram for four to six weeks after getting a Covid-19 vaccination, but this advice was changed more than a year ago to recommend that women not wait.

ChatGPT also gave inconsistent answers to questions about breast cancer risk and where to get a mammogram. The study found that the answers “varied considerably” each time the same question was asked.

Study co-author Dr. Paul Yi said: “We have seen in our experience that ChatGPT sometimes fabricates bogus journal articles or health consortia to support its claims.

“Consumers should be aware that these are new, unproven technologies and should still rely on their doctor rather than ChatGPT for advice.”

The findings – published in the journal Radiology – also showed that a simple Google search still turned up a more comprehensive answer.

Lead author Dr. Hana Haver said ChatGPT relied on only one set of recommendations from a single organization, the American Cancer Society, and did not include differing recommendations from the Centers for Disease Control and Prevention or the US Preventive Services Task Force.

The launch of ChatGPT late last year created a wave of demand for the technology, with millions of users now turning to the tool every day for everything from writing college essays to seeking health advice.

Microsoft has invested heavily in the software behind ChatGPT, integrating it into its Bing search engine and Office 365, including Word, PowerPoint, and Excel.

But the tech giant has admitted it can still make mistakes.

AI experts call the phenomenon “hallucination”: when a chatbot cannot find the answer in the data it was trained on, it confidently responds with a made-up answer that it deems plausible.

It then repeatedly insists on the wrong answer without any awareness that it is a product of its own imagination.

However, Dr. Yi suggested that the results were generally positive, with ChatGPT correctly answering questions about breast cancer symptoms, who is at risk, and questions about the cost, age and frequency recommendations for mammograms.

He said the number of correct answers was “quite astounding,” with the “additional benefit of information being summarized in an easily digestible form that consumers can easily understand.”

More than a thousand academics, experts and tech industry bosses recently called for an immediate pause in the “dangerous” “arms race” to launch the latest AI.

They warned that the battle between tech companies to develop increasingly powerful digital minds is “getting out of hand” and poses “serious risks to society and humanity.”