The tiny changes in your voice that give away whether you are faking a sickie

Could uttering a single sentence reveal whether you have a serious health problem?

This may soon be possible thanks to artificial intelligence (AI) programs that promise to diagnose a range of conditions from Alzheimer’s and heart failure to depression and even the common cold – all by analyzing a speech sample.

Scientists around the world use computer software to analyze voices for early signs of illness that can be detected long before conventional symptoms emerge.

But it’s not without controversy, with concerns about misdiagnoses and fears that the technology could be misused and seriously violate our privacy.

This technological leap has been made possible by huge advances in the development of AI – powerful computing systems with human-like abilities to learn and solve problems.



Most recently, in March, researchers at the European Space Agency showed how their computerized speech analysis system could detect early signs of depression.

One of the scientists, Dr. Gábor Kiss, a computer engineer at the Budapest University of Technology and Economics, explains: ‘The speech of depressed patients tends to become monotonous and quieter. They pause more often. We teach those properties to the software.’

Using voice samples from 218 people, some healthy and others with depression, the team reported in the journal Frontiers in Psychiatry that the app identified depressed patients with up to 84 percent accuracy.
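For readers curious how the software ‘learns’, the three traits Dr. Kiss describes – pitch monotony, loudness and pausing – can each be reduced to a number. The short Python sketch below is a minimal illustration of that idea, not the Budapest team’s actual code; the librosa audio library and the silence threshold are assumptions.

```python
# A minimal sketch (not the Budapest team's software) of turning one
# voice recording into the three traits described above.
import numpy as np
import librosa  # assumed audio-analysis library

def voice_features(path):
    y, sr = librosa.load(path, sr=16000)

    # Monotony: a flat, depressed-sounding voice varies less in pitch,
    # so the spread of the fundamental frequency shrinks.
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=300, sr=sr)
    pitch_variability = np.nanstd(f0)

    # Quietness: average loudness, measured as root-mean-square energy.
    loudness = librosa.feature.rms(y=y).mean()

    # Pausing: the fraction of the recording that is silence
    # (the 30 dB threshold is an illustrative choice).
    speech = librosa.effects.split(y, top_db=30)
    voiced_samples = sum(end - start for start, end in speech)
    pause_ratio = 1 - voiced_samples / len(y)

    return [pitch_variability, loudness, pause_ratio]
```

Numbers like these, computed for each of the 218 voice samples, are what a statistical classifier is then trained to separate into ‘depressed’ and ‘healthy’ groups.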

Such a tool could help assess and support people in isolated places – such as scientists in Antarctica or astronauts – and could also be used in general practice, Dr. Kiss said.

Meanwhile, the AI chatbot ChatGPT promises to identify the early signs of Alzheimer’s disease, according to US researchers.

ChatGPT allows users to have human-like conversations with the computer program. It answers questions and is even claimed to help with tasks such as writing emails, essays, and computer code.

Researchers at Drexel University in Philadelphia gave ChatGPT voice recordings of 237 people, some healthy and some developing Alzheimer’s, and found it learned to detect signature cues in their speech, predicting early-stage dementia with 80 percent accuracy, the journal PLOS Digital Health reported in December.

“ChatGPT’s approach to language analysis makes it a promising candidate for identifying subtle speech features – such as specific types of hesitation, and mistakes in grammar and pronunciation – that can predict the onset of dementia,” says Felix Agbavor, a language expert who led the study.
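One standard recipe for this kind of study is to turn each transcribed speech sample into a numerical summary (an ‘embedding’) from the language model, then train a simple statistical classifier on those numbers. The Python sketch below illustrates that recipe with random stand-in data; it is a hypothetical mock-up rather than the Drexel team’s code, so its output will hover around chance rather than the 80 percent reported.

```python
# Hypothetical mock-up of an embedding-plus-classifier pipeline.
# Random numbers stand in for real language-model embeddings and diagnoses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 237 speakers (as in the study), each represented by a 256-dimensional
# embedding of their transcribed speech; labels: 1 = early dementia.
X = rng.normal(size=(237, 256))
y = rng.integers(0, 2, size=237)

# A plain linear classifier on top of the embeddings.
model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```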

The researchers suggested that this approach could bypass the conventional process of reviewing a patient’s medical history and subjecting them to a series of tests.

Meanwhile, scientists are developing another AI system to detect early signs of Parkinson’s disease, the second most common neurodegenerative disease after Alzheimer’s.

Rytis Maskeliunas, a professor of computer science at Kaunas University of Technology in Lithuania, who is leading this work, said: ‘Research shows that a large percentage of people with Parkinson’s have speech disorders such as soft, monotonous and hoarse voices, as well as uncertain articulation.

“This may be hard for people to hear in the early stages of the disease, but it’s what our approach is looking for.”

In November, the researchers reported creating an AI program capable of detecting 80 percent of Parkinson’s cases from voice samples. They now plan to develop a phone app for detecting early Parkinson’s – which in turn would enable earlier treatment.

Similar work is being done to detect heart failure, which affects more than 900,000 people in the UK.

An Israeli company, Cordio Medical, sampled the voices of more than 250 heart failure patients and developed an AI system, HearO, that can predict whether the condition is about to get worse and needs urgent treatment.

Tamir Tal, head of Cordio Medical, says the HearO system detects changes in the voice – such as signs of shortness of breath and altered speech patterns – and was 80 percent successful in predicting worsening heart failure.

“Doctors cannot evaluate changes in patients’ voices on a daily basis, and the human ear is not sophisticated enough to detect early changes in voice or speech,” he says.

Compared to serious illnesses, voice-scanning AI’s ability to diagnose a cold seems trivial. But it exposes one of the dangers that automated voice diagnosis can pose.

An AI detection system for cold symptoms is being developed. Researchers in India trained an algorithm to recognize a cold infection using recordings of the voices of 630 people, 111 of whom had a cold.

The voices of people with a cold are said to have altered harmonics – tremors in the higher tones – that the algorithm can detect at least 70 percent of the time, the journal Biomedical Signal Processing and Control reported in February.
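The underlying idea is straightforward: a voice consists of a fundamental pitch plus harmonics at whole-number multiples of it, and software can measure how strong each harmonic is. The toy Python sketch below demonstrates the principle on a synthetic 120 Hz tone; it illustrates the general technique, not the Indian team’s published algorithm.

```python
# Toy illustration of measuring harmonic strength in a voice-like signal
# (the general principle, not the published cold-detection algorithm).
import numpy as np

def harmonic_profile(signal, sr, f0, n_harmonics=5):
    """Amplitude of each harmonic of f0, relative to the fundamental."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
    amps = [spectrum[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1)]
    return np.array(amps) / amps[0]

# A synthetic one-second 'voice' at 120 Hz whose harmonics fade with height.
sr = 16000
t = np.arange(sr) / sr
tone = sum(0.5 ** k * np.sin(2 * np.pi * 120 * k * t) for k in range(1, 6))
print(harmonic_profile(tone, sr, f0=120))
```

A classifier trained on harmonic profiles like these, from both healthy and cold-affected voices, would then look for the shifted pattern the researchers describe.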

There is concern that this could be exploited by employers to check whether employees’ symptoms are real when they call in sick – and that we could be diagnosed with diseases without our consent, with that information then used against us.

As Professor Jonathan Ives, deputy director of the Center for Ethics in Medicine at the University of Bristol, told Good Health: ‘If employers were to use voice tests on us without permission, it certainly seems unethical.’

He is also concerned that insurance companies could use AI diagnosis apps to listen to people’s voices, with or without their consent, and assess their risk of future illness before committing to a mortgage or life insurance policy.

AI voice diagnostic apps can also be dangerously inaccurate.

Most of the systems under development reported here offer about 80 percent accuracy, meaning roughly one in five people would be misdiagnosed: either told they have a condition they don’t have, or declared clear of a disease they actually have.
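That ‘one in five’ figure, and how the errors split between false alarms and missed cases, is easy to make concrete. The sketch below assumes the 80 percent accuracy applies equally to sick and healthy speakers, and picks an illustrative 10 percent disease prevalence; both numbers are simplifying assumptions.

```python
# Illustrative arithmetic only: 80 percent accuracy for both groups and a
# made-up 10 percent prevalence among 1,000 people screened.
people = 1000
prevalence = 0.10
accuracy = 0.80

sick = people * prevalence               # 100 actually have the disease
healthy = people - sick                  # 900 do not

missed = sick * (1 - accuracy)           # 20 declared clear but actually ill
false_alarms = healthy * (1 - accuracy)  # 180 told they are ill but are not

print(missed, false_alarms, (missed + false_alarms) / people)
# -> 20.0 180.0 0.2  (one person in five gets a wrong answer)
```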

As Professor Peter Bannister, Honorary Chair of the University of Birmingham Center for Regulatory Science and Innovation, told Good Health: ‘An accuracy of 80 percent is not enough to pass the conventional scientific thresholds for acceptance.’

What’s more, AI diagnostic apps tend to become even less accurate once they are taken out of the lab and tested on large populations.

“As you start applying the technology to more and more people, they will present a much more varied set of characteristics,” explains Professor Bannister, who is also the general manager of Romilly Life Sciences, which helps companies develop AI.

“For example, it has been seen before with AI systems for detecting skin cancer lesions that were trained only on white Caucasian patients and got confused when they were used on people of other ethnicities.”

There are other ways the AI voice apps could become even less accurate, he adds.

“The type of people on whom this technology would be used, such as the elderly and infirm, often suffer from more than one condition.

“The speech abnormalities that the technology detects could be a side effect of medications or existing conditions, rather than a new disease.”

Indeed, research by the Center for Voice at Northwestern University in Illinois has previously shown that medications can affect your voice by drying out the mucous membrane that covers the vocal cords.

And Professor Bannister says these apps’ diagnostic alerts should be double-checked by human clinicians and with medical scans such as MRIs.

‘The NHS has already seen a 30 percent increase in data from scans in recent years, but also a 30 percent reduction in the number of radiologists checking that data. Are we ready for that?’

Maria Liakata, a professor of natural language processing at Queen Mary University of London, points to another risk.

“Changes in traits such as fluency, slurred language and speed of speech have all been linked to mental illness,” she told Good Health. “But to use this information effectively, you need to have good data about the individual’s normal speech.”

She suggests such AI apps “could help us monitor people’s already known health problems rather than diagnose them.”

Experts agree that AI voice-diagnosis apps may have potential, but they stress that the technology still needs to develop much further – and that society first needs to agree on how to use it safely.
