New AI Uses Speech Patterns To Predict Alzheimer’s Disease With Astonishing Accuracy
By analyzing speech patterns, a new AI model can say with a high degree of accuracy whether a person with mild cognitive impairment will progress to Alzheimer's-associated dementia within six years.

Now, Boston University researchers say they have designed a promising new model that does just that. "We wanted to predict what would happen in the next six years—and we found we can reasonably make that prediction with relatively good confidence and accuracy," says Ioannis (Yannis) Paschalidis, director of the BU Rafik B. Hariri Institute for Computing and Computational Science & Engineering. "It shows the power of AI." The multidisciplinary team of engineers, neurobiologists, and computer and data scientists published their findings in Alzheimer's & Dementia, the journal of the Alzheimer's Association.
“We hope, as everyone does, that there will be more and more Alzheimer’s treatments made available,” says Paschalidis, a BU College of Engineering Distinguished Professor of Engineering and founding member of the Faculty of Computing & Data Sciences. “If you can predict what will happen, you have more of an opportunity and time window to intervene with drugs, and at least try to maintain the stability of the condition and prevent the transition to more severe forms of dementia.”
Calculating the Probability of Alzheimer’s Disease
To train and build their new model, the researchers turned to data from one of the nation’s oldest and longest-running studies—the BU-led Framingham Heart Study. Although the Framingham study is focused on cardiovascular health, participants showing signs of cognitive decline undergo regular neuropsychological tests and interviews, producing a wealth of longitudinal information on their cognitive well-being.
Paschalidis and his colleagues were given audio recordings of initial interviews with 166 people between the ages of 63 and 97, all diagnosed with mild cognitive impairment—76 who would remain stable for the next six years and 90 whose cognitive function would progressively decline. They then used a combination of speech recognition tools—similar to the programs powering your smart speaker—and machine learning to train a model to spot connections between speech, demographics, diagnosis, and disease progression. After training it on a subset of the study population, they tested its predictive prowess on the rest of the participants.
“We combine the information we extract from the audio recordings with some very basic demographics—age, gender, and so on—and we get the final score,” says Paschalidis. “You can think of the score as the likelihood, the probability, that someone will remain stable or transition to dementia. It had significant predictive ability.”
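The article doesn't specify the team's actual model, but the approach it describes can be illustrated with a minimal sketch: extract features from interview transcripts, append basic demographics, train a classifier on a subset of participants, and read its output as a probability of transitioning to dementia. Everything below (TF-IDF text features, logistic regression, the toy transcripts) is an illustrative assumption, not the published method.

```python
# Illustrative sketch only: NOT the BU team's actual pipeline.
# Idea: transcript content + demographics -> probability of decline.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-ins for transcribed interviews and six-year outcomes.
transcripts = [
    "I went to the store and bought bread and milk",
    "the the store I went place thing forgot",
    "we visited our grandchildren last weekend",
    "forgot name place the um the thing yesterday",
] * 5
labels = [0, 1, 0, 1] * 5          # 0 = remained stable, 1 = declined
ages = np.array([70, 82, 65, 88] * 5)

# Features from the words spoken and how they're structured
# (unigram/bigram TF-IDF as a crude proxy for linguistic content).
text_features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(transcripts)

# Append basic demographics (age only here, scaled) to the text features.
X = hstack([text_features, csr_matrix(ages.reshape(-1, 1) / 100.0)])

# Train on a subset of participants, test on the held-out rest,
# mirroring the split described in the article.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels
)
clf = LogisticRegression().fit(X_train, y_train)

# The "final score": probability that a participant transitions to
# dementia rather than remaining stable.
risk_scores = clf.predict_proba(X_test)[:, 1]
print(risk_scores)
```

In this framing, the classifier's probability output plays the role of the score Paschalidis describes: a number between 0 and 1 estimating whether someone will remain stable or decline.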
AI: A Tool for Broadening Healthcare Access
Rather than using acoustic features of speech, like enunciation or speed, the model is just pulling from the content of the interview—the words spoken, how they’re structured. And Paschalidis says the information they put into the machine learning program is rough around the edges: the recordings, for example, are messy—low-quality and filled with background noise. “It’s a very casual recording,” he says. “And still, with this dirty data, the model is able to make something out of it.”
That’s important, because the project was partly about testing AI’s ability to make the process of dementia diagnosis more efficient and automated, with little human involvement. In the future, the researchers say, models like theirs could be used to bring care to patients who aren’t near medical centers or to provide routine monitoring through interaction with an at-home app, drastically increasing the number of people who get screened. According to Alzheimer’s Disease International, the majority of people with dementia worldwide never receive a formal diagnosis, leaving them shut off from treatment and care.