A.I. tool promises faster, more accurate Alzheimer's diagnosis

August 27, 2020

By detecting subtle differences in the way that Alzheimer's sufferers use language, researchers at Stevens Institute of Technology have developed an A.I. algorithm that promises to accurately diagnose Alzheimer's without the need for expensive scans or in-person testing. Not only can the software diagnose Alzheimer's, at negligible cost, with more than 95 percent accuracy, it is also capable of explaining its conclusions, allowing physicians to double-check the accuracy of its diagnoses.

"This is a real breakthrough," said the tool's creator, K.P. Subbalakshmi, founding director of Stevens Institute of Artificial Intelligence and professor of electrical and computer engineering at the Charles V. Schaeffer School of Engineering. "We're opening an exciting new field of research, and making it far easier to explain to patients why the A.I. came to the conclusion that it did while diagnosing them. This addresses the important question of trustability of A.I. systems in the medical field."

It has long been known that Alzheimer's can affect a person's use of language. People with Alzheimer's typically replace nouns with pronouns, such as by saying 'He sat on it' rather than 'The boy sat on the chair.' Patients might also use awkward circumlocutions, saying "My stomach feels bad because I haven't eaten" instead of simply "I'm hungry." By designing an explainable A.I. engine that uses attention mechanisms and convolutional neural networks -- a form of A.I. that learns over time -- Subbalakshmi and her students were able to develop software that could not only accurately identify well-known telltale signs of Alzheimer's, but also detect subtle linguistic patterns previously overlooked.
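Attention mechanisms are what make the engine's conclusions explainable: they assign each word a weight indicating how much it influenced the decision, so a physician can see which phrases the model found suspicious. The following pure-Python sketch illustrates the core idea only -- a softmax over per-token relevance scores -- and is not the team's actual architecture; the token scores here are invented for illustration.

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(token_scores):
    """Turn per-token relevance scores into attention weights, so one
    can read off which words most influenced the model's decision."""
    return softmax(token_scores)

# Toy example: the vague pronoun 'it' gets a high relevance score
# (hypothetical values), so attention concentrates on it.
tokens = ["He", "sat", "on", "it"]
weights = attend([0.5, 0.1, 0.1, 2.0])
top_token = tokens[weights.index(max(weights))]
```

In a trained model these relevance scores are learned from data; the softmax step shown here is what converts them into an interpretable distribution over words.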

Subbalakshmi and her team trained her algorithm using texts produced by both healthy subjects and known Alzheimer's sufferers as they described a drawing of children stealing cookies from a jar. Using tools developed by Google, Subbalakshmi and her team converted each individual sentence into a unique numerical sequence, or vector, representing a specific point in a 512-dimensional space.
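The key interface here is a function mapping any sentence to a fixed-length 512-dimensional vector. The sketch below uses a stdlib hash purely as a stand-in encoder to show that interface; unlike a real pretrained encoder (such as the Google tools the article mentions), a hash captures no semantics, and the whole function is a placeholder.

```python
import hashlib

DIM = 512  # the dimensionality described in the article

def embed(sentence):
    """Stand-in sentence encoder: maps a sentence to a fixed-length
    512-dimensional vector of values in [0, 1). A real system would
    use a pretrained neural encoder; this hash-based placeholder only
    demonstrates the sentence-to-vector interface."""
    digest = hashlib.sha512(sentence.encode("utf-8")).digest()  # 64 bytes
    # Repeat the digest bytes to fill all 512 dimensions.
    return [digest[i % len(digest)] / 256.0 for i in range(DIM)]

vec = embed("The boy sat on the chair.")
```

Once every sentence lives in the same 512-dimensional space, standard geometric tools (distances, dot products) can compare sentences of any length or complexity on equal footing.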

Such an approach allows even complex sentences to be assigned a concrete numerical value, making it easier to analyze structural and thematic relationships between sentences. By using those vectors along with handcrafted features -- those that subject-matter experts have identified -- the A.I. system gradually learned to spot similarities and differences between sentences spoken by healthy and unhealthy subjects, and thus to determine with remarkable accuracy how likely any given text was to have been produced by an Alzheimer's sufferer.
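One concrete example of a handcrafted feature, drawn from the article's own observation about pronoun substitution, is the share of a text's words that are pronouns. The sketch below computes that feature and shows how it could be combined with an embedding vector in a simple linear score; the weights, the scoring form, and the pronoun list are all illustrative assumptions, not the team's published method.

```python
def pronoun_ratio(text):
    """Handcrafted feature: fraction of words that are pronouns.
    Elevated pronoun use (e.g. 'He sat on it') is a known
    linguistic marker of Alzheimer's."""
    pronouns = {"he", "she", "it", "they", "him", "her", "them"}
    words = text.lower().replace(".", "").split()
    return sum(w in pronouns for w in words) / max(len(words), 1)

def risk_score(embedding, text, weights, bias, feature_weight):
    """Illustrative linear scoring: combine the sentence embedding
    with a handcrafted feature. A real system learns these weights
    from labeled data rather than fixing them by hand."""
    score = sum(w * x for w, x in zip(weights, embedding)) + bias
    score += feature_weight * pronoun_ratio(text)
    return score

ratio = pronoun_ratio("He sat on it")
```

Because each handcrafted feature has a plain-language meaning, its contribution to the final score is directly interpretable, which is part of what makes the overall system explainable to physicians.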

"This is absolutely state-of-the-art," said Subbalakshmi, who presented her work, in collaboration with her doctoral students Mingxuan Chen and Ning Wang, on Aug. 24 at the 19th International Workshop on Data Mining in Bioinformatics (BioKDD). "Our A.I. software is the most accurate diagnostic tool currently available while also being explainable."

The system can also easily incorporate new criteria that may be identified by other research teams in the future, so it will only get more accurate over time. "We designed our system to be both modular and transparent," Subbalakshmi explained. "If other researchers identify new markers of Alzheimer's, we can simply plug those into our architecture to generate even better results."
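A common way to achieve the kind of modularity Subbalakshmi describes is a plug-in registry: each linguistic marker is a self-contained function, and new markers identified by future research slot in without touching the rest of the pipeline. The sketch below is a generic illustration of that design pattern, with two simple example markers invented for demonstration; it does not reproduce the team's architecture.

```python
# Registry of marker extractors. New markers can be plugged in
# without modifying the rest of the pipeline.
MARKERS = {}

def marker(name):
    """Decorator that registers a feature-extraction function
    under a given name."""
    def register(fn):
        MARKERS[name] = fn
        return fn
    return register

@marker("word_count")
def word_count(text):
    """Example marker: total number of words in the sample."""
    return len(text.split())

@marker("avg_word_length")
def avg_word_length(text):
    """Example marker: mean word length, a crude proxy for
    vocabulary complexity."""
    words = text.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def extract_all(text):
    """Run every registered marker over a text sample."""
    return {name: fn(text) for name, fn in MARKERS.items()}

features = extract_all("My stomach feels bad")
```

Adding a newly discovered marker is then a matter of writing one decorated function; every downstream consumer of `extract_all` picks it up automatically.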

In theory, A.I. systems could one day diagnose Alzheimer's based on any text, from a personal email to a social-media post. First, though, an algorithm would need to be trained using many different kinds of texts produced by known Alzheimer's sufferers, rather than just picture descriptions, and that kind of data isn't yet available. "The algorithm itself is incredibly powerful," Subbalakshmi said. "We're only constrained by the data available to us."

In coming months, Subbalakshmi hopes to gather new data that will allow her software to be used to diagnose patients based on speech in languages other than English. Her team is also exploring the ways that other neurological conditions -- such as aphasia, stroke, traumatic brain injuries, and depression -- can affect language use. "This method is definitely generalizable to other diseases," said Subbalakshmi. "As we acquire more and better data, we'll be able to create streamlined, accurate diagnostic tools for many other illnesses too."
-end-


Stevens Institute of Technology
