BA or DA? Decoding syllables to show the limits of artificial intelligence

January 31, 2018

For about the last ten years, researchers have been using artificial intelligence techniques known as machine learning to decode human brain activity. Applied to neuroimaging data, these algorithms can reconstruct what we see, hear, and even what we think. For example, they show that words with similar meanings are grouped together in distinct zones across the brain. However, by recording brain activity during a simple task--deciding whether one hears BA or DA--neuroscientists from the University of Geneva (UNIGE), Switzerland, and the Ecole normale supérieure (ENS) in Paris now show that the brain does not necessarily use the regions identified by machine learning to perform a task; above all, those regions reflect the mental associations related to the task. Machine learning is thus effective for decoding mental activity, but not necessarily for understanding the specific information-processing mechanisms of the brain. The results are published in PNAS.

Modern data-analysis techniques in neuroscience have recently revealed how the brain spatially organises its representation of word sounds, which researchers were able to map precisely by region of activity. UNIGE neuroscientists therefore asked how these spatial maps are used by the brain itself when it performs specific tasks. "We used all the available human neuroimaging techniques to try to answer this question", says Anne-Lise Giraud, a professor in the Department of Basic Neurosciences of the UNIGE Faculty of Medicine.

A focal region for selecting information

UNIGE neuroscientists had about fifty people listen to a continuum of syllables ranging from BA to DA. The syllables in the middle of the continuum were highly ambiguous, making it difficult to decide between the two options. The researchers then used functional MRI and magnetoencephalography to see how the brain behaves when the acoustic stimulus is very clear or, on the contrary, when it is ambiguous and requires an active mental representation of the phoneme and its interpretation by the brain. "We observed that, regardless of how difficult it is to classify the syllable heard as BA or DA, the decision always engages a small region of the posterior superior temporal lobe", notes Anne-Lise Giraud.

The neuroscientists then double-checked their results in a patient with a lesion in precisely this region of the posterior superior temporal lobe used to distinguish between BA and DA. "And indeed, although the patient did not appear to have symptoms, he was no longer able to distinguish between the BA and DA phonemes ... this confirms that this small region is important in processing this type of phoneme information", adds Sophie Bouton, a researcher in Anne-Lise Giraud's team.

The "false positives" of machine learning decoding

But is information about the identity of the syllable present only locally, as the Geneva experiment showed, or is it present more broadly across the brain, as the maps produced by machine learning suggest? To answer this question, the neuroscientists reproduced the BA/DA task with people who had electrodes implanted directly in their brains for medical reasons, a technique that records very focal neural activity. A univariate analysis made it possible to see which region of the brain was engaged during the task, electrode by electrode, contact by contact. Only the contacts in the posterior superior temporal lobe were active, confirming the results of the Geneva study.
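As a point of reference, a univariate analysis of this kind treats each recording contact in isolation. The sketch below (Python, on simulated data; the variable names, data shapes, and statistical test are illustrative assumptions, not the authors' actual pipeline) tests every contact separately for a BA-versus-DA difference.

    # Illustrative univariate analysis (toy data): each electrode contact is
    # tested on its own for a BA-vs-DA difference, with no pooling across contacts.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_trials, n_contacts = 200, 64
    X = rng.normal(size=(n_trials, n_contacts))   # simulated response per trial and contact
    y = rng.integers(0, 2, size=n_trials)         # toy labels: 0 = BA, 1 = DA

    for contact in range(n_contacts):
        t, p = stats.ttest_ind(X[y == 0, contact], X[y == 1, contact])
        if p < 0.05 / n_contacts:                 # Bonferroni-corrected threshold
            print(f"contact {contact}: t = {t:.2f}, p = {p:.4f}")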

However, when a machine-learning algorithm was applied to all of the data at once, making a multivariate decoding possible, the syllable's identity could be decoded across the entire temporal lobe, and even beyond it.
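By contrast, a multivariate decoder pools all contacts at once, so any region carrying syllable-related signal can raise decoding accuracy, whether or not that region is needed to perform the task. A minimal sketch, again on simulated data and assuming a standard scikit-learn classifier rather than the authors' actual decoder:

    # Illustrative multivariate decoding (toy data): one classifier trained on
    # all contacts simultaneously, as in generic machine-learning decoding.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_contacts = 200, 64
    X = rng.normal(size=(n_trials, n_contacts))   # simulated response per trial and contact
    y = rng.integers(0, 2, size=n_trials)         # toy labels: 0 = BA, 1 = DA

    # Cross-validated accuracy above chance (0.5) would indicate that the
    # contacts, taken together, carry information about the BA/DA decision.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"cross-validated decoding accuracy: {scores.mean():.2f}")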

"Learning algorithms are intelligent but ignorant", specifies Anne-Lise Giraud. "They are very sensitive and use all of the information in the signals. However, they do not allow us to know whether this information was used to perform the task, or if it reflects the consequences of this task--in other words, spreading information in our brain", continues Valérian Chambon, researcher at the Departement d'études cognitives at the ENS. The mapped regions outside of the posterior superior temporal lobe are thus false positives, in a way. These regions retain information on the decision that the subject makes (BA or DA), but aren't solicited to perform this task.

This research offers a better understanding of how our brain represents syllables and, by showing the limits of artificial intelligence in certain research contexts, fosters welcome reflection on how to interpret data produced by machine-learning algorithms.
-end-


Université de Genève
