
From genetics to AI: Integrated approaches to decoding human language in the brain

03.10.26 | Cognitive Neuroscience Society


VANCOUVER - March 8, 2026 - Learning French, reading the latest Andy Weir novel, hanging out with friends for St. Patrick’s Day — language is central to all these everyday activities. Seemingly effortless from childhood, language, it turns out, is quite complex, not constrained to one set of genes or one region in the brain. Cognitive neuroscientists are now using a diverse arsenal of tools, including novel genetic analyses and AI, to gain insights into both healthy and disordered communication across individuals.

“We still tend to study language one level at a time — genes, brain pathways, neural activity, behavior, computation — without fully connecting those levels into a coherent mechanistic account,” says Tamara Swaab, who is chairing a symposium on language at the annual meeting of the Cognitive Neuroscience Society (CNS) in Vancouver, B.C. “Now, however, we can study those connections at multiple levels and in far more detail.”

This relatively novel, integrated approach is already yielding results, from AI-based models that can test, and potentially predict, language development in children, to genetics research that links rhythm disorders and dyslexia. These studies mark a dramatic shift from traditional research about where language happens in the brain to how it occurs and why it differs so vastly across people, says Swaab of the University of California, Davis, and University of Birmingham in the UK, who studies how various factors affect how language is processed and understood.

Driving this work is a desire among researchers to understand how humans’ unique ability to communicate shapes what we learn, how we remember, and how our species has evolved. For cognitive neuroscientist Jean-Rémi King at Meta, investigating how human language has evolved means tapping a new form of learning: AI deep learning models. The question, he says, is how do humans acquire language so efficiently — with orders of magnitude less exposure to words than today's large language models (LLMs) — while other species cannot reach similar competence?

“With the rise of small and then large language models, using artificial neural networks became, de facto, the most efficient way to model and decode language representations in the brain,” King says. “These AI models learn, and thus follow a specific learning trajectory, which provides a new source of hypotheses and ideas for how children effectively acquire language.”

In a new study, King and colleagues found that LLMs can effectively account for the neural representations of language in both adults and children as young as 2 years old. Working with the Rothschild Foundation Hospital’s pediatric epileptology unit, the researchers investigated neural activity recorded from more than 7,400 stereotactic electrodes temporarily implanted, prior to surgery, in the brains of 46 children, teenagers, and adults with intractable epilepsy.

“We discovered that their brain responses to an audiobook can be accurately modeled using AI,” says King, who will present the work at the CNS meeting in Vancouver. The team found that high-level language features, such as grammar, continue to mature between ages 2 and 10, in contrast to low-level features, such as fast phonetic building blocks. “While the underlying mechanisms remain to be uncovered, this work offers the first compelling evidence that modern AI systems can provide powerful new insights into how language develops in the human brain,” King says.

For cognitive neuroscientist Stephanie Forkel of Radboud University Nijmegen in The Netherlands, better understanding how language develops uniquely in different individuals means taking a different approach: studying the brain’s wiring that connects language regions. Classical neuroscience points to “Broca’s area” or “Wernicke’s area,” as if language lives in two spots. But after working with stroke patients who had different types of brain damage and varying language challenges, Forkel quickly realized that language is “not a single ‘thing’ in the brain — it is a system.” And understanding that system is key to understanding neurovariability.

In a new study using ultra–high-field 7 Tesla diffusion MRI, she and colleagues reconstructed seven major white-matter pathways involved in language in 172 individuals. The researchers then asked whether the participants fell into clear “left-brained” versus “right-brained” types for language. The answer was “no,” says Forkel, who will present this new work at CNS. “Instead of distinct categories, we found that language is not binary in the brain; it forms a continuum. This challenges long-standing categorical models of hemispheric dominance and reframes how we think about individual differences.”

Forkel’s team now has funding for a new five-year project to understand the emergence of language from its biological foundations. The hope is to not only understand how language is created but also how it can be protected from, or restored after, injury or disease.

This work dovetails with newly emerging efforts to understand the genetic underpinnings of language in the brain, which have gotten a massive boost in the last several years from large datasets, some private and many publicly available. Whether from genetic services such as 23andMe or government-funded organizations like the U.S. National Institutes of Health, these databases, combined with innovative new genetic analyses, are giving researchers brand new insights into the “polygenic” nature of language, says Reyna Gordon of Vanderbilt University Medical Center.

Indeed, as Gordon will emphasize in her talk at CNS in Vancouver, language is influenced by many genes. While researchers cannot pinpoint in single individuals how much of their language skills derive from genetics versus the environment, large populations of people can reveal significant patterns. “Thanks to publicly funded data resources, we've been able to start studying language genetics at scale and to start to link that to its neural basis in some really innovative ways.”

She and her team are triangulating data about the functions of specific genes with questionnaires and other types of data specific to language and music development to show how genetic variation contributes to individual differences in language skills. In one recent study, for example, researchers examined 1 million participants from 23andMe, both with and without dyslexia, in addition to another dataset that included language testing. They found multiple genes associated with dyslexia, which might contribute to earlier diagnosis and treatment for the language disorder.

Crucially, these approaches allow researchers to combine insights from multiple large datasets, rather than relying on a single group of participants, something not previously possible in traditional neuroscience research. “Using new methods, we can actually do this data integration across data streams, which allows us to formulate basic science hypotheses, as well as potential clinical applications,” Gordon says.

In another study, Gordon and colleagues showed that there is a shared biological underpinning between language and music that maps all the way back to the genome. They identified 16 separate regions of the genome that are common to rhythm impairments and dyslexia. “We've also looked at the overlap epidemiologically in large samples, so rhythm impairments may actually be a risk factor for language problems and reading disorders,” she says.

Together, the studies being presented at the CNS meeting on a multimethod approach to understanding language in the brain show the adaptable nature of the brain. “The human brain is not built from rigid blueprints, but rather from adaptable architectures,” Forkel says.

Indeed, says session chair Swaab: “Language comprehension is a form of fast, adaptive cognition. We are finally beginning to understand it more fully by linking the story from genes, to brain pathways and networks, to neural decoding and computational models that help explain how the brain comprehends and produces language.”

The symposium “How the Brain Creates Language: Insights from Genes, Neural Pathways, Neuroprosthetics, and Computational Models” is taking place at 10 a.m. PDT on Tuesday, March 10, as part of the CNS 2026 annual meeting, March 7-10, in Vancouver, British Columbia, Canada.

CNS is committed to the development of mind and brain research aimed at investigating the psychological, computational, and neuroscientific bases of cognition. Since its founding in 1994, the Society has been dedicated to bringing its 2,000 members worldwide the latest research to facilitate public, professional, and scientific discourse.

Contact Information

Lisa M.P. Munoz
Cognitive Neuroscience Society
cns.publicaffairs@gmail.com
