
Does the brain work like an LLM in predicting words? New study spells out a complicated answer

04.21.26 | New York University


Predictive text has become, for better or worse, a regular feature of writing emails and text messages: it saves us time by seamlessly filling in a word before we can type it, or frustrates us by repeatedly doing the same with an unrelated term.

Like AI systems more broadly, the next-word prediction performed by large language models (LLMs) is often said to be analogous to how the brain works, in this case to our ability to forecast the words that come next when listening to others speak.

But while this human capacity for next-word prediction is well known, it is less clear how the brain functions during this process and what considerations it makes in doing so. Put another way, does the brain predict words in the same way that AI does?

A newly published study by a team of scientists shows that, in fact, we predict words through a more complex process. The research, which appears in the journal Nature Neuroscience, shows that we take into account larger linguistic structure, focusing on a word’s surroundings within a group of words, known as a constituent, rather than only on what word comes next. This is similar to how we look at the surrounding pieces of a puzzle when deciding where to place the next piece.

“While LLMs are trained and optimized to predict the next word, the human brain makes predictions by grammatically grouping words into phrases,” explains co-author David Poeppel, a professor of psychology and neural science at New York University. “With LLMs, predictions are by and large created equally: each word exploits its predictive context the same way. By contrast, the human brain makes predictions by first taking into account chunks of words—what we call grammatical constituents—and then determining which words are predicted best within that structure.”

How the study was done

The study, whose authors included Jiajie Zou of the Ernst Struengmann Institute for Neuroscience, a postdoctoral researcher with Poeppel at the time of the study, and Nai Ding, a professor at Zhejiang University and a former postdoctoral fellow in Poeppel’s lab, centered on a series of experiments with Mandarin Chinese speakers. It used magnetoencephalography (MEG) to measure participants’ brain activity while they were exposed to Mandarin sentences. In addition, the study used behavioral word-prediction tasks, specifically Cloze tests, which assess linguistic prediction by removing specific words from a passage and asking participants to fill in the blanks. The study also analyzed brain data from patients exposed to English to confirm that the findings apply to other languages.

The researchers used LLMs to quantify the predictability of each word using two measures: “entropy” and “surprisal.” High entropy indicates that the context does not strongly constrain which words may follow, making the next word less predictable. For example, the word after “I saw a” has higher entropy than the word after “I sat on a,” because there are more objects you can see than objects you can sit on. High surprisal indicates that the next word is not well predicted by its context. For example, “cat” has higher surprisal after “I sat on a” than after “I saw a.”
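The two measures can be computed directly from a model’s next-word probability distribution. Below is a minimal Python sketch using toy, hand-written distributions to illustrate the idea; the probabilities are hypothetical assumptions, not the study’s actual LLM outputs.

```python
import math

def entropy(probs):
    # Shannon entropy (in bits) of a next-word distribution.
    # High entropy: the context leaves many plausible continuations.
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def surprisal(probs, word):
    # Surprisal (in bits) of an observed word: -log2 P(word | context).
    # High surprisal: the word was poorly predicted by its context.
    p = probs.get(word, 0.0)
    return float("inf") if p == 0 else -math.log2(p)

# Toy distributions (hypothetical values for illustration):
# "I saw a ..." admits many objects, so its distribution is flatter.
saw_a = {"cat": 0.2, "dog": 0.2, "bird": 0.2, "car": 0.2, "tree": 0.2}
# "I sat on a ..." admits few objects, so its distribution is peaked.
sat_on_a = {"chair": 0.7, "bench": 0.2, "cat": 0.1}

print(entropy(saw_a))              # ≈ 2.32 bits (higher entropy)
print(entropy(sat_on_a))           # ≈ 1.16 bits (lower entropy)
print(surprisal(sat_on_a, "cat"))  # ≈ 3.32 bits (surprising)
print(surprisal(saw_a, "cat"))     # ≈ 2.32 bits (less surprising)
```

Note that both measures come from the same distribution: entropy summarizes the uncertainty before the word arrives, while surprisal scores the specific word that actually did.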

The study’s authors then examined how the brain responded to each word, taking into account its predictability. The key comparison, the researchers note, was to correlate per-word brain responses with LLM predictions for the same sentences: if the brain were simply a next-word-prediction device, like an LLM, these correlations should be uniformly high; variation across words, by contrast, would suggest that a different process is at work.
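The comparison can be sketched as a per-word correlation between brain responses and model predictions. The sketch below uses a plain Pearson correlation and hypothetical numbers as stand-ins for the authors’ actual analysis and data.

```python
import math

def pearson_r(x, y):
    # Pearson correlation between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-word values for one sentence (illustration only):
llm_surprisal = [1.2, 3.4, 0.8, 2.9, 4.1]  # model's surprisal per word
meg_response  = [0.9, 3.0, 1.1, 2.5, 4.4]  # measured brain response per word

r = pearson_r(llm_surprisal, meg_response)
# If the brain were a pure next-word predictor, r would be uniformly
# high at every structural position; an r that depends on a word's
# position within a constituent instead points to structure-based
# prediction.
print(round(r, 3))
```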

What the research found

The results showed that the brain reacted differently to words depending on their position within the sentence’s linguistic structure. This indicates that participants were taking grammatical constituents into account when anticipating upcoming words.

By contrast, LLMs neither require nor reflect such sensitivity to linguistic constituent structure; they simply predict the next word.

“Our brains can, like AI systems, exploit next-word prediction. However, brains are highly sensitive to linguistic constituent structure,” concludes Poeppel. “This research shows that next-word prediction is balanced and modulated by our consideration of grammatically organized ‘chunks of words’—quite different from how LLMs work.”

# # #

Journal: Nature Neuroscience

DOI: 10.1038/s41593-026-02272-6

Method of Research: Experimental study

Subject of Research: People

Article Title: Constituent-constrained word prediction during language comprehension

Article Publication Date: 21-Apr-2026

Contact Information

James Devitt
New York University
james.devitt@nyu.edu
