
How the brain handles large text units

02.27.25 | Technion-Israel Institute of Technology

Unlike artificial language models, which process long texts as a whole, the human brain creates a "summary" while reading, helping it understand what comes next.

In recent years, large language models (LLMs) like ChatGPT and Bard have revolutionized AI-driven text processing, enabling machines to generate text, translate languages, and analyze sentiment. These models are inspired by the human brain, but key differences remain.

A new Technion-Israel Institute of Technology study, published in Nature Communications, explores these differences by examining how the brain processes spoken texts. The research was led by Prof. Roi Reichart and Dr. Refael Tikochinski of the Faculty of Data and Decision Sciences, and was conducted as part of Dr. Tikochinski's Ph.D., co-supervised by Prof. Reichart at the Technion and Prof. Uri Hasson at Princeton University.

The study analyzed fMRI brain scans of 219 participants as they listened to stories, comparing the recorded brain activity with predictions made by existing LLMs. The researchers found that AI models accurately predicted brain activity for short texts (a few dozen words), but failed to do so for longer texts.

The reason? While both the human brain and LLMs process short texts in parallel (analyzing all words at once), the brain switches strategies for longer texts. Since the brain cannot process all words simultaneously, it stores a contextual summary – a kind of "knowledge reservoir" – which it uses to interpret upcoming words.

In contrast, AI models process all previously heard text at once, so they do not require this summarization mechanism. This fundamental difference explains why AI struggles to predict human brain activity when listening to long texts.
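The contrast between the two strategies can be sketched in a few lines of code. This is a purely illustrative toy, not the study's method: the "summary" is crudely modeled as a few retained gist tokens, and all sizes are made-up parameters.

```python
# Toy contrast between an LLM-style reader, which conditions on the entire
# preceding text at every step, and a brain-style reader, which keeps only
# a short recent window plus a fixed-size running summary.
# All names and sizes here are illustrative assumptions, not from the study.

def llm_context(tokens, step):
    """An LLM attends to every token heard so far, so its context
    grows with the length of the text."""
    return tokens[:step]

def incremental_context(tokens, step, window=5, summary_size=3):
    """A brain-style reader keeps a bounded summary of older text
    (here crudely modeled as a few retained 'gist' tokens) plus the
    most recent window of words, so its context stays constant in size."""
    recent = tokens[max(0, step - window):step]
    older = tokens[:max(0, step - window)]
    summary = older[:summary_size]  # stand-in for a compressed gist
    return summary + recent

story = [f"w{i}" for i in range(100)]
print(len(llm_context(story, 80)))          # grows with the text: 80
print(len(incremental_context(story, 80)))  # bounded: 3 summary + 5 recent = 8
```

The point of the sketch is the memory footprint: the LLM's context scales with the whole story, while the incremental reader's footprint stays fixed no matter how long the story runs.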

To test their theory, the researchers developed an improved AI model that mimics the brain’s summarization process. Instead of processing the entire text at once, the model created dynamic summaries and used them to interpret future text. This significantly improved AI predictions of brain activity, supporting the idea that the human brain is constantly summarizing past information to make sense of new input.
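The reading loop of such a summary-conditioned model can be sketched schematically. The functions `summarize` and `interpret` below are hypothetical placeholders standing in for the study's actual components, which operate on neural-network representations rather than word lists.

```python
# Schematic of a summary-conditioned reading loop, in the spirit of the
# improved model described above: text is consumed chunk by chunk, and each
# chunk is interpreted in light of a running summary of everything before it.
# summarize() and interpret() are hypothetical stand-ins, not the real model.

def summarize(summary, chunk, max_len=10):
    """Fold a new chunk into the running summary while keeping it bounded.
    Here: retain only the most recent max_len words as a crude gist."""
    return (summary + chunk)[-max_len:]

def interpret(chunk, summary):
    """Interpret a chunk given the accumulated context summary.
    A real model would produce embeddings; here we just pair them."""
    return {"context": list(summary), "chunk": list(chunk)}

def read_incrementally(text, chunk_size=4):
    words = text.split()
    summary, outputs = [], []
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        outputs.append(interpret(chunk, summary))  # condition on the summary, not the full history
        summary = summarize(summary, chunk)        # update the bounded summary
    return outputs
```

Each chunk is thus never processed against the entire preceding text, only against a compressed record of it, mirroring the summarization strategy the study attributes to the brain.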

This ability allows us to process vast amounts of information over time, whether in a lecture, a book, or a podcast. Further analysis mapped brain regions involved in both short-term and long-term text processing, highlighting the brain areas responsible for context accumulation, which enables us to understand ongoing narratives.

Nature Communications

10.1038/s41467-025-56162-9

Incremental accumulation of linguistic context in artificial and biological neural networks

18-Jan-2025


Contact Information

Doron Shaham
Technion-Israel Institute of Technology
sdoron@technion.ac.il


How to Cite This Article

APA:
Technion-Israel Institute of Technology. (2025, February 27). How the brain handles large text units. Brightsurf News. https://www.brightsurf.com/news/1ZZOKYN1/how-the-brain-handles-large-text-units.html
MLA:
"How the brain handles large text units." Brightsurf News, Feb. 27 2025, https://www.brightsurf.com/news/1ZZOKYN1/how-the-brain-handles-large-text-units.html.