
AI learns better when it talks to itself

01.27.26 | Okinawa Institute of Science and Technology (OIST) Graduate University

Talking to oneself is a trait that feels inherently human. Our inner monologues help us organize our thoughts, make decisions, and understand our emotions. But it's not just humans who can reap the benefits of such self-talk. In a study published in Neural Computation, scientists from the Okinawa Institute of Science and Technology (OIST) have demonstrated the potential of inner speech to improve AI learning, showing that AI models generalize across different tasks more easily when supported by both inner speech and short-term memory.

“This study highlights the importance of self-interactions in how we learn. By structuring training data in a way that teaches our system to talk to itself, we show that learning is shaped not only by the architecture of our AI systems, but by the interaction dynamics embedded within our training procedures,” says first author Dr. Jeffrey Queißer, Staff Scientist within OIST’s Cognitive Neurorobotics Research Unit.

By combining self-directed ‘mumbling’ with a unique working memory architecture, the researchers improved how their AI models learned, adapted to new situations, and multitasked.

The team has long been interested in content-agnostic information processing: the ability to perform tasks beyond the exact situations we have previously encountered, by learning general methods and operations.

“Rapid task switching and solving unfamiliar problems is something we humans do easily every day. But for AI, it’s much more challenging,” notes Dr. Queißer. “That’s why we take an interdisciplinary approach, blending developmental neuroscience and psychology with machine learning and robotics amongst other fields, to find new ways to think about learning and inform the future of AI.”

The researchers initially focused on the AI models’ memory architecture, examining the importance of working memory for task generalization. Working memory is a system’s short-term ability to retain and use information, whether for remembering instructions or doing quick mental math. By simulating tasks of varying difficulty, they compared different memory structures and demonstrated that systems with multiple working memory slots (temporary containers for pieces of information) generalized better on the trickier tasks of reversing and regenerating patterns.
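The release does not detail the unit's actual architecture, but the slot idea itself is simple. Here is a minimal Python sketch (all class and function names are hypothetical illustrations, not the authors' code) of how discrete memory slots support the reversal and regeneration tasks mentioned above:

```python
# Hypothetical sketch of slot-based working memory: each slot is a
# temporary container for one piece of information. Reversal and
# regeneration tasks reduce to writing items in order, then reading
# the slots back in the required order.

class SlotMemory:
    """A fixed number of working-memory slots, each holding one item."""

    def __init__(self, n_slots: int):
        self.slots = [None] * n_slots
        self.write_ptr = 0

    def write(self, item):
        # Store the item in the next free slot.
        self.slots[self.write_ptr] = item
        self.write_ptr += 1

    def read_in_order(self):
        # Regeneration task: read occupied slots in original order.
        return list(self.slots[: self.write_ptr])

    def read_reversed(self):
        # Reversal task: read occupied slots back-to-front.
        return list(reversed(self.slots[: self.write_ptr]))


if __name__ == "__main__":
    memory = SlotMemory(n_slots=4)
    for token in ["A", "B", "C", "D"]:
        memory.write(token)
    print(memory.read_in_order())   # ['A', 'B', 'C', 'D']
    print(memory.read_reversed())   # ['D', 'C', 'B', 'A']
```

In the trained models, the slots would presumably hold learned vector representations, read and written by a learned controller rather than by hand-coded rules; the sketch only shows why separate slots make order manipulations easy.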

When the researchers added self-mumbling targets, telling the system to talk to itself a certain number of times, performance improved further, particularly on multitasking and on tasks with many steps.
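The exact target format is likewise not described in this release. As a hypothetical sketch (the tag and function names are invented for illustration), training data could be structured so that the model rehearses an instruction to itself a set number of times before producing its answer:

```python
# Hypothetical sketch of "self-mumbling" training targets: the target
# sequence interleaves a fixed number of inner-speech repetitions of
# the instruction before the final answer tokens.

def make_mumble_target(instruction: str, answer: str, n_mumbles: int = 3):
    """Build a training target that rehearses the instruction
    n_mumbles times as inner speech before the answer."""
    inner_speech = [f"<mumble>{instruction}</mumble>"] * n_mumbles
    return inner_speech + [answer]


if __name__ == "__main__":
    target = make_mumble_target("reverse A B C", "C B A", n_mumbles=2)
    print(target)
    # ['<mumble>reverse A B C</mumble>',
    #  '<mumble>reverse A B C</mumble>', 'C B A']
```

The point of such targets is that inner rehearsal becomes part of the supervised signal itself, so that, as Dr. Queißer notes above, the interaction dynamics embedded in the training procedure, not just the architecture, shape what the model learns.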

“Our combined system is particularly exciting because it can work with sparse data instead of the extensive data sets usually required to train such models for generalization. It provides a complementary, lightweight alternative,” emphasizes Dr. Queißer.

Looking forward, the researchers plan to make things ‘messier’. Dr. Queißer says, “In the real world, we’re making decisions and solving problems in complex, noisy, dynamic environments. To better mirror human developmental learning, we need to account for these external factors.”

This ties in with the team’s overarching goal to understand the neural basis of human learning. “By exploring phenomena like inner speech, and understanding the mechanisms of such processes, we gain fundamental new insights into human biology and behavior,” concludes Dr. Queißer. “We can also apply this knowledge, for example in developing household or agricultural robots which can function in our complex, dynamic worlds.”

Article Information

Journal: Neural Computation
DOI: 10.1162/NECO.a.36
Method of Research: Computational simulation/modeling
Article Title: Working Memory and Self-Directed Inner Speech Enhance Multitask Generalization in Active Inference
Article Publication Date: 22-Dec-2025

Contact Information

Catherine Hodges
Okinawa Institute of Science and Technology (OIST) Graduate University
catherine.hodges@oist.jp

How to Cite This Article

APA:
Okinawa Institute of Science and Technology (OIST) Graduate University. (2026, January 27). AI learns better when it talks to itself. Brightsurf News. https://www.brightsurf.com/news/LN2POGK1/ai-learns-better-when-it-talks-to-itself.html
MLA:
"AI learns better when it talks to itself." Brightsurf News, 27 Jan. 2026, https://www.brightsurf.com/news/LN2POGK1/ai-learns-better-when-it-talks-to-itself.html.