Researchers at the HSE Institute for Cognitive Neuroscience have studied how the brain responds to audio deepfakes—realistic fake speech recordings created using AI. The study shows that people tend to trust the current opinion of an authoritative speaker even when new statements contradict the speaker’s previous position. This effect also occurs when the statement conflicts with the listener’s internal attitudes. The research has been published in the journal NeuroImage.
Modern deepfakes are becoming increasingly difficult to distinguish from genuine recordings and are increasingly used to spread false information. In healthcare, disinformation is particularly dangerous because it poses a direct threat to public health.
Researchers from the HSE Institute for Cognitive Neuroscience (ICN) conducted an experiment to examine how people perceive audio deepfakes attributed to celebrities who speak either in favour of or against COVID-19 vaccination.
The study involved 61 participants. Half of them supported vaccination, while the other half opposed it. The participants listened to AI-generated audio recordings of well-known opinion leaders—a doctor who supports vaccination and a popular actress known for her anti-vaccination stance. Their brain activity was recorded with electroencephalography (EEG) while they listened. At a certain point, the speakers uttered statements that contradicted their real public positions: the doctor unexpectedly said that COVID vaccinations were unnecessary, while the actress, on the contrary, emphasised the need for vaccination. In these cases, the EEG recorded the N400 component—a brain response to semantic incongruity that occurs approximately 400 milliseconds after we see or hear an unexpected stimulus. The greater the incongruity, the stronger the signal.
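For readers curious how an N400 response is typically quantified, the sketch below shows a minimal event-related potential (ERP) contrast in MNE-Python. It is purely illustrative and not taken from the study: the file name, condition labels, electrode choice, and time window are assumptions, not details reported by the authors.

```python
# Illustrative sketch (not the study's actual pipeline): a simple N400-style
# ERP contrast with MNE-Python, assuming epoched EEG data with event labels
# "congruent" and "incongruent" already saved to disk.
import mne

# Hypothetical file name; the study's data and preprocessing are not public here.
epochs = mne.read_epochs("deepfake_listening-epo.fif")

# Average trials within each condition to obtain ERPs.
evoked_congruent = epochs["congruent"].average()
evoked_incongruent = epochs["incongruent"].average()

# The N400 is a negative deflection roughly 300-500 ms after stimulus onset,
# typically largest over centro-parietal electrodes.
diff = mne.combine_evoked([evoked_incongruent, evoked_congruent], weights=[1, -1])
n400 = diff.copy().crop(tmin=0.3, tmax=0.5).pick(["Cz", "Pz"])

# Mean amplitude in the window serves as a crude N400 measure:
# more negative values indicate a stronger response to semantic incongruity.
print(n400.data.mean())
```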
Data analysis showed that, regardless of their own attitudes, participants rated the doctor’s statements more highly across all measures: they found them more persuasive and authoritative, considered them more trustworthy, and were more willing to share the information with friends and acquaintances. The EEG recorded the N400 component when the doctor spoke out against COVID-19 vaccination; by contrast, this response was significantly weaker or entirely absent when contradictory statements came from the actress, who is less authoritative in medical matters.
‘Initially, we assumed that participants’ internal attitudes would influence how they perceived the audio recording. That is why we first established whether they supported vaccination or opposed it and divided them into two groups. In addition, we carried out special personality tests to assess their level of analytical thinking, need for cognition, and conformity. However, it turned out that when participants listened to the deepfakes, all these parameters were almost irrelevant. The decisive factor was the speaker’s authoritativeness in the medical field,’ explains Eliana Monahhova, first author of the article and Junior Research Fellow at the Centre for Cognition and Decision Making, HSE Institute for Cognitive Neuroscience.
The findings are important for understanding the mechanisms behind the spread of disinformation. They show that messages attributed to authoritative sources can have a strong impact on audiences even if they contain internal contradictions and diverge from the speaker’s public stance.
‘To the best of our knowledge, this is the first study to examine the neurocognitive mechanisms involved in processing semantic contradictions in deepfakes from the perspective of message and source credibility. Understanding these mechanisms makes it possible to develop more effective strategies to counter digital fraud and information manipulation,’ said Eliana Monahhova.
The study was carried out with the support of Russian Science Foundation grant No. 24-18-00432, ‘Neurophysiological Mechanisms of Perceiving Manipulative Information: Factors and Strategies of Resilience’.
Journal: NeuroImage
DOI: 10.1016/j.neuroimage.2026.121727
Method of Research: Experimental study
Subject of Research: People
Article Title: ERP correlates of semantic inconsistencies in deepfakes
Article Publication Date: 15-Feb-2026