During acoustic interaction, mammals, including humans, monitor both external sounds and their own vocalizations. However, most previous studies of auditory processing have been conducted in restrained animals that were not actively vocalizing. As a result, scientists know relatively little about how the brain processes self-generated sounds during natural vocal behavior. To address this gap, Jinhong Luo and colleagues established an ethological paradigm to record single-unit activity in the inferior colliculus (IC) of unrestrained, freely vocalizing bats. The study focused on the great roundleaf bat, Hipposideros armiger, a highly vocal species that relies heavily on auditory feedback for vocal control. When perched, these bats show elaborate ear (pinna) and head movements synchronized with the production of biosonar vocalizations.
Using chronically implanted 16-channel silicon probes, the researchers recorded the activity of 106 single neurons in the IC of five bats. Aligning neural activity to vocal onset revealed three response types: excitatory (88% of units), suppressive (6%), and non-responsive (6%). The researchers then asked whether neural responses to self-produced vocalizations could be explained simply by frequency tuning. Although most neurons responded strongly to pure tones near the frequency of the bats’ echolocation calls, many responded differently to real vocalizations than to simple tones, indicating that frequency tuning alone could not account for the responses.
To test whether the brain distinguishes self-produced sounds from external sounds, the researchers compared neural responses to the bats’ own calls with responses to playback of the same calls recorded from the same animal. The results were striking: 95% of neurons responded differently to self-generated vocalizations than to their playback. In many neurons, the differences were qualitative rather than merely quantitative. Some neurons showed one response peak during self-vocalization but two peaks during playback, while others switched from excitation to suppression or vice versa.
The researchers suggest several possible mechanisms that may contribute to these differences in neural responses. One possibility is acoustic: the spectro-temporal properties of a vocalization as the bat hears it during self-vocalization may differ from those of its playback. Another is that the bat’s behavioral state differs when it is actively vocalizing compared with when it passively listens. A third possibility involves a neural signal known as an efference copy or corollary discharge, an internal copy of a motor command sent to sensory brain regions when an animal produces a vocalization.
“Our data further indicate that non-acoustic mechanisms contribute to the differential responses of IC neurons to self-produced vocalizations and their playbacks. These mechanisms likely include both the efference copy hypothesis and the behavioral state hypothesis,” says Luo. These findings provide new insight into the neural mechanisms underlying echolocating bats’ ability to distinguish self-produced vocalizations from external acoustic interference, and may inform the development of interference-resistant communication technologies.
Science China Life Sciences