Scientists have successfully decoded inner speech using brain-computer interfaces (BCIs) with an accuracy rate of up to 74%. The BCI system can interpret neural activity related to thought processes, allowing individuals with severe paralysis or motor impairments to communicate more naturally and comfortably.
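What a 74% decoding accuracy means in practice depends on how many candidate words or classes the decoder chooses among, a detail not given in this summary. As a rough sketch, the standard Wolpaw information-transfer-rate formula converts accuracy and class count into bits per selection; the 50-class vocabulary below is an assumption for illustration only, not a figure from the study:

```python
import math

def itr_bits_per_selection(p: float, n: int) -> float:
    """Wolpaw information transfer rate: bits conveyed per selection
    when a decoder picks among n classes with accuracy p."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# 74% accuracy from the summary; 50 classes is a hypothetical vocabulary size.
print(f"{itr_bits_per_selection(0.74, 50):.2f} bits per selection")
```

At 74% over a hypothetical 50-class vocabulary this comes to roughly 3.4 bits per selection; larger vocabularies or higher accuracy raise the rate.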
Researchers found that speakers from different cultures convey emotions differently when complaining. Complainers use specific vocal cues to convey negativity, with Québécois speakers sounding angrier and French speakers sadder. The study highlights the importance of tone of voice in social interactions and may have implicati...
A study by researchers from the University of Zurich found that humans are the most frequent users of child-directed speech among five species of great apes. However, non-human great ape infants may acquire language through surrounding communication and gestures, similar to human children.
Researchers discovered how monkeys produce 'voice breaks' and 'ultra-yodels' using their vocal membranes, which allow for a wider range of calls. These unique vocalizations enable monkeys to communicate in different ways, particularly in complex social lives.
A natural citrus oil, when combined with a lipid formulation, may effectively relieve dry mouth in cancer patients. The new formula has demonstrated improved solubility and bioavailability compared to pure limonene.
Researchers have mapped the long-range synaptic connections involved in vocal learning in zebra finches, uncovering new details about how the brain organises learned vocalisations. The study provides a framework for understanding how the brain integrates sensory and motor information to guide learned vocal behaviour.
Researchers identified specific non-frontal brain areas involved in speech intent, which can be used to distinguish between language production and perception. This study is a crucial step towards developing a brain-computer interface to treat patients with Broca's aphasia.
The project's recordings help improve voice recognition tools by providing diverse speech patterns to train artificial intelligence models. Microsoft has seen significant improvements in recognizing non-standard English speech, with accuracy gains ranging from 18% to 60%, depending on the speaker's disability.
Researchers found that babies' first vocalizations and attempts at forming words coincide with fluctuations in their heart rate. This discovery may indicate that successful speech development depends on predictable ranges of autonomic activity during infancy.
A new study found that consonant lengthening is a universal trait in many languages, helping listeners identify word boundaries. The researchers analyzed data from the DoReCo corpus and found evidence of lengthening in 43 out of 51 languages.
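A quick way to see why 43 of 51 languages is striking: under a naive null in which each language independently shows lengthening or not with equal probability (an assumption for illustration; the published analysis is far more careful and controls for language family), the one-sided binomial tail probability is vanishingly small:

```python
from math import comb

# 43 of 51 languages in the DoReCo sample showed consonant lengthening.
n, k = 51, 43

# One-sided tail probability under a naive 50/50 null:
# P(X >= 43) for X ~ Binomial(51, 0.5).
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k} of {n} by chance) = {p_tail:.2e}")
```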
A breakthrough in brain-computer interfaces allows a patient to communicate using only the power of thought. The study, led by Dr. Ariel Tankus, enables individuals with paralysis to signal 'yes' and 'no' through electrical signals in their brain.
A new AI-driven tool can accurately diagnose Lewy body dementia by analyzing changes in vocal emotional expressions. The study found that individuals with Lewy body dementia exhibited more negative and calmer emotional expressions compared to those with Alzheimer's disease and healthy controls.
A study led by the University of Turku has identified the brain network responsible for stuttering, which may lead to effective treatments. The research found that stuttering is associated with structural changes in a specific brain network involving the putamen, amygdala, and claustrum.
Researchers at the University of Helsinki found that singing repairs the structural language network of the brain after a cerebrovascular accident. Singing also improved tract connectivity and increased grey matter volume in language regions, leading to improved speech production in patients with aphasia.
Italian speakers gesture more frequently than Swedes when retelling a story, using pragmatic gestures to present new parts of the narrative. Swedes instead rely on representational gestures to convey events and actions, indicating differing rhetorical styles and ways of conceptualizing narratives.
A new study reveals that pauses in speech can provide information about how people's brains plan and produce speech. The research found that neighboring brain regions play a crucial role in speech planning, with longer latencies corresponding to planning and shorter latencies to the physical mechanics of speaking.
Researchers found a brainstem region that regulates breathing rhythm, ensuring breathing remains dominant over speech. The circuit also involves premotor neurons in the hindbrain region called the retroambiguus nucleus (RAm), which are activated during vocalization.
Researchers developed a novel method to model 3D tongue morphology from fossilized hominin skulls, providing insights into the evolution of human speech. The approach involves geometrical skull matching and has potential applications for understanding primate tongue development.
Researchers analyzed a large database of languages to verify the relationship between climate and language sound. They found that languages around the equator tend to have higher sonority indexes, but some exceptions exist, such as Mesoamerica and Mainland Southeast Asia.
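The claim above is a correlation between distance from the equator and how vowel-heavy (sonorous) a language's sound inventory is. A minimal sketch of the kind of computation involved, using made-up (latitude, sonority) pairs — the numbers below are hypothetical, and the real study works from a large language database:

```python
# Hypothetical (absolute latitude, sonority index) pairs, for
# illustration only; the actual analysis covers hundreds of
# languages and documented exceptions like Mesoamerica.
data = [(2, 0.82), (10, 0.78), (25, 0.71), (40, 0.65), (55, 0.60)]

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Negative r: in this toy sample, sonority falls as languages
# sit farther from the equator.
print(f"r = {pearson(data):.3f}")
```

The strongly negative r here reflects only the toy data being nearly linear; real cross-linguistic correlations are weaker and require controls for areal and genealogical relatedness.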
A team of NYU researchers developed a vocal reconstruction technology that recreates the voices of patients who have lost their ability to speak. By analyzing brain recordings, they disentangled the intricate processes of feedback and feedforward during speech production.
WVU linguists analyze speech recordings to determine how intensity and duration shape pronunciation in American English and Spanish. The two-year study aims to redefine sound patterns across languages.
A recent study at the University of Helsinki found that speech production and singing are supported by the same neural network in the brain, challenging previous notions about their separate functions. The left hemisphere plays a crucial role in both abilities, particularly in terms of singing.
Research from UTHealth Houston reveals that different brain regions are engaged when processing simple versus complex melodies and sentences. The study used intracranial electrodes to map brain activity during music and language tasks, finding shared temporal lobe activity but distinct sensitivities to melodic and syntactic complexity.
Researchers studied systematic variation in African American English (AAE) speech production, including final consonant reduction, to prevent misdiagnosis of speech disorders. The study highlights the importance of accepting variation in human language without penalizing its speakers.
A study by the University of Zurich found that chimpanzees understand and respond strongly to combined calls, which they use to recruit group members in threatening situations. This discovery sheds light on the potential evolutionary origins of language's compositional structure, suggesting it may be at least 6 million years old.
Researchers found that anatomical variations in a speaker's vocal tract affect speech production, with factors such as horizontal and vertical length, head inclination, and hard palate shape influencing vowel frequencies. The study suggests that accounting for individual anatomy is crucial for understanding speech production.
A study of multilingual children in Vanuatu found that they produced as much speech as monolingual populations, despite hearing fewer minutes of speech per hour. The strongest association was with vocalisations by other children, highlighting the role that other children play in language acquisition.
A recent study at the University of Helsinki found that singing-based group rehabilitation can increase communication and speech production in stroke survivors, while also reducing social isolation. The study's results suggest that this type of rehabilitation could be a valuable addition to existing aphasia treatment protocols.
A study by HSE researchers found that only the left inferior frontal gyrus is critically involved in action naming, a finding that could help preserve speech in patients after brain surgery. The study used fMRI to localise the region and rTMS to stimulate it, finding that stimulation of this region led to more accurate action naming.
A study published in Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring found that analyzing patients' speech can distinguish between different types of dementia. The research team developed a machine learning model using speech features to identify Alzheimer's disease and Lewy body dementia.
A new study found that people with severe speaking difficulties pay more attention to hand gestures when their verbal communication is impeded. This highlights the importance of incorporating gestures into therapy for individuals with aphasia.
Research by University of Cambridge academics found that children as old as five or six struggle to reconcile inference and perspective, leading to difficulties in conversations. Combining these skills is crucial for understanding 'implicatures' - inferences made in conversation when people mean more than they say.
New research finds elevated activity in the right dorsolateral prefrontal cortex when anticipating stuttering, indicating a key role in cognitive control. This study provides fresh insight into how stutterers process anticipation, offering potential implications for therapy.
Researchers found that a brain region, the dorsal precentral gyrus, monitors auditory feedback to keep speech smooth. Hearing one's own voice with unnatural delays leads to increased activity in this region and in the superior temporal gyrus.
Researchers from Harvard University and Carnegie Mellon University found that matching the locations of faces with speech sounds significantly improves understanding of speech, especially in noisy areas. Spatial alignment is more important when background noise is louder.
A study found that beat gestures produced by infants between 14 and 58 months predict improvements in their oral narrative skills at later ages. The researchers analyzed speech and gesture production in 45 children, comparing the predictive value of beat gestures to other types of gestures.
Recent studies challenge the long-held assumption that variability in children's speech is solely due to developmental delays. Instead, factors such as socioeconomic status, dialects, and exposure to languages play a significant role. Experts argue that caregivers can aid in this process through conversations about different words and ...
Researchers discovered that Neandertals possessed the ability to perceive and produce human speech, with similar auditory capacities as modern humans. The study found that Neandertal ear structures were 'tuned' to hear frequencies within the range of modern human speech sounds.
Researchers found that speech can spread salivary and mucus droplets for at least a meter in front of a speaker, potentially transmitting viruses like coronavirus. Using lip balm, the researchers reduced the droplet size, suggesting a possible mitigation strategy.
Researchers have developed a brain-machine interface that can generate synthetic speech by controlling a virtual vocal tract based on brain activity. The technology has the potential to restore fluent communication in individuals with severe speech disabilities, including those with paralysis and neurological diseases.
A new case study from New York University finds that face transplant surgery can significantly improve speech production in patients who have experienced severe facial trauma. The study used optical tracking technology to examine the effects of the procedure on facial movements and speech intelligibility, with remarkable results.
Research finds that infants' early speech production can predict their later literacy skills. Children with more complex babble as infants performed better in letter identification tests at age six.
Researchers link vocal repertoire to brain region size in primates, finding a positive correlation between cortical association areas and vocal complexity. The study reveals the importance of specific brain regions in controlling vocal production, providing insight into human speech evolution.
A new study by UC San Francisco scientists reveals how complex articulatory movements are coordinated in the brain during fluent speech. The research found that brain regions responsible for producing speech are organized according to physical needs of the vocal tract, not just linguistic features like phonemes.
Researchers found that individuals with agrammatic aphasia omit grammatical pronouns due to cognitive resource limitations. The study suggests a new usage-based theory of grammar, prioritizing lexical elements over grammatical ones.
A new study found that sign language users' speed of comprehension depends on their conversation partners' handedness, with left-handers responding better to fellow left-handers and right-handers to right-handers. The research suggests that how signers produce their own signs plays a role in understanding others' signing.
Researchers found that Broca's area is active early in forming sentences and ends its work before a word is spoken, suggesting it plays a key role in organizing the string of sounds that express ideas. The study could benefit the treatment of language impairments due to stroke, epilepsy, and brain injuries.
A new study reveals that Broca's area, traditionally considered the command center for human speech, actually switches off when we speak out loud. This finding has major implications for diagnosing and treating stroke, epilepsy, and brain injuries that result in language impairments.
Neuroscientists found that hand gestures are part of prosody, influencing how meaning is interpreted. The study demonstrates the role of gestures in speech prosody, highlighting their importance in human communication.
Researchers found that children with brain lesions use gestures similar to typically developing children to convey simple sentences. They also discovered that producing complex sentences across gesture and speech is delayed in children with larger brain lesions.
Aphasia affects 80,000 people annually in the US, causing difficulty speaking and understanding language. Researchers at Brown University have developed a therapy to strengthen speech networks using guided speech and repetition exercises. Early testing with four patients showed improved precision and reduced errors.
Researchers found that primate lip smacking and human speech share similar frequency ranges, developmental trajectories, and neural controls. This study provides insight into the neural basis of human communication disorders by exploring the evolution of primate facial expressions and their potential link to speech.
Dr. Michael Wagner's research explores the use of identical rhymes in poetry to understand how languages use emphasis and prosody. The study reveals a systematic difference in how French and English speakers evaluate poetry, which could help computer programs produce more realistic-sounding speech.
Researchers are using songbirds to understand how the human brain produces complex vocal behaviors, including speech. By studying the neural mechanisms that govern birdsong, they hope to develop a better understanding of speech disorders and language processing.
Researchers find that stretching facial skin during speech affects perceived sound, linking production and perception. The study contributes to understanding the relationship between speech perception and production.
Studies reveal that word order affects language production, with French speakers using a different syntax than English speakers. Additionally, researchers find commonalities in naming patterns for locomotion across languages, suggesting universal rules and constraints.
Researchers at Duke University found that music tones correspond to the same numerical ratios as human speech frequencies. The study suggests that our ears favor these relationships due to exposure through everyday speech. This discovery may explain why certain tuning schemes sound more natural, like just intonation.
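The "numerical ratios" here are the small whole-number frequency ratios of just intonation. A minimal sketch (using standard music-theory ratios, not figures from the study) converts those ratios to cents and compares them with the nearest equal-tempered interval:

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Common just-intonation intervals as small whole-number ratios.
just_intervals = {
    "perfect fifth": 3 / 2,
    "perfect fourth": 4 / 3,
    "major third": 5 / 4,
}

for name, ratio in just_intervals.items():
    just = cents(ratio)
    # The nearest 12-tone equal-tempered interval is a multiple of 100 cents.
    tet = round(just / 100) * 100
    print(f"{name}: {just:.1f} cents (equal temperament: {tet})")
```

The just major third, for instance, sits about 14 cents flat of its equal-tempered counterpart, which is one reason just-intoned chords can sound "purer" than their piano equivalents.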
The study reveals that the brain uses non-auditory sensory cues to control speech, with subjects learning to compensate for robotic interference even when speech sounds normal.
Researchers discovered that jaw stiffness significantly impacts speech production, causing variability in motion as sounds are made. The study measured jaw stiffness and its effect on kinematic variability during speech production, revealing a relationship between the two.
Researchers at the Max Planck Institute developed a theory to explain lexical selection and form encoding in spoken word production. The theory proposes two major processing components: lexical selection and form encoding, which can be completed within two-thirds of a second.