Earphone tracks facial expressions, even with a face mask

October 12, 2020

ITHACA, N.Y. - Cornell University researchers have invented an earphone that can continuously track full facial expressions by observing the contour of the cheeks - and can then translate expressions into emojis or silent speech commands.

With the ear-mounted device, called C-Face, users could express emotions to online collaborators without holding cameras in front of their faces - an especially useful communication tool as much of the world engages in remote work or learning.

With C-Face, avatars in virtual reality environments could express how their users are actually feeling, and instructors could get valuable information about student engagement during online lessons. It could also be used to direct a computer system, such as a music player, using only facial cues.

"This device is simpler, less obtrusive and more capable than any existing ear-mounted wearable technologies for tracking facial expressions," said Cheng Zhang, assistant professor of information science and senior author of "C-Face: Continuously Reconstructing Facial Expressions by Deep Learning Contours of the Face With Ear-Mounted Miniature Cameras."

The paper will be presented at the Association for Computing Machinery Symposium on User Interface Software and Technology, to be held virtually Oct. 20-23.

"In previous wearable technology aiming to recognize facial expressions, most solutions needed to attach sensors on the face," said Zhang, director of Cornell's SciFi Lab, "and even with so much instrumentation, they could only recognize a limited set of discrete facial expressions."

Because it works by detecting muscle movement, C-Face can capture facial expressions even when users are wearing masks, Zhang said.

The device consists of two miniature RGB cameras - digital cameras that capture red, green and blue bands of light - positioned below each ear on headphones or earphones. The cameras record changes in facial contours caused when facial muscles move.

Once the images are captured, they're reconstructed using computer vision and a deep learning model. Since the raw data is in 2D, a convolutional neural network - a kind of artificial intelligence model that is good at classifying, detecting and retrieving images - helps reconstruct the contours into expressions.

The model translates the images of cheeks to 42 facial feature points, or landmarks, representing the shapes and positions of the mouth, eyes and eyebrows, since those features are the most affected by changes in expression.
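
To make that step concrete, here is a minimal sketch of a convolutional network that regresses 42 landmark (x, y) pairs from a single cheek image. The input resolution, channel counts, layer sizes and names are illustrative assumptions for the sketch, not the published C-Face model.

```python
# Minimal sketch: a CNN that maps a 2D cheek-contour image to 42 facial
# landmarks (x, y pairs). All sizes and names here are assumptions.
import torch
import torch.nn as nn

class CheekToLandmarks(nn.Module):
    def __init__(self, num_landmarks: int = 42):
        super().__init__()
        # Small convolutional feature extractor for a 1x64x64 grayscale
        # cheek image (channel count and resolution are assumed).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        # Regress an (x, y) coordinate for each of the 42 landmarks.
        self.head = nn.Linear(32 * 16 * 16, num_landmarks * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(start_dim=1)
        return self.head(feats).view(-1, 42, 2)

model = CheekToLandmarks()
cheek_image = torch.rand(1, 1, 64, 64)   # stand-in for one camera frame
landmarks = model(cheek_image)           # shape: (1, 42, 2)
print(landmarks.shape)
```

A model along these lines would be trained against ground-truth landmark positions, for example from a conventional front-view face tracker, so that at run time the cheek images alone are enough.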

These reconstructed facial expressions, represented by the 42 feature points, can also be translated into eight emojis, including "neutral," "angry" and "kissy-face," as well as eight silent speech commands designed to control a music device, such as "play," "next song" and "volume up."
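
A small classifier on top of those landmarks could then map each frame to one of the discrete outputs. The sketch below assumes a set of eight music-player commands, of which only "play," "next song" and "volume up" are named in the article; the rest, and the layer sizes, are illustrative assumptions.

```python
# Illustrative follow-on step: map 42 predicted landmarks to one of
# eight silent speech commands. The full command list and the network
# design are assumptions for this sketch, not the paper's.
import torch
import torch.nn as nn

COMMANDS = ["play", "pause", "next song", "previous song",
            "volume up", "volume down", "shuffle", "stop"]  # assumed set

classifier = nn.Sequential(
    nn.Flatten(),              # (batch, 42, 2) -> (batch, 84)
    nn.Linear(84, 64), nn.ReLU(),
    nn.Linear(64, len(COMMANDS)),
)

landmarks = torch.rand(1, 42, 2)          # output of the landmark model
logits = classifier(landmarks)
print(COMMANDS[logits.argmax(dim=1).item()])
```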

The ability to direct devices using facial expressions could be useful for working in libraries or other shared workspaces, for example, where people might not want to disturb others by speaking out loud. Translating expressions into emojis could help those in virtual reality collaborations communicate more seamlessly, said François Guimbretière, professor of information science and a co-author of the C-Face paper.

One limitation of C-Face is the earphones' limited battery capacity, Zhang said. As its next step, the team plans to work on a sensing technology that uses less power.
-end-

The research was supported by the Department of Information Science at Cornell University.

Cornell University
