How will people interact with technology in the future?

May 09, 2016

New research on how people will interact with technology in the future will be presented this week at ACM CHI 2016, one of the world's most important conferences on human-computer interaction, in San Jose, USA [7-12 May].

A team of researchers led by Professor Mike Fraser and Dr Anne Roudaut from the University of Bristol's Bristol Interaction Group (BIG) will present six papers at the international conference. The conference brings together researchers from universities, corporations and start-ups across the world, and the work presented could change the way people interact and collaborate in the future.

The research being presented, which points to possible future applications, includes:

PowerShake - power transfer interactions for mobile devices

Current mobile devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Yet other devices that people carry, such as a smartwatch or camera, may have sufficient battery to support such a task. PowerShake explores power as a shareable commodity between mobile and wearable devices, using wireless power transfer to enable power-sharing on the go.

Investigating text legibility on non-rectangular displays

Emerging technologies allow for the creation of non-rectangular displays with few constraints on shape. In this paper, the researchers investigate how best to display text on such free-form displays.

EMPress - practical hand gesture classification with wrist-mounted electromyography (EMG) and pressure sensing

Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. This paper shows that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures.

GauntLev - a wearable to manipulate free-floating objects

GauntLev is a tool that generates remote forces, allowing people to handle dangerous materials, or free-floating objects in zero-g environments, without contact or constrictions. The research team found that basic manoeuvres can be performed when acoustic levitators are attached to moving hands. A Gauntlet of Levitation and a sonic screwdriver will be presented, demonstrating manoeuvres for capturing, moving, transferring and combining particles.

Sustainable interaction design, cloud services and the digital infrastructure

Design-for-environment methods tend to focus on the impact of device manufacturing and use. However, significant environmental impact now comes from the infrastructure that provides the services a device enables. The paper, which has won a Best Paper award, analyses the different ways in which design decisions result in environmental impacts through their use of the digital infrastructure, and extends Blevis' Sustainable Interaction Design rubric to incorporate considerations of the digital infrastructure.

Shared language and the design of home healthcare technology

This paper explores the importance of language in the design of smart home technologies for healthcare. The research team present data, gathered through an ethnographic study and meetings with user advisory groups, that show the need for a shared language which avoids jargon, ambiguous words and emotive words. The team also ran a workshop with researchers developing smart health technologies and a focus group with end users, both focused on generating such a shared language.

Dr Anne Roudaut, Lecturer from the University's Department of Computer Science and BIG group, said: "The body of research we are presenting shows that human-computer interfaces have an important role to play in how people will interact and use technology in the future."
-end-


University of Bristol
