Web of psychological cues may tempt people to reveal more online

April 25, 2020

UNIVERSITY PARK, Pa. -- While most people say they are extremely concerned about their online privacy, previous experiments have shown that, in practice, users readily divulge private information online.

In a study published in the Proceedings of the 2020 ACM CHI Conference on Human Factors in Computing Systems (CHI 2020), a team of Penn State researchers identified a dozen subtle but powerful cues that may shed light on why people talk a good game about privacy but fail to follow through in real life.

"Most people will tell you they're pretty worried about their online privacy and that they take precautions, such as changing their passwords," said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

"But, in reality, if you really look at what people do online and on social media, they tend to reveal all too much. What we think is going on is that people make disclosures in the heat of the moment by falling for contextual cues that appear on an interface."

Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences (ICDS), said that certain cues analyzed by the researchers significantly increased the chance that people would turn over private information such as Social Security numbers or phone numbers. The cues exploit common pre-existing beliefs about authority, bandwagon, reciprocity, sense-of-community, community-building, self-preservation, control, instant gratification, transparency, machine, publicness and mobility.

"What we did in this study is identify 12 different kinds of appeals that influence people to reveal information online," said Sundar. "These appeals are based on rules of thumb that we all hold in our head, called heuristics."

For example, the rule of thumb that 'if most others reveal their information, then it is safe for me to disclose as well' is labeled the 'bandwagon heuristic' in the study.

"There are certainly more than 12 heuristics, but these are the dominant ones that play an important role in privacy disclosure," added Sundar, who worked with Mary Beth Rosson, Professor-in-Charge of Human Computer Interaction and director of graduate programs in the College of Information Sciences and Technology.

The researchers explain that heuristics are mental shortcuts that could be triggered by cues on a website or mobile app.

"These cues may not always be obvious," according to Rosson. "The bandwagon cue, for example, can be as simple as a statement that is added to a website or app to prompt information disclosure," she added.

"For example, when you go on LinkedIn and you see a statement that says your profile is incomplete and that 70 percent of your connections have completed their profiles, that's a cue that triggers your need to follow others -- which is what we call a bandwagon effect," said Sundar. "We found that those with a stronger pre-existing belief in 'bandwagon heuristic' were more likely to reveal personal information in such a scenario."

For the authority cue, Rosson said that a graphic that signals the site is being overseen by a trusted authority may make people comfortable with turning private information over to the company.

"The presence of a logo of a trusted agency such as FDIC or even a simple icon showing a lock can make users of online banking feel safe and secure, and it makes them feel that somewhere somebody is looking after their security," said Rosson.

The researchers said that ingrained trust in authority, or what they call 'authority heuristic,' is the reason for disclosure of personal information in such scenarios.

"When interviewed, our study participants attributed their privacy disclosure to the cues more often than other reasons," said Sundar.

An awareness of major cues that prey on common rules of thumb may make people more savvy web users and could help them avoid placing their private information into the wrong hands.

"The number one reason for doing this study is to increase media literacy among online users," said Sundar.

He added that the findings could also be used to create alerts that warn users when they encounter these cues.

"People want to do the right thing and they want to protect their privacy, but in the heat of the moment online, they are swayed by these contextual cues," said Rosson. "One way to avoid this is to introduce 'just-in-time' alerts. Just as users are about to reveal information, an alert could pop up on the site and ask them if they are sure they want to do that. That might give them a bit of a pause to think about that transaction," she added.

For the study, the researchers recruited 786 people to participate in an online survey. Participants reviewed 12 scenarios they might encounter online and rated their willingness to disclose personal information in each.

To ensure that the sample was nationally representative, the participants were chosen so that their demographics were consistent with statistics provided by the U.S. Census Bureau.

The team also included Jinyoung Kim and Maria D. Molina, doctoral candidates in mass communications at Penn State.

The National Science Foundation supported this research.

