More similar than they think: Liberals and conservatives exaggerate perceived moral views

December 12, 2012

Moral stereotypes that both liberals and conservatives hold about "typical" members of each group are generally accurate but exaggerated, for their own group as well as for the other, according to new research published December 12 in the open-access journal PLOS ONE by Jesse Graham of the University of Southern California and colleagues at the University of Virginia and New York University.

The researchers asked 2,212 U.S. participants to answer questions about moral beliefs either with their own views or as they imagined a typical liberal or a typical conservative would answer. They found that liberals endorsed individual-focused moral concerns such as compassion and fairness more than conservatives did, while conservatives endorsed group-focused concerns such as loyalty and respect for authority. Across the political spectrum, participants correctly identified the direction of "typical" liberal and conservative moral endorsements but overestimated how extreme those views are. These perceived stereotypes exaggerated the moral ideologies of the respondent's own group as well as those of the other group, and liberals were the least accurate about the views held by both groups.

Graham explains, "Rather than finding that liberals think conservatives are immoral, and conservatives think the same about liberals, we found that all three groups shared exaggerated moral stereotypes about partisans on either side. These moral stereotypes were basically that liberals don't care at all about loyalty, authority, and sanctity, and that conservatives don't care at all about compassion, fairness, and equality. The findings suggest that liberals and conservatives, while differing systematically in their moral worldviews, are actually more similar in their moral judgments than anyone thinks."
-end-
Citation: Graham J, Nosek BA, Haidt J (2012) The Moral Stereotypes of Liberals and Conservatives: Exaggeration of Differences across the Political Spectrum. PLoS ONE 7(12): e50092. doi:10.1371/journal.pone.0050092

Financial Disclosure: This work was supported by Project Implicit. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interest Statement: The authors have declared that no competing interests exist.

PLEASE LINK TO THE SCIENTIFIC ARTICLE IN ONLINE VERSIONS OF YOUR REPORT (URL goes live after the embargo ends): http://dx.plos.org/10.1371/journal.pone.0050092

PLOS
