
Could your computer please be more polite? Thank you

June 30, 2020

PITTSBURGH--In a tense time when a pandemic rages, politicians wrangle for votes and protesters demand racial justice, a little politeness and courtesy go a long way. Now researchers at Carnegie Mellon University have developed an automated method for making communications more polite.

Specifically, the method takes nonpolite directives or requests -- those that use either impolite or neutral language -- and restructures them or adds words to make them more well-mannered. "Send me the data," for instance, might become "Could you please send me the data?"

The researchers will present their study on politeness transfer at the Association for Computational Linguistics annual meeting, which will be held virtually beginning July 5.

The idea of transferring a style or sentiment from one communication to another -- turning negative statements positive, for instance -- is something language technologists have been doing for some time. Shrimai Prabhumoye, a Ph.D. student in CMU's Language Technologies Institute (LTI), said performing politeness transfer has long been a goal.

"It is extremely relevant for some applications, such as if you want to make your emails or chatbot sound more polite or if you're writing a blog," she said. "But we could never find the right data to perform this task."

She and LTI master's students Aman Madaan, Amrith Setlur and Tanmay Parekh solved that problem by generating a dataset of 1.39 million sentences labeled for politeness, which they used for their experiments.
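The release does not spell out how the labels were assigned, but the basic workflow can be pictured as scoring each sentence and sorting it into a polite or nonpolite bucket. The sketch below illustrates that idea with a deliberately toy keyword scorer; the function names, marker list and threshold are illustrative stand-ins, not the researchers' actual labeling method.

```python
# Illustrative sketch only: a toy politeness scorer used to bucket corpus
# sentences into "polite" and "nonpolite" sets. The CMU dataset was built
# with a far more careful labeling process; this merely shows the shape of
# the task.

POLITE_MARKERS = {"please", "thanks", "thank", "could", "would", "appreciate"}

def politeness_score(sentence: str) -> float:
    """Toy score: fraction of tokens that are common politeness markers."""
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    if not tokens:
        return 0.0
    return sum(t in POLITE_MARKERS for t in tokens) / len(tokens)

def label_sentences(sentences, threshold=0.1):
    """Split sentences into polite and nonpolite buckets by score."""
    polite, nonpolite = [], []
    for s in sentences:
        (polite if politeness_score(s) >= threshold else nonpolite).append(s)
    return polite, nonpolite

if __name__ == "__main__":
    corpus = ["Could you please send me the data?", "Send me the data."]
    print(label_sentences(corpus))
    # -> (['Could you please send me the data?'], ['Send me the data.'])
```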

The source of these sentences might seem surprising. They were derived from emails exchanged by employees of Enron, a Texas-based energy company that, until its demise in 2001, was better known for corporate fraud and corruption than for social niceties. But half a million corporate emails became public as a result of lawsuits surrounding Enron's fraud scandal and subsequently have been used as a dataset for a variety of research projects.

But even with a dataset in hand, the researchers still faced the challenge of simply defining politeness.

"It's not just about using words such as 'please' and 'thank you,'" Prabhumoye said. Sometimes, it means making language a bit less direct, so that instead of saying "you should do X," the sentence becomes something like "let us do X."

And politeness varies from one culture to the next. It's common for native speakers of North American English to use "please" in requests to close friends, but in Arab culture it would be considered awkward, if not rude. For their study, the CMU researchers restricted their work to speakers of North American English in a formal setting.

The politeness dataset was analyzed to determine the frequency and distribution of words in the polite and nonpolite sentences. Then the team developed a "tag and generate" pipeline to perform politeness transfers. First, impolite or nonpolite words or phrases are tagged and then a text generator replaces each tagged item. The system takes care not to change the meaning of the sentence.
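As a rough illustration of that two-step pipeline, the sketch below mimics tag and generate with simple lookup tables. In the actual system both steps are learned neural models; the rule dictionaries here are purely hypothetical stand-ins.

```python
# Toy illustration (not the authors' implementation) of the "tag and
# generate" pipeline: a tagger marks nonpolite spans, then a generator
# rewrites each marked span while leaving the rest of the sentence intact.

TAGGER_RULES = {
    # nonpolite span -> placeholder
    "Send me": "[TAG]",
}

GENERATOR_RULES = {
    # placeholder -> polite replacement
    "[TAG]": "Could you please send me",
}

def tag(sentence: str) -> str:
    """Step 1: replace nonpolite spans with placeholder tags."""
    for span, placeholder in TAGGER_RULES.items():
        sentence = sentence.replace(span, placeholder)
    return sentence

def generate(tagged: str) -> str:
    """Step 2: fill each placeholder with polite text, preserving the
    untagged words and therefore the sentence's meaning."""
    for placeholder, polite_text in GENERATOR_RULES.items():
        tagged = tagged.replace(placeholder, polite_text)
    return tagged

if __name__ == "__main__":
    print(generate(tag("Send me the data.")))
    # -> "Could you please send me the data."
```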

"It's not just about cleaning up swear words," Prabhumoye said of the process. Initially, the system had a tendency to simply add words to sentences, such as "please" or "sorry." If "Please help me" was considered polite, the system considered "Please please please help me" even more polite.

But over time the scoring system became more realistic and the changes became subtler. First person singular pronouns, such as "I," "me" and "mine," were replaced by first person plural pronouns, such as "we," "us" and "ours." And rather than position "please" at the beginning of the sentence, the system learned to insert it within the sentence: "Could you please send me the file?"
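The pronoun shift can be pictured as a simple rewrite rule. The sketch below hand-codes that rule for illustration; the researchers' system learned such patterns from data rather than applying a fixed mapping like this one.

```python
# Illustrative only: hand-written version of the singular-to-plural
# pronoun shift described above.

import re

PRONOUN_MAP = {"i": "we", "me": "us", "my": "our", "mine": "ours"}

def soften_pronouns(sentence: str) -> str:
    """Replace first person singular pronouns with first person plural ones,
    preserving capitalization."""
    def swap(match):
        word = match.group(0)
        repl = PRONOUN_MAP[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

if __name__ == "__main__":
    print(soften_pronouns("I need the report before my meeting."))
    # -> "We need the report before our meeting."
```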

Prabhumoye said the researchers have released their labeled dataset for use by other researchers, hoping to encourage them to further study politeness.
-end-
In addition to the students, the study's co-authors included several professors from the LTI and the Machine Learning Department -- Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov and Alan Black. The Air Force Research Laboratory, Office of Naval Research, National Science Foundation, Apple and NVIDIA supported this research.

Carnegie Mellon University
