QUT algorithm could quash Twitter abuse of women

August 27, 2020

Online abuse targeting women, including threats of harm or sexual violence, has proliferated across all social media platforms, but QUT researchers have developed a statistical model to help drum it out of the Twittersphere.

Associate Professor Richi Nayak, Professor Nicolas Suzor and research fellow Dr Md Abul Bashar from QUT have developed a sophisticated and accurate algorithm to detect these posts on Twitter, cutting through the raucous rabble of millions of tweets to identify misogynistic content.

The team, a collaboration between QUT's Faculty of Science and Engineering, Faculty of Law, and the Digital Media Research Centre, mined a dataset of one million tweets, then refined these by searching for those containing one of three abusive keywords: whore, slut and rape.
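As a purely illustrative sketch (not the researchers' code), this keyword-filtering step amounts to reducing a large corpus to candidate tweets containing one of the three target words:

```python
# Illustrative only: reduce a large tweet corpus to candidate tweets
# containing one of the three abusive keywords used in the study.
ABUSIVE_KEYWORDS = {"whore", "slut", "rape"}

def contains_abusive_keyword(tweet: str) -> bool:
    """True if any target keyword appears as a token in the tweet."""
    tokens = (word.strip(".,!?#@") for word in tweet.lower().split())
    return any(token in ABUSIVE_KEYWORDS for token in tokens)

def filter_candidates(tweets: list[str]) -> list[str]:
    """Keep only tweets containing at least one target keyword."""
    return [t for t in tweets if contains_abusive_keyword(t)]

sample = [
    "What a lovely day in Brisbane",
    "get back to the kitchen",
    "she is such a slut",
]
print(filter_candidates(sample))  # → ['she is such a slut']
```

Note that keyword filtering alone only selects candidates; as the researchers stress below, the presence of a word by itself does not establish misogynistic intent.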

Their paper, "Regularising LSTM classifier by transfer learning for detecting misogynistic tweets with small training set", has been published by Springer Nature.

"At the moment, the onus is on the user to report abuse they receive. We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online," said Professor Nayak.

"The key challenge in misogynistic tweet detection is understanding the context of a tweet. The complex and noisy nature of tweets makes it difficult.

"On top of that, teaching a machine to understand natural language is one of the more complicated ends of data science: language changes and evolves constantly, and much of meaning depends on context and tone.

"So, we developed a text mining system where the algorithm learns the language as it goes, first developing a base-level understanding and then augmenting that knowledge with both tweet-specific and abusive language.

"We implemented a deep learning algorithm called Long Short-Term Memory with Transfer Learning, which means that the machine could look back at its previous understanding of terminology and change the model as it goes, learning and developing its contextual and semantic understanding over time."
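In broad strokes, and only as an assumption-laden sketch rather than the authors' implementation, an LSTM classifier whose embedding layer is initialised from representations pre-trained on a larger generic corpus (the "transfer" step) could be assembled in Keras like this; all layer sizes and names here are illustrative:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # illustrative vocabulary size
EMBED_DIM = 100      # illustrative embedding dimension

def build_lstm_classifier(pretrained_embeddings=None):
    """Binary classifier: misogynistic (1) vs not (0).

    Passing `pretrained_embeddings` (e.g. vectors learned first on a
    large generic corpus) initialises the embedding layer from prior
    knowledge -- the transfer step -- which is then fine-tuned on the
    small labelled tweet set.
    """
    if pretrained_embeddings is not None:
        init = tf.keras.initializers.Constant(pretrained_embeddings)
    else:
        init = "uniform"
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         embeddings_initializer=init),
        layers.LSTM(64),                        # sequence/context modelling
        layers.Dense(1, activation="sigmoid"),  # probability of misogyny
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The LSTM's recurrent state is what lets the model carry earlier words forward when interpreting later ones, which is the "looking back at previous understanding" behaviour described above.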

While the system started with a base dictionary and built its vocabulary from there, context and intent had to be carefully monitored by the research team to ensure that the algorithm could differentiate between abuse, sarcasm and friendly use of aggressive terminology.

"Take the phrase 'get back to the kitchen' as an example: devoid of the context of structural inequality, a machine's literal interpretation could miss the misogynistic meaning," said Professor Nayak.

"But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet.

"Or take a tweet like 'STFU BITCH! DON'T YOU DARE INSULT KEEMSTAR OR I'LL KILL YOU'. Distinguishing this, without context, from a misogynistic and abusive threat is incredibly difficult for a machine to do.

"Teaching a machine to differentiate context through text alone, without the help of tone, was key to this project's success, and we were very happy when our algorithm identified 'get back to the kitchen' as misogynistic: it demonstrated that the context learning works."

The research team's model identifies misogynistic content with 75% accuracy, outperforming other methods that investigate similar aspects of social media language.

"Other methods based on word distribution or occurrence patterns identify abusive or misogynistic terminology, but the presence of a word by itself doesn't necessarily correlate with intent," said Professor Nayak.

"Once we had refined the one million tweets to 5000, those tweets were categorised as misogynistic or not based on context and intent, and fed into the machine learning classifier, which used these labelled samples to begin building its classification model.
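As a toy illustration of that setup (none of this is the authors' code, and the samples are invented), the labelled data amounts to (tweet, label) pairs, and a headline figure like the 75% above is simply the fraction of predictions matching the human-assigned labels:

```python
# Illustrative only: labelled samples as (tweet, label) pairs, where
# 1 = misogynistic in context and 0 = not, plus the accuracy metric
# behind a figure like 75%.
labelled = [
    ("get back to the kitchen", 1),       # abusive given context
    ("I love this kitchen design", 0),    # benign use of similar words
    ("she is such a talented artist", 0),
]

def accuracy(predictions, labels):
    """Fraction of predictions matching the human-assigned labels."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# A hypothetical classifier output against the labels above:
preds = [1, 0, 1]  # the last prediction is wrong
print(accuracy(preds, [y for _, y in labelled]))  # → 0.6666666666666666
```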

"Sadly, there's no shortage of misogynistic data out there to work with, but labelling the data was quite labour-intensive."

Professor Nayak and the team hoped the research could translate into platform-level policy that would see Twitter, for example, remove any tweets identified by the algorithm as misogynistic.

"This modelling could also be expanded upon and used in other contexts in the future, such as identifying racism, homophobia, or abuse toward people with disabilities," she said.

"Our end goal is to take the model to social media platforms and trial it in place. If we can make identifying and removing this content easier, that can help create a safer online space for all users."
-end-
The full paper can be viewed online at: https://link.springer.com/article/10.1007/s10115-020-01481-0#Ack1

Media contact:

Amanda Weaver, QUT Media, 07 3138 3151, amanda.weaver@qut.edu.au

After hours: Rose Trapnell, 0407 585 901, media@qut.edu.au

Queensland University of Technology
