Dr. Emily DeJeu on using large language models to analyze sensitive discourse

05.08.25 | Carnegie Mellon University

Large language models (LLMs) are artificial intelligence (AI) systems that can understand and generate human language by analyzing and processing large amounts of text. In a new essay, a Carnegie Mellon University researcher critiques an article on LLMs and provides a nuanced look at the models’ limits for analyzing sensitive discourse, such as hate speech. The commentary is published in the Journal of Multicultural Discourses.

“Discourse analysts have long been interested in studying how hate speech legitimizes power imbalances and fuels polarization,” says Emily Barrow DeJeu, Assistant Teaching Professor of Business Management Communication at Carnegie Mellon’s Tepper School of Business, who wrote the commentary. “This seems especially relevant today amid rising populism, nativism, and threats to liberal democracy.”

DeJeu’s commentary responds to an article that appears in the same issue of the Journal of Multicultural Discourses, entitled “Large Language Models and the Challenge of Analyzing Discriminatory Discourse: Human-AI Synergy in Researching Hate Speech on Social Media,” by Petre Breazu, Miriam Schirmer, Songbo Hu, and Napoleon Katsos. The article explores the extent to which LLMs can code racialized hate speech.

Using computerized tools to analyze language is not new. Since the 1960s, researchers have been interested in computational methods for examining large bodies of text. But some forms of qualitative analysis have historically been considered strictly within the purview of human analysts, DeJeu says. Today, there is growing interest in using LLMs to analyze discourse.

Unlike other analytical tools, LLMs are flexible: They can perform an array of analytical tasks on a wide variety of text types. While the article by Breazu et al. is timely and significant, DeJeu says the task it examines also presents challenges, because LLMs carry strict safeguards designed to prevent them from producing offensive or harmful content.
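To make that kind of LLM-driven coding concrete, the following is a minimal, hypothetical sketch of how a researcher might prompt a model to label comments against a coding scheme. It assumes the OpenAI Python client and an illustrative label set; the actual prompts, models, and categories used by Breazu et al. are not reproduced here.

```python
# Minimal sketch of LLM-assisted qualitative coding.
# Assumptions: the OpenAI Python client (pip install openai), an
# OPENAI_API_KEY in the environment, and an illustrative label set.
from openai import OpenAI

client = OpenAI()

# Illustrative labels only, not the coding scheme from the study.
LABELS = ["dehumanizing", "exclusionary", "neutral", "sympathetic"]

def code_comment(comment: str) -> str:
    """Ask the model to assign exactly one label to a comment."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a discourse-analysis study. "
                    f"Classify the comment with exactly one label from {LABELS}. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": comment},
        ],
        temperature=0,  # reduce randomness so codings are more repeatable
    )
    return response.choices[0].message.content.strip()

# A human analyst would then review and reconcile the model's labels --
# the "human-AI synergy" the article examines.
print(code_comment("They should all go back where they came from."))
```

Note that the safeguards DeJeu describes may cause a model to refuse such a request outright when the input is sufficiently offensive, which is part of what makes this kind of coding difficult to automate.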

While DeJeu commends the authors for conducting both human- and LLM-driven coding of YouTube comments on videos of Roma migrants in Sweden begging for money, she identifies two problems with their work.

DeJeu says the article is also valuable for prompting researchers to reconsider what synergy means when working with AI tools. She concludes her commentary by addressing what roles LLMs should play in critical discourse analysis: Should LLMs be used iteratively to refine researchers’ thinking, should researchers try to get them to perform like humans in order to validate or semi-automate research processes, or should there be some combination of both?

“The field will probably eventually clarify what human-AI coding looks like, but for now, we should consider these questions carefully, and the methods we use should be designed and informed by our answers,” DeJeu cautions.

Article Information

Journal: Journal of Multicultural Discourses
DOI: 10.1080/17447143.2025.2492145
Article Title: Can (and should) LLMs perform critical discourse analysis?
Publication Date: 22-Apr-2025

Contact Information

Caitlin Kizielewicz
Carnegie Mellon University
ckiz@andrew.cmu.edu

How to Cite This Article

APA:
Carnegie Mellon University. (2025, May 8). Dr. Emily DeJeu on using large language models to analyze sensitive discourse. Brightsurf News. https://www.brightsurf.com/news/147N4XJ1/dr-emily-dejeu-on-using-large-language-models-to-analyze-sensitive-discourse.html
MLA:
"Dr. Emily DeJeu on using large language models to analyze sensitive discourse." Brightsurf News, May. 8 2025, https://www.brightsurf.com/news/147N4XJ1/dr-emily-dejeu-on-using-large-language-models-to-analyze-sensitive-discourse.html.