LAWRENCE — Artificial intelligence is steadily becoming more embedded in journalism, part of how journalists write, edit, research and more. But little is known about how future journalists are learning about the technology. New research from the University of Kansas has found journalism classes across the country are taking varying approaches, from treating its use as academic dishonesty to encouraging it to discussing the matter philosophically. That scattershot approach can both shortchange and confuse students, while more consistency could better serve education and practice, according to the authors.
Researchers compared 60 journalism course syllabi from 15 universities across the United States, finding variation within schools and from one type of class to the next on how AI should or should not be used. Three general approaches emerged: AI as a threat to learning and professional standards, AI as a tool permitted under strict boundaries, and AI as a subject of ethical and professional inquiry.
The research stemmed from a class project by Samuel Muzhingi, a doctoral student at KU. A researcher whose work focuses on how emerging technologies are adopted, regulated and sustained in communication contexts, he analyzed existing literature on how programs in countries such as Egypt, Spain and Brazil approached AI use in journalism education. He found inconsistency.
“That's something that I also saw here in the U.S., like, you get different kinds of policies where, for example, at one institution some classes are adopting it, then another class is not adopting it, and it's the same institution, and it is something that confuses students,” Muzhingi said. “Students are like, ‘OK, so which class or which professor should I listen to more?’”
Analysis showed that syllabi of certain types of classes tended to adhere to certain approaches to AI. Writing classes tended to take the "threat to learning" approach and discourage its use. The finding is not surprising, as institutions want students to be able to write on their own, a skill at the heart of journalism, the researchers said. Design and photography classes tended toward permitting use under strict boundaries, while media ethics and law classes tended to treat it as a subject of professional inquiry.
While a variety of approaches is not entirely surprising, given that the field itself is still figuring out how to use AI, such inconsistency does not necessarily serve students best.
“That's very much been a discussion among professors of these classes about how we can best prepare students to enter these fields when professionals are still trying to figure out best practices,” said Alyssa Appelman, associate professor of journalism & mass communications at KU and a co-author. “I was very excited when Samuel mentioned that he wanted to do some research about this topic, because I think it's a ripe area of research to look at this overlap between education and technology, specifically in the context of journalism education.”
Course syllabi offered a wide range of approaches to AI. Syllabi that fell under the threat theme emphasized that AI writing lacks the integrity and rhetorical judgment required in journalism. They also noted that a failure to cite AI-created content would be considered plagiarism and reported as academic dishonesty.
Courses often listed AI as a tool, but not as a writer, something that could be used to check grammar or spelling, though often with warnings that the technology is prone to hallucinations and bias. Some said AI's use would be allowed, but only with the instructor's approval.
Those that viewed AI as a topic of professional inquiry often incorporated it in class readings or assigned students to write about and discuss how it has presented challenges to the media industry.
The study, written with Hong Tien Vu of the University of Colorado and Tamar Wilner, assistant professor of journalism & mass communications at KU, was published in Journalism & Mass Communication Educator.
The inconsistency and mixed messages indicate a need for clearer approaches, at least within courses offered at a given institution, the authors wrote. And guidance from accrediting bodies such as the Association for Education in Journalism and Mass Communication could help schools craft clear, consistent policies.
“As an instructor, even if I have concerns about the tool, I still see a responsibility to help students to engage with it critically. It’s not just about using AI but understanding its limits and its impact on journalistic practice,” Muzhingi said. “We may not be able to avoid it, but we can be intentional about how it is integrated, especially as employers are beginning to ask about these skills.”
Muzhingi and Appelman have also published a study gauging journalism students' ethical concerns about AI adoption in the field. They hope to further research how students respond to and engage with AI tools in their work when given clear guidelines compared with when they are not.
“One of my biggest takeaways from this study is how important it is for instructors to be clear about their expectations at the onset of class or at the onset of each assignment,” Appelman said. “As of right now, it's so different across different programs, professors can't assume that students are coming in knowing where the boundaries are, what the appropriate uses are. Professors need to be very clear, because these findings suggest that semester to semester, or even class to class, students are getting different advice from different programs.”
When AI Enters the Syllabus: Journalism’s Crossroads of Threat and Opportunity
23-Mar-2026