AI could undermine meaningful learning unless feedback stays rooted in connection, researchers recommend

03.31.26 | University of Surrey

The rise of generative AI in higher education is reshaping how feedback is delivered, but meaningful learning could be undermined if its use is not carefully guided by principles of care, trust and connection, according to new research led by the University of Surrey.

Published in Assessment & Evaluation in Higher Education, the paper explores how generative AI technologies, including chatbots such as ChatGPT, are transforming feedback for students – highlighting both the opportunities and risks of AI in education. While AI can generate responses at speed and scale, researchers argue it cannot fully replicate the judgement and relationships that make feedback effective.

Instead, they call for a “care-full” approach – one that treats feedback not as a set of comments, but as an ongoing process of dialogue, reflection and growth. Without this, they warn, it risks reducing feedback to a transactional exercise rather than a meaningful part of learning.

Professor Naomi Winstone, Professor of Educational Psychology at the University of Surrey, and lead author of the paper, said:

“There’s a real danger that AI takes us backwards, towards seeing feedback as something we simply ‘give’ to students. Decades of research show that feedback only works when students actively engage with it and make sense of it over time.”

The paper also finds that while AI-generated feedback can be useful, students tend to place greater trust in feedback from human educators. Human feedback is more likely to be acted upon because it reflects better understanding, empathy and context – qualities AI cannot replicate.

At the same time, the research team note that AI can play a valuable complementary role. For some students, it offers a low-pressure way to explore ideas or seek feedback with reduced fear of judgement. However, over-reliance on AI may reduce meaningful interaction between students and educators and could even exacerbate existing educational inequalities if some students benefit more than others.

The research builds on an international manifesto developed by the team, which sets out ten principles for feedback in the age of AI. These are:

Feedback is a process, not corrective comments

Feedback is a relational practice

Feedback can be messy, uncomfortable, challenging and joyous

Feedback should be an ethical practice

Feedback should promote learning over time

Feedback and associated technologies should be designed in conversation with learners and educators

Feedback engagement requires time and care

Learning, not technological efficiency or compliance, should drive thinking and decision-making regarding feedback processes

Feedback can be enhanced by digital technologies, but digital technologies do not always enhance feedback

More feedback is not necessarily better for learning

Professor Winstone added:

“The key question isn’t what AI can do, it’s what it should do. If we want to protect meaningful learning, we need to design feedback around care, trust and relationships, not just speed and scale.”

To protect meaningful learning, the authors suggest that generative AI should be integrated thoughtfully into education, alongside a continued commitment to “care-full” feedback practices that are continually evaluated and refined, upholding equity, professional expertise and connection.

[ENDS]

Notes to editors

Professor Naomi Winstone is available for interview; please contact mediarelations@surrey.ac.uk to arrange.

The full paper can be found here: https://www.tandfonline.com/doi/ref/10.1080/02602938.2026.2643333?scroll=top

Journal: Assessment & Evaluation in Higher Education
DOI: 10.1080/02602938.2026.2643333
Title: The care-full craft of feedback in an age of generative AI
Publication date: 18-Mar-2026

Contact Information

Dalitso Njolinjo
University of Surrey
d.njolinjo@surrey.ac.uk
