In July 2025, Canvas LMS announced an integration with ChatGPT for instructor use. Touted features include the generation of image descriptions, rubrics, and assignment feedback.
It’s that last feature that gives me pause. Many other educators feel the same.
Because how ethical is it to ask students not to use AI when instructors use it for feedback? Isn’t giving feedback part of the job we’re paid to do?
On the other hand, some instructors are bogged down with ridiculous student loads. AI-generated comments may be the best way to give timely feedback for formative assessments.
So what’s a teacher to do?
Research into this area is limited, but here are some findings to help you make an informed decision about relying on AI for feedback.
Human Feedback > AI Feedback
Steiss et al. (2024) compared AI-generated feedback with feedback from trained instructors across five areas: essay criteria, directions for improvement, accuracy, supportive tone, and prioritizing important feedback comments. Instructors scored better in four of the five areas; AI scored better only in criteria-based feedback. As much as AI has improved in essay feedback, human scorers still hold the advantage in most areas.
A couple of things are worth noting in this study. One, the instructors in the study received training. How many instructors receive training and professional development in giving feedback? How many colleges provide intensive work in feedback writing to their pre-service teachers? In my 25 years in education, I’ve received none outside of my own pursuits. This makes me wonder how a random selection of teachers would perform in this situation, and it points to the need for more education geared toward writing effective feedback for students.
Two, instructors outperformed AI in four areas, but AI wasn’t far behind. This wasn’t a slam dunk for instructors, merely a slight edge. This opens the door for other considerations, such as available time and student load. Timeliness is a key factor in effective feedback, but student loads of 150–200 students can take a teacher several days (or weeks!) to give fully developed feedback. Does the timeliness that AI provides outweigh the slight advantage instructors have in those other feedback categories?
Some teachers may decide the answer is yes.
Student Perceptions
We can’t forget the other key ingredient in this dilemma: students. After all, they’re relying on our expertise to guide their learning. What are their perceptions of teachers using AI feedback?
The results are mixed. According to Nazaretsky et al. (2024), students generally preferred human feedback, even when those same students rated the AI feedback as higher in quality. In contrast, Zhang et al. (2025) found that students preferred feedback produced by AI or co-produced by AI with human modifications. Students in the Zhang study rated the AI feedback as less genuine after they learned it was generated by ChatGPT, but they didn’t lower their genuineness ratings for “co-produced” feedback (AI feedback modified by a human).
So genuineness is important. Students want human interaction.
To take advantage of both AI and human feedback, the answer may be what Zhang et al. (2025) term “co-produced” feedback and Nazaretsky et al. (2024) call “human-in-the-loop”: instructors use AI to draft feedback on student work and then modify it by adding or deleting comments, prioritizing key suggestions, and adding encouragement.
Still, teachers need to be competent at feedback to effectively modify the comments that AI provides.
The research in these areas is limited. It was conducted with college students and instructors, not in secondary classrooms. Zhang et al. (2025) also note that their findings may differ from those of Nazaretsky et al. (2024) due to increased time pressures that negatively affected the quality of the human feedback in their study.
The human component is a problem for any study. Many factors can affect the quality of human feedback, including time, training, experience, and stress levels. Different students will also prefer different approaches; some appreciate more encouraging feedback, while others want brutal honesty. These preferences will also affect how students rate AI and human feedback.
Knowing what’s best when using AI in student feedback is complicated. Both Zhang et al. (2025) and Nazaretsky et al. (2024) agree that instructors and schools need to consider the ethics of AI use in feedback.
Transparency in AI Use in Feedback
Instructors using AI for feedback need to be honest with students and explain their reasons for doing so. A quick turnaround may be crucial for some assignments. Or perhaps a family emergency has taken over your life, and AI-assisted feedback would be better than anything you’re able to give on your own.
Otherwise, if we pass AI feedback off as our own, we’re just as guilty of using AI unethically. Students rely on their teachers’ expertise to guide their learning, and we have a professional obligation to provide that guidance.
Like everything else in the AI and education world, opinions vary across the spectrum. The best thing you can do is stay true to your principles and stay updated on the latest research on the effectiveness of AI feedback.
Source: Melissa Pilakowski
https://medium.com/educreation/ai-generated-feedback-vs-human-feedback-639321d530b8