Greg Kestin visited the Penn State Eberly College of Science last month as part of the Grove Center spring speaker series and a physics colloquium. Kestin is the associate director of science education and a lecturer on physics at Harvard University, where he earned his doctorate in theoretical particle physics and quantum field theory. His recent work focuses on AI-powered technology for education.
In his talk, Kestin focused on how AI tools like ChatGPT can both hinder and enhance learning, depending on how they are used. He first described research showing that when students use general-purpose AI tools on their own, they often let the AI do too much of the thinking. In two experiments, students who used these tools performed worse on tests than students who studied with more traditional methods, suggesting that overreliance on AI can weaken critical thinking and independent problem-solving.
Kestin then turned to an experiment with a very different AI design. In this study, students were split into two groups: one learned through in-class active learning methods, while the other used an AI tutor built on principles of effective teaching, such as scaffolding and prompting students to explain their reasoning. In this case, students using the AI tutor reported feeling more engaged and motivated, and they performed better on tests than those in the active learning group.
For faculty and graduate students in the audience, Kestin’s message was clear. AI is not inherently good or bad for learning. When tools are grounded in learning principles and aligned with disciplinary goals, they can support and even enhance student learning. When they simply make it easier for students to avoid thinking, they can get in the way.