No posts for a while as I was very busy with teaching this term. Last week I saw this provocative article which really resonated with the course I taught: Everyone is cheating their way through college. In summary, if students can use a large language model (LLM) to complete an assessment (even when expressly forbidden), they will.
In the electromagnetism course I just taught, this was also my experience. Many take-home assignments had responses that looked convincing at first glance but made no sense upon reading, which meant the student hadn't even bothered to vet the response. Straight from ChatGPT to the assignment submission, no thinking required!
Unsurprisingly, students who relied on generative AI to complete their take-home assignments fared very poorly in the closed-book exams, failing to grasp even basic concepts or sanity check their answers. Many failed the course.
It is sad to see so many students forking out substantial course fees and then delegating their "thinking" to a large language model.
Why are they doing so?
Some students noted in the course feedback that they didn't see the relevance of the course content to their future major, particularly those interested in architecture and information systems. Since it's a compulsory course, they just want to pass it and be done with it. They don't think the material will be useful to them later on, so they take whatever the fastest route to a passing grade happens to be.
This is one area where we need to do better as educators. Physics is not just the facts and various equations to be solved - it's also the mindset of decomposing a complex system into its fundamental components to understand how it really works. This is exemplified beautifully by the unification of the different laws of electricity and magnetism into Maxwell's equations. Unfortunately we only get to this point in the final week of the course, long after the disinterested students have checked out.
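For readers who haven't seen them, these are the four equations (in differential form, SI units) that unify everything the course covers, from Coulomb's law to electromagnetic induction:

```latex
\begin{align}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0 && \text{(no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} && \text{(Faraday's law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} && \text{(Ampère--Maxwell law)}
\end{align}
```

Together they also predict electromagnetic waves travelling at the speed of light, which is the payoff the disengaged students never stick around for.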
Real-world problems aren't solved by exams. But for now, exams are the only way to reliably measure a student's mastery of the subject, rather than their ability to outsource thinking to an easily available LLM. This isn't going to change anytime soon. Students who use LLMs as a crutch will fare poorly in the exams.
The student distribution is becoming increasingly bimodal - the top ones get better with the help of LLMs, while the lower end is doing worse, particularly in exams. The middle suffers the most. It becomes hard to distinguish a cheater who aces the take-home assignments and bombs the exams from an honest student who receives an average grade for both. Only the students with the very top marks (guaranteeing a good exam score) can be trusted to have truly mastered the subject.
Moreover, I've seen how the students at the top end of the curve are able to use LLMs to enormously enhance their productivity, for example by quickly generating draft code for numerical simulations (which they then go through to fix the inevitable bugs). There's no longer a need to wade through the matplotlib documentation to make a usable plot. But you still need to learn the fundamentals to be able to fix the errors!
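To give a flavour of the kind of numerical sanity check I mean (this is my own toy example, not taken from any assignment), here is a few lines of Python verifying that the on-axis field of an electric dipole falls off like 1/r³ rather than the 1/r² of a point charge. A student who grasps the physics can write or debug this in minutes; a student who can't tell 1/r² from 1/r³ has no way to spot when the LLM's draft is wrong.

```python
# Toy sanity check: axial field of an electric dipole (two charges +q, -q
# separated by d along the axis). Far away, E should scale like 1/z^3,
# so doubling the distance should reduce the field by roughly a factor of 8.

k = 8.9875517923e9   # Coulomb constant (N m^2 / C^2)
q, d = 1e-9, 1e-3    # charge (C) and separation (m), chosen arbitrarily

def dipole_E_on_axis(z):
    """Superpose the Coulomb fields of +q at z=+d/2 and -q at z=-d/2."""
    return k * q / (z - d / 2)**2 - k * q / (z + d / 2)**2

ratio = dipole_E_on_axis(1.0) / dipole_E_on_axis(2.0)
print(ratio)  # close to 8, confirming the 1/z^3 far-field scaling
```

If the ratio had come out near 4 instead, that would flag a bug (or a conceptual error) immediately. That habit of checking limiting cases is exactly what the exam-failing students never developed.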