Thursday, January 12, 2023

AI and physics education

New technologies bring new opportunities for physics education. ANU Physics has for several years been looking at incorporating virtual reality (VR) into their courses, particularly for first year physics. Two examples:

"Dissonance-VR targets misconceptions around forces by quizzing students about the forces acting on a basketball, and then presenting them with the physical world that manifests their answer. Any misconception they have results in an unphysical world that feels wrong. A narrator guides them to correct their choice and their misconception."

"Field-VR is an electric and magnetic field sandbox that allows students to visualise electric and magnetic fields and be immersed in them. They can build complex fields using standard electric and magnetic sources, as well as test charged particle trajectories. The hope is that this visual tool will aid students in learning EM, particularly those who find spatial problems challenging. This software has recently been upgraded to allow for multiple users - i.e. collaborative VR tutorials."

How can emerging AI tools for text generation (e.g. ChatGPT) and image analysis/generation (e.g. StableDiffusion) be useful for education?

From what I've seen in the news and social media, the academic perspective on these tools has so far been largely negative, focusing on their potential for cheating and the resulting obsolescence of take-home assignments. Even if methods for detecting AI-generated text are improved and made widely available, the student who modifies an AI-generated first draft will probably still have an advantage.

Rather than making futile attempts to stamp out these new tools, we should be thinking about how they will change our workflows in physics (and other fields) and how courses should be updated to incorporate them.

Already there are businesses springing up selling AI content generators for blogs, marketing materials, coursework, etc. It won't be long before text-to-text models are used by working scientists for tedious tasks such as drafting article introductions, literature reviews, and maybe even PhD theses, most likely using a general-purpose model augmented with text harvested from the articles your work cites. This will give us more time to do actual science, provided we remain mindful of the biases and limitations of AI models. We will need to train students to prompt these models to obtain a useful first draft, and to identify and correct physics errors in the generated text. This is not unlike the role of professors today, who assign a research topic, get the student started with a list of references to read, and eventually (nominally) make improvements to the first draft written by the student.

Lab courses will also be transformed by the availability of fast AI computer vision tools for object detection, image segmentation, and so on. I remember my first year physics labs frequently involved mucking around with stopwatches to make (inaccurate) measurements of stuff such as the local gravitational acceleration. These kinds of experiments can be made enormously easier and more precise by recording a video using a smartphone camera and post-processing to extract the needed observables. This will change the required skills and dominant sources of error.
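As a minimal sketch of the idea: once a tracker has extracted per-frame positions of a dropped object from the video, estimating g reduces to a quadratic fit. The function and data below are illustrative (synthetic "tracked" heights, not real CV output):

```python
import numpy as np

def estimate_g(times, heights):
    """Fit h(t) = h0 + v0*t - (g/2)*t^2 and return the estimated g.

    `times` and `heights` are per-frame timestamps (s) and tracked
    object heights (m), as would be extracted from a smartphone video
    by a computer vision library.
    """
    # Quadratic fit: h ≈ a*t^2 + b*t + c, so g = -2a
    a, b, c = np.polyfit(times, heights, 2)
    return -2.0 * a

# Synthetic per-frame data for a ball dropped from 2 m, filmed at 120 fps
t = np.arange(0, 0.5, 1 / 120)
h = 2.0 - 0.5 * 9.81 * t**2
print(round(estimate_g(t, h), 2))  # → 9.81
```

With real video the dominant errors shift from human reaction time to frame rate, lens distortion, and pixel-to-metre calibration, which is exactly the change in skills the paragraph above anticipates.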

Apart from changes to the techniques, we will also be able to conduct experiments at a much larger scale, e.g. involving hundreds or thousands of objects that can be individually tracked using computer vision libraries. An infamous thermodynamics lab experiment at ANU involved testing the ergodic hypothesis by observing the motion of several battery-powered cat balls on a partitioned table (explained here) and periodically counting the number of "atoms" (balls) in each partition. This was not only tedious, but randomly failing batteries often led to experimental results apparently violating the laws of statistical mechanics. With computer vision you get the full time series of position data, so you can not only compute the required observables for much larger system sizes, but also identify and exclude balls as soon as their batteries fail.
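A rough sketch of that post-processing step, assuming a multi-object tracker has already produced a (frames × balls × 2) array of positions (the function name, divider position, and stall thresholds here are all made up for illustration):

```python
import numpy as np

def partition_counts(positions, divider_x=0.5, stall_window=30, stall_tol=1e-3):
    """Count live balls on each side of a divided table, per frame.

    positions: array of shape (T, N, 2) holding per-frame (x, y) for N
    tracked balls. A ball is flagged dead once it moves less than
    `stall_tol` over the last `stall_window` frames (battery failure),
    and is excluded from the counts from that frame onward.
    """
    T, N, _ = positions.shape
    dead = np.zeros(N, dtype=bool)
    left, right = [], []
    for t in range(T):
        if t >= stall_window:
            disp = np.linalg.norm(positions[t] - positions[t - stall_window], axis=1)
            dead |= disp < stall_tol  # once dead, stays dead
        live = positions[t][~dead]
        on_left = live[:, 0] < divider_x
        left.append(int(on_left.sum()))
        right.append(int((~on_left).sum()))
    return left, right
```

From the same time series you could just as easily compute occupancy histograms or crossing rates for thousands of balls, which is the scale-up the paragraph above is pointing at.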

What do you think? Is this the future? Or have these techniques already been implemented by lab demonstrators and I'm just showing my age?
