Tuesday, May 30, 2023

Physics models that are wrong but useful

 "All models are wrong, but some are useful" is a saying usually attributed to statistician George Box. In physics we are often tempted to create a model that might be correct, but ends up being hopelessly useless. 

For example, the multi-particle Schrödinger equation can in principle give us an exact description of the energy levels of any molecule we would like to study, and it underlies the field of ab-initio quantum chemistry. But it cannot be solved except for the simplest of molecules. Heuristic approximation schemes which may not be rigorously justified are essential to obtain useful predictions for large problems of practical interest. String theory is another example, with some arguing it is not even wrong.

There are many neat examples of models that, while wrong, lead to useful predictions and progress in our understanding:

  • The Drude model of electrical conductivity. In the original paper a fortuitous cancellation of two big errors (an overestimated electronic specific heat and an underestimated electron velocity) yielded agreement with the measured Wiedemann-Franz ratio. Nevertheless, the model remains a very good approximation for the frequency-dependent conductivity of metals.
  • Conductivity at low temperatures: Before 1911 there were various predictions for the resistivity of metals cooled to zero temperature: zero, a finite value, and even infinite (argued by Lord Kelvin). Efforts to determine which prediction was correct led to the unexpected discovery of superconductivity.
  • The Quantum Hall effect: quantization of the Hall conductivity was originally predicted in the absence of scattering, and thus the quantization was expected to hold only to finite precision. Efforts to measure the accuracy of the quantization led to the Nobel Prize-winning experiments.

A good model doesn't need to be 100% correct. A good model needs to give an actionable prediction.

Thursday, January 12, 2023

AI and physics education

New technologies bring new opportunities for physics education. ANU Physics has for several years been looking at incorporating virtual reality (VR) into their courses, particularly for first year physics. Two examples:

"Dissonance-VR targets misconceptions around forces, by quizing students about forces acting on a basketball, and then presenting them with the physical world that manifests their answer. Any misconception they have results in an unphysical world that feels wrong. A narrator guides them to correct their choice and their misconception."

"Field-VR is an electric and magnetic field sandbox that allows students to visualise electric and magnetic fields and be immersed in them. They can build complex fields using standard electric and magnetic sources, as well as test charged particle trajectories. The hope is that this visual tool will aid students in learning EM, particularly those who find spatial problems challenging. This software has recently been upgraded to allow for multiple users - i.e. collaborative VR tutorials."

How can emerging AI tools such as text generation (e.g. ChatGPT) and image analysis/generation (e.g. Stable Diffusion) be useful for education?

From what I've seen in the news and social media, the academic perspective on these tools has so far been largely negative, focusing on how they may be used for cheating, making take-home assignments obsolete. Even if methods for detecting AI-generated text are improved and made widely available, the student who modifies an AI-generated first draft will probably have an advantage. 

Rather than making futile attempts to stamp out these new tools, we should be thinking about how they will change our workflows in physics (and other fields) and how courses should be updated to incorporate them.

Already there are businesses springing up selling AI content generators for blogs, marketing materials, coursework, etc. It won't be long before text-to-text models will be used by working scientists for tedious tasks such as drafting article introductions, literature reviews, and maybe even PhD theses, most likely using a general purpose model augmented with text harvested from the articles your work cites. This will give us more time to do actual science, provided we remain mindful of the biases and limitations of AI models. We will need to train students on how to prompt these models to obtain a useful first draft and to identify and correct physics errors in the generated text. This is not unlike the role of professors today, who will assign a research topic and get the student started by providing a list of references to read and eventually (nominally) make improvements to the first draft written by the student.

Lab courses will also be transformed by the availability of fast AI computer vision tools for object detection, image segmentation, and so on. I remember my first year physics labs frequently involved mucking around with stopwatches to make (inaccurate) measurements of stuff such as the local gravitational acceleration. These kinds of experiments can be made enormously easier and more precise by recording a video using a smartphone camera and post-processing to extract the needed observables. This will change the required skills and dominant sources of error.
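
As a concrete illustration, here is a minimal sketch (in Python, using OpenCV and NumPy) of how a falling-ball video might be processed to estimate the local gravitational acceleration. The file name, brightness threshold, and pixel-to-metre calibration are hypothetical placeholders that would need to be adapted to the actual footage.

    # Minimal sketch: estimate g from a smartphone video of a dropped ball.
    # Assumptions (hypothetical): the ball is the brightest object in the frame,
    # "drop.mp4" exists, and PIXELS_PER_METRE has been calibrated beforehand.
    import cv2
    import numpy as np

    PIXELS_PER_METRE = 1000.0   # calibration, e.g. from a ruler visible in the frame
    BRIGHTNESS_THRESHOLD = 200  # tune for the actual lighting conditions

    cap = cv2.VideoCapture("drop.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)

    times, heights = [], []
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, BRIGHTNESS_THRESHOLD, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:  # ball detected: record its centroid
            y_pixels = m["m01"] / m["m00"]
            times.append(frame_index / fps)
            heights.append(-y_pixels / PIXELS_PER_METRE)  # image y-axis points down
        frame_index += 1
    cap.release()

    # Fit y(t) = y0 + v0*t - (g/2)*t^2 and read g off the quadratic coefficient.
    coeffs = np.polyfit(times, heights, 2)
    print(f"Estimated g = {-2 * coeffs[0]:.2f} m/s^2")

With a typical 30 or 60 fps camera this already gives far more data points than a stopwatch, and the dominant error shifts from human reaction time to calibration of the length scale.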

Apart from changes to the techniques, we will also be able to conduct experiments at a much larger scale, e.g. involving hundreds or thousands of objects that can be individually tracked using computer vision libraries. An infamous thermodynamics lab experiment at ANU involved testing the ergodic hypothesis by observing the motion of several battery-powered cat balls on a partitioned table (explained here) and periodically counting the number of "atoms" (balls) in each partition. This was not only tedious, but randomly-failing batteries often led to experimental results apparently violating the laws of statistical mechanics. With computer vision you can get the full time series of the position data and not only compute the required observables for much larger system sizes, but also identify and exclude balls as soon as their batteries fail.
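
A sketch of how such an experiment might be automated is below, again using OpenCV: each frame is thresholded, ball centroids are extracted with connected-component analysis, and the counts on each side of the partition form the time series. The video name, threshold, minimum blob area, and partition boundary are illustrative assumptions, not a description of the actual lab setup.

    # Minimal sketch: count "atoms" (bright cat balls) on each side of a
    # partitioned table, frame by frame, from an overhead video.
    import cv2
    import numpy as np

    BRIGHTNESS_THRESHOLD = 180
    MIN_BALL_AREA = 50  # pixels; reject specks of noise

    cap = cv2.VideoCapture("cat_balls.mp4")
    left_counts, right_counts = [], []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, BRIGHTNESS_THRESHOLD, 255, cv2.THRESH_BINARY)

        # Each connected bright blob above the minimum area counts as one ball.
        n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        boundary = frame.shape[1] / 2  # vertical partition at half the table width
        left = right = 0
        for i in range(1, n_labels):  # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= MIN_BALL_AREA:
                if centroids[i][0] < boundary:
                    left += 1
                else:
                    right += 1
        left_counts.append(left)
        right_counts.append(right)
    cap.release()

    # Tracking centroid displacements between frames would additionally let you
    # flag balls whose batteries have failed and exclude them from the analysis.
    print(np.bincount(left_counts))  # histogram of occupation numbers vs. binomial prediction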

What do you think? Is this the future? Or have these techniques already been implemented by lab demonstrators and I'm just showing my age?

Friday, December 30, 2022

2022 in review

Quite a lot happened this year:

1. Travel has returned to pre-covid normalcy, and I even had the chance to attend an in-person conference in Korea. Online is no substitute for the discussions that take place in the breaks between talks. I am glad that our students have also had the chance to travel abroad for inspiring conferences (ICOAM and QTML).

2. In academia it is hard to say no - we are always enticed by opportunities to get another paper, get more citations, increase our h-index. In the first half of the year I was incredibly overworked, supervising several PhD students while trying to find time to finish my own projects. After finishing my two overdue review articles in July I decided to cut back on commitments so I would have time to properly supervise students. This was a great success, and it's quite liberating not having to care about getting just one more paper in PRL/Nature/whatever.

3. I have now worked a full year as a remote editor for Physical Review A, handling over 300 submissions. This has been a great learning experience and has given me a better appreciation for how peer review can improve the quality and rigor of research articles. Sadly it is a minority of researchers who are willing to offer their time to provide well-crafted, thoughtful reports. It is promising to see that publishers including APS and Optica are providing more resources for referees, particularly early career researchers. It would be good to see referee training integrated directly into graduate research programs.

4. Machine learning models for image generation (such as Stable Diffusion) and text generation (ChatGPT) are going to change the world. There's no putting the genie back into the bottle now that anyone can download the trained model weights in a few minutes and run them on their own personal computer (InvokeAI doesn't even require a high end GPU!). Some professions such as graphic artists will be irrevocably changed. Still, the models are not perfect and they often fail in subtle and unpredictable ways, requiring human vetting. Thus, at least in the near term they will be primarily used to enhance productivity, not destroy entire professions.

5. In quantum computing, the most exciting developments for me were several groups proposing efficient classical algorithms for spoofing the results of random quantum circuit sampling experiments and debates over quantum supremacy using quantum topological data analysis.

Stay tuned next year for more on flat bands, Weyl semimetals, (quantum) machine learning, quantum scars, and more blogging. Happy 2023!

Friday, December 31, 2021

The year in review

Some thoughts to end 2021:

1. The scientific impacts of covid have become more noticeable to me. Last year many theorists used the lockdowns to finish their ongoing projects. This year, the increased isolation and lack of in-person discussions has stymied creativity and new ideas. Last year in Singapore (and Korea) we were able to come to the office all the time without much disruption, the main limitation being that seminars were held online. This year we've had to work from home for about 4 months. The impact of this is worse for newer graduate students - for them this is the sad normal. I appreciate the in-person discussions since returning to the office last week.

2. Despite optimistic claims by conference organisers, in-person international conferences still seem to be a long way off. The killer is the need for pre-departure covid testing and with it the prospect of having your trip extended or delayed by weeks. Domestic conferences will need to fill the gap in the interim.

3. I applied for some grants but was not successful. This might be a blessing in disguise; with the covid restrictions it's hard to bring new hires into Singapore, and apart from conference travel research expenses for theorists are minimal. However, the lack of job security that comes with ongoing grants is a bummer. One big issue for senior postdocs in Singapore is that more permanent positions require a track record of successful grants, while to apply for most grants here you need a permanent position...

4. Science-wise, this year I've learnt a lot about (quantum) machine learning, quantum computing on the cloud, topological (Jackiw-Rossi) defect modes, applications of topological data analysis to physics (review article coming!), and classical shadows of quantum states. I have quite a few works in progress which will hopefully come to fruition next year.

5. Delegation is still a challenge, but I am slowly improving. Apologies to all my collaborators whose projects I've held up.

6. I started blogging. Writing posts was hard at first but has become a lot easier as the year progressed. Thanks to everyone who keeps reading and I hope the material is useful in some way. Comments on posts are always welcome.

Happy 2022!

Thursday, December 9, 2021

Optics in 2022 & Beyond

Posting is a bit less frequent than usual since I'm on holiday.

My thoughts on what will be big in topological photonics next year have been published in the December issue of Optics & Photonics News. Previously I posted some initial thoughts on hot emerging topics. In the end I chose topological lasers and photonic crystal waveguides as directions likely to see increased attention in the coming year, and indeed interesting preprints on topological photonic crystal waveguides and polariton lasing have appeared since I submitted my paragraph.

In a broad sense, all of the predictions this year highlight advances in light sources and their potential applications in imaging and communication systems.

Wednesday, October 27, 2021

Optics in 2022 and Beyond

I was recently asked to contribute a short paragraph to Optics & Photonics News with predictions on what will be the biggest advances in topological photonics in 2022. Here are my initial thoughts (to be refined):

Prominent trends in 2021 have been:

  • Demonstrations of topological defect modes including Jackiw-Rossi modes and free-form disclination waveguides.
  • Increased interest in topological designs among researchers who are not specialists in topology, who are studying important questions including how to quantify scattering losses of topological waveguides (e.g. due to sidewall roughness), how to efficiently couple between topological and non-topological modes, and how to design functional components such as power splitters, directional couplers, and absorbers using topological modes.
  • Many theoretical and experimental studies of nonlinear and quantum effects in topological systems, including topological lasers and frequency combs.
  • Optical fiber loops, along with 3D printed waveguide arrays and photonic crystals, have emerged as flexible new platforms for implementing topological models. Most prominently, fiber loops have enabled the first demonstrations of various non-Hermitian topological effects. At the same time, the surging interest in quantum computing (in particular photonic approaches) means that equipment for probing topological wave systems with quantum states of light should become cheaper and more accessible in the near future.
  • Links between topological photonics and other hot topics (bound states in the continuum, structured light, singular optics, transformation optics, leaky mode theory) are becoming better-appreciated.

Already a few years ago there were discussions at conferences on how the field is starting to mature, meaning that some useful practical applications of topological photonics need to be demonstrated to sustain interest among researchers, high impact journal editors, and funding agencies. Therefore I think in the coming year there will be a growing focus on demonstrating topological waveguides and cavities with superior performance compared to conventional designs using well-established figures of merit.

I would like to focus on a specific application (e.g. lasers or integrated waveguides) to give my final paragraph more of a punch.

Did I overlook any important lines of research? I welcome comments and criticism.