Friday, December 30, 2022

2022 in review

Quite a lot happened this year:

1. Travel has returned to pre-COVID normalcy, and I even had the chance to attend an in-person conference in Korea. Online is no substitute for the discussions that take place in the breaks between talks. I am glad that our students have also had the chance to travel abroad for inspiring conferences (ICOAM and QTML).

2. In academia it is hard to say no: we are always enticed by opportunities to get another paper, more citations, a higher h-index. In the first half of the year I was incredibly overworked, supervising several PhD students while trying to find time to finish my own projects. After finishing my two overdue review articles in July, I decided to cut back on commitments so I would have time to properly supervise students. This was a great success, and it's quite liberating not having to care about getting just one more paper into PRL/Nature/whatever.

3. I have now worked a full year as a remote editor for Physical Review A, handling over 300 submissions. This has been a great learning experience and has given me a better appreciation for how peer review can improve the quality and rigor of research articles. Sadly, only a minority of researchers are willing to offer their time to provide well-crafted, thoughtful reports. It is promising to see that publishers including APS and Optica are providing more resources for referees, particularly early-career researchers. It would be good to see referee training integrated directly into graduate research programs.

4. Machine learning models for image generation (such as Stable Diffusion) and text generation (ChatGPT) are going to change the world. There's no putting the genie back into the bottle now that anyone can download the trained model weights in a few minutes and run them on their own personal computer (InvokeAI doesn't even require a high-end GPU!). Some professions, such as graphic design, will be irrevocably changed. Still, the models are not perfect and often fail in subtle and unpredictable ways, requiring human vetting. Thus, at least in the near term, they will primarily be used to enhance productivity, not destroy entire professions.

5. In quantum computing, the most exciting developments for me were several groups proposing efficient classical algorithms for spoofing the results of random quantum circuit sampling experiments, and the debate over whether quantum topological data analysis can deliver a quantum speedup.

Stay tuned next year for more on flat bands, Weyl semimetals, (quantum) machine learning, quantum scars, and more blogging. Happy 2023!
