Friday, July 30, 2021

Quantum simulation with cold atoms and superconducting qubits

Two interesting preprints on quantum simulation appeared on arXiv today:

1. Thermalization dynamics of a gauge theory on a quantum simulator, by the USTC team and collaborators. This experiment uses a 71-site optical lattice of cold atoms to simulate the quench dynamics of a one-dimensional U(1) gauge theory. I am not very familiar with U(1) gauge theory and found this paper an informative introduction to the topic, since it translates the physics into the more familiar setting of the Bose-Hubbard model. Essentially, the study considers a 1D lattice comprising two mutually detuned sublattices. In the limit where the inter-site hopping is much weaker than the detuning, and the local atom-atom interaction is resonant with the detuning, one obtains an effective Hamiltonian with a correlated hopping term: single-particle hopping between the two sublattices is forbidden and only pairwise hopping can occur. In the limit of strong interactions, the dynamics becomes constrained to a subset of the full Hilbert space: the deeper sublattice can only be occupied by 0 or 2 particles, while the shallower sublattice can only be occupied by 0 or 1 particles. This realizes an analogy with U(1) gauge theory by labelling the shallower sublattice as the "matter" field and the deeper sublattice as the "gauge" field. The experiments then study thermalization in this model when an initial state with uniform "matter" density is quenched. Thus, the dynamics of the Bose-Hubbard model in complex lattices can be mapped onto more exotic gauge fields. (A toy sketch of the pair-hopping mechanism appears after this list.) It will be interesting to consider this direction further, and whether similar ideas can be realized in photonic lattices with weak and/or mean-field interactions.

2. Observation of Time-Crystalline Eigenstate Order on a Quantum Processor, by the Google AI team and collaborators. This work studies time-crystalline order, which was previously observed using other platforms. Quantum processors are natural for studying periodically-driven phases such as these, owing to the description of their dynamics in terms of sequences of Floquet operators. The main innovation in this work appears to be the introduction of continuously-tunable two-qubit CPhase gates, which enables the implementation of strong tunable disorder in an effective Ising interaction between neighbouring qubits. The other two terms comprising the periodic driving are simple single-qubit rotations. Using the flexible tunability of their quantum processor, the authors are able to study the stability of the time-crystalline order with respect to changes in the system parameters, its scaling with the system size (considering a linear chain of up to 20 qubits), and the properties of the entire spectrum of the system. (A toy Floquet simulation in this spirit is also sketched below.)
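As a toy illustration of the pair-hopping mechanism in the first preprint, here is a minimal exact-diagonalization sketch of my own (not the paper's 71-site model): a three-site Bose-Hubbard chain with a deep central "gauge" site, where choosing U ≈ 2Δ makes a doublon on the gauge site resonant with one particle on each neighbouring "matter" site, while single-particle hops remain off-resonant. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

J, Delta = 1.0, 20.0   # weak hopping, strong detuning (J << Delta)
U = 2 * Delta          # interaction chosen to make pair transfer resonant

# Fock basis |n0, n1, n2> with 2 bosons; site 1 is the deep "gauge" site
basis = [(n0, n1, 2 - n0 - n1) for n0 in range(3) for n1 in range(3 - n0)]
index = {s: i for i, s in enumerate(basis)}
dim = len(basis)

H = np.zeros((dim, dim))
for i, state in enumerate(basis):
    n0, n1, n2 = state
    # detuning -Delta per boson on the deep site, interaction U n(n-1)/2
    H[i, i] = -Delta * n1 + 0.5 * U * sum(n * (n - 1) for n in state)
    ns = list(state)
    # nearest-neighbour hops 0<->1 and 1<->2 (both directions)
    for a, b in [(0, 1), (1, 0), (1, 2), (2, 1)]:
        if ns[a] > 0:
            m = ns.copy(); m[a] -= 1; m[b] += 1
            j = index[tuple(m)]
            H[j, i] += -J * np.sqrt(ns[a] * (ns[b] + 1))

# start with a doublon on the deep (gauge) site: |0, 2, 0>
psi0 = np.zeros(dim); psi0[index[(0, 2, 0)]] = 1.0
for t in np.linspace(0.0, 40.0, 9):
    psi = expm(-1j * H * t) @ psi0
    p_pair = abs(psi[index[(1, 0, 1)]]) ** 2    # resonant pair transfer
    p_single = abs(psi[index[(1, 1, 0)]]) ** 2  # off-resonant single hop
    print(f"t={t:5.1f}  P(1,0,1)={p_pair:.3f}  P(1,1,0)={p_single:.3f}")
```

Running this shows the doublon coherently converting into one particle on each matter site via second-order pair hopping, while the single-hop population stays suppressed at order (J/Δ)².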
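And in the spirit of the second preprint, a minimal, purely illustrative Floquet simulation of a small disordered Ising chain driven by imperfect π-pulses; this is not Google's circuit, but it exhibits the period-doubled ⟨Z⟩ response that signals time-crystalline order. All couplings and the pulse error g are assumed values.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 8
g = 0.97   # imperfect pi-pulse; g = 1 would be an exact spin flip

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def embed(op, site):
    """Embed a single-qubit operator at `site` in the N-qubit space."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

Zs = [embed(Z, k) for k in range(N)]
Hx = sum(embed(X, k) for k in range(N))

# strongly disordered Ising couplings and random longitudinal fields
phis = rng.uniform(np.pi / 4, 3 * np.pi / 4, N - 1)
hs = rng.uniform(-np.pi, np.pi, N)
Hzz = sum(phi * Zs[k] @ Zs[k + 1] for k, phi in enumerate(phis))
Hz = sum(h * Zk for h, Zk in zip(hs, Zs))

# one Floquet period: global pulse, Ising interaction, random fields
UF = expm(-1j * (np.pi * g / 2) * Hx) @ expm(-1j * Hzz) @ expm(-1j * Hz)

psi = np.zeros(2 ** N, dtype=complex)
psi[int("01010101", 2)] = 1.0   # Neel-like initial product state

for t in range(1, 13):
    psi = UF @ psi
    z0 = np.real(np.vdot(psi, Zs[0] @ psi))
    print(f"period {t:2d}: <Z_0> = {z0:+.3f}")   # sign flips every period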


Monday, July 26, 2021

How to set research goals

This is a question that is often asked by PhD students. Research is typically a highly nonlinear process. It is rare to sit down, identify a problem and the appropriate approach to solving it, and then succeed on the first attempt. Therefore, rather than focusing on immediately solving some big outstanding problem, you should start by solving smaller, specific problems in order to build up your skill set. Let me give a few examples from my PhD thesis:

1. Our collaborators performed experiments using optically-induced photonic lattices, and we had a new postdoc in the group who had worked on photonic graphene-like lattices during his PhD. So we wanted to explore graphene physics using optically-induced photonic lattices. The effect we wanted to see required an ideal conical band structure in the vicinity of graphene's Dirac points, but the experimental platform had strong anisotropy between the vertical and horizontal axes, which broke the required symmetry. This motivated me to look at other classes of lattices with square symmetry exhibiting similar conical dispersion relations. I stumbled upon the face-centred square lattice (also known as the Lieb lattice), which has a conical dispersion relation with an intersecting flat band, and started studying its properties. This is how I first encountered flat band physics, which has now blossomed into a large and active research field!

2. Parity-time symmetry and non-Hermitian photonics were topics starting to attract growing interest. As an introduction to this field, I studied how parity-time-symmetric perturbations affect the properties of vortex solitons in simple ring-shaped photonic lattices, building on the topic of my honours thesis (vortex solitons in Hermitian ring lattices). While we published a short letter on our findings, I did not pursue non-Hermitian photonics further during my PhD. But it did end up being background for work I did a few years later on the combination of topology with non-Hermitian systems, which is now also a booming research direction.

3. In the final year of my PhD I was interested in learning more about quantum photonics and topological systems. Spontaneous parametric down-conversion in nonlinear waveguide lattices was a topic being studied theoretically and experimentally by colleagues in our department, which motivated me to study the effect of topological edge states on this process. While the calculations were quite elementary in hindsight, the project served as a valuable first introduction to topological effects in lattices and quantum photonics.

The important take home message from all of these examples is that we started out with a rather small and simple problem we wanted to solve, typically involving the combination of two distinct topics or sub-fields. We did not anticipate the eventual important applications at the outset. So don't agonize so much about whether your current research project is solving one of the big open problems in your field. What really matters is whether it allows you to pick up new skills and hopefully identify fresh approaches to solving the big problems!

Friday, July 23, 2021

arXiv highlights

Some recent topological preprints which caught my attention:

-Two works on designing lattice models which have a uniform Berry curvature throughout the Brillouin zone. A constant Berry curvature is seen as a means of creating lattice model analogues of the fractional quantum Hall effect. The first preprint, by Bruno Mera and Tomoki Ozawa, proposes a systematic construction in which the Berry curvature becomes flatter as more bands (or equivalently, sublattices) are added to the tight binding model. The second preprint, by Daniel Varjas and collaborators, proposes an iterative procedure for flattening the Berry curvature in tight binding models with at least three bands, but also provides numerical evidence that a uniform Berry curvature does not necessarily lead to a faithful reproduction of continuum fractional quantum Hall states. (A numerical sketch of how Berry curvature uniformity can be checked follows this list.)

-Jiangwen Ma and collaborators report a topological laser based on a Dirac vortex cavity. The idea, recently proposed in two Nature Nanotechnology papers, uses a photonic crystal with two independently-tunable effective mass parameters to construct a Jackiw-Rossi zero mode. This midgap zero mode has more favourable scaling properties compared to conventional band edge modes of two-dimensional photonic crystals.

-Sungjoon Park and collaborators report on the unsupervised learning of topological phase diagrams using topological data analysis. Their idea is to first apply a continuous deformation to the Bloch Hamiltonian so that the Bloch functions become more uniformly distributed throughout the Hilbert space while preserving their topology. This allows persistence diagrams to more reliably quantify the shape of the Bloch functions. The information encoded in the persistence diagrams can then be fed into clustering algorithms to distinguish different topological phases. Nice to see topological data analysis techniques attracting more attention among physicists! We also have another preprint on topological data analysis submitted to arXiv, but for some reason it is stuck in the moderation queue...

-Optical nonlinearities can be used to induce quantized Thouless pumping of solitons

-3D optical skyrmions generated in the lab

-Nontrivial non-Hermitian topological phases are not limited to discrete tight binding models, but can also occur in continuous photonic crystals
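On the Berry curvature point above: here is a minimal sketch of my own (not from either preprint) using the standard Fukui-Hatsugai link-variable method to compute the Berry curvature of a band on a discretized Brillouin zone, with a final line quantifying how far the curvature is from uniform. The Qi-Wu-Zhang two-band model is used purely as a convenient Chern-insulator test case.

```python
import numpy as np

def h(kx, ky, m=-1.0):
    """Qi-Wu-Zhang Bloch Hamiltonian: a simple two-band Chern insulator."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

Nk = 60
ks = np.linspace(0, 2 * np.pi, Nk, endpoint=False)

# lower-band Bloch eigenvectors on the k-grid
u = np.empty((Nk, Nk, 2), dtype=complex)
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        _, vecs = np.linalg.eigh(h(kx, ky))
        u[i, j] = vecs[:, 0]

def link(a, b):
    """U(1) link variable between neighbouring Bloch vectors."""
    z = np.vdot(a, b)
    return z / abs(z)

# Berry flux through each plaquette from the gauge-invariant loop product
F = np.empty((Nk, Nk))
for i in range(Nk):
    for j in range(Nk):
        ip, jp = (i + 1) % Nk, (j + 1) % Nk
        w = (link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
             * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
        F[i, j] = np.angle(w)

print(f"Chern number ~ {F.sum() / (2 * np.pi):.3f}")
print(f"curvature non-uniformity (std/|mean|): {F.std() / abs(F.mean()):.2f}")
```

The proposals in the two preprints can be thought of as modifying the tight binding model so that the printed non-uniformity measure is driven towards zero.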

Monday, July 19, 2021

Error correction on Google's superconducting quantum processor

Last week the Google team reported in Nature the exponential suppression of errors using quantum error correction. I first heard about it from the media coverage, so I won't bother summarising the entire paper, but will just note a few thoughts on this publication:

1. Even small-scale error-corrected quantum computing is still a LONG way away. The demonstration of exponential error suppression was based on a 1D error-correcting code, which only corrects one of the two dominant error sources on the device. The measured error rates in the proof-of-concept demonstration of the 2D surface code (able to correct both types of errors) are close to, but not yet below, the threshold at which the logical error rate is reduced by adding more qubits. Moreover, to reach a "practical" error rate at which only 1000 physical qubits are required to encode a single logical qubit, noise rates will need to be reduced by more than a factor of 10 (see the scaling sketch after this list).

2. Error correction is slow. Each round of error correction requires read-out of the ancilla qubits, which on Google's device takes about 1 microsecond. Moreover, this time is limited by the slowest readout time among all of the ancilla qubits. Therefore, efforts will need to be devoted towards reducing both this time, and the variability of the read-out times from qubit to qubit.

3. Fig. 2d illustrates large-scale correlated noise observed in the device, likely due to cosmic rays. This affects about 0.4% of the measurements, which were removed via post-selection. Running useful quantum algorithms such as Shor's algorithm will require hours or days, so schemes to minimize or mitigate these cosmic ray-induced correlated errors will be required.

4. The peer review file for this paper is available and makes for an interesting read, not only for the expert referees' perspectives, but also because the Google team mentions some of their ongoing research directions in their rebuttal letter.
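Returning to point 1, a back-of-the-envelope illustration of why the physical noise rate matters so much. The commonly quoted scaling for a distance-d code is ε_d ≈ C Λ^{-(d+1)/2}, where Λ is the error-suppression factor gained each time the code distance increases by two. The constants below are assumptions for illustration, not measured values from the paper.

```python
# Illustrative only: prefactor C and suppression factors are assumed values.
C = 0.1
for Lam in (3.0, 10.0, 30.0):   # error-suppression factor per d -> d+2
    for d in (3, 11, 25):       # code distance
        eps = C * Lam ** (-(d + 1) / 2)
        print(f"Lambda={Lam:5.1f}  d={d:2d}  logical error/round ~ {eps:.1e}")
```

A modest improvement in Λ (driven by lower physical error rates) dramatically reduces the code distance, and hence the physical qubit count, needed to reach a target logical error rate.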

Tuesday, July 13, 2021

A supervised machine learning problem with a provable quantum speed-up

Quantum machine learning is touted as one of the big potential applications of quantum computers. Within quantum machine learning, kernel methods are attracting increasing interest as a means of circumventing the barren plateau problem, which makes the training of variational neural network-like quantum circuits challenging, if not impossible.

The idea behind the kernel trick is to map data to a high-dimensional feature space. Owing to the curse (or blessing, in this case) of dimensionality, complex datasets can become linearly separable in the high-dimensional space, allowing one to perform supervised classification using simple linear models. Quantum circuits generate states belonging to a huge (exponentially large) Hilbert space, suggesting that they may be naturally suited to computing high-dimensional feature maps. At least, this is what many researchers hope.
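To make the kernel idea concrete, here is a minimal sketch in which a toy "quantum" feature map is simulated directly as state vectors and fed to a standard SVM; the encoding ansatz and the toy labelling rule are my own assumptions, not any paper's construction.

```python
import numpy as np
from sklearn.svm import SVC

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def feature_state(x):
    """|phi(x)>: angle-encode two features on two qubits, then entangle."""
    psi = np.kron(ry(x[0])[:, 0], ry(x[1])[:, 0])   # act on |00>
    return CNOT @ psi

def kernel(A, B):
    """Fidelity kernel K(x, x') = |<phi(x)|phi(x')>|^2."""
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA.conj() @ SB.T) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(200, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)  # toy labels

Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
clf = SVC(kernel="precomputed").fit(kernel(Xtr, Xtr), ytr)
print("test accuracy:", clf.score(kernel(Xte, Xtr), yte))
```

On real hardware the kernel entries would instead be estimated by sampling overlap circuits; only the precomputed kernel matrix is ever handed to the classical SVM.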

The challenge is that a high-dimensional feature space is a necessary but not sufficient condition for accurate classification. One therefore needs to come up with a kernel that is not only hard to compute on a classical computer, but is also able to achieve a classification performance superior to any efficiently classically-computable kernel. This is a daunting task, and a recently published study by researchers at Google suggests that such quantum kernels for classical machine learning tasks may be hard to find.

Yesterday researchers from IBM published a study in Nature Physics which is, to my knowledge, the first to propose a machine learning problem for which a quantum kernel provides a rigorous advantage over classical kernel methods. The clever trick used by the authors is to consider a feature map that uses Shor's algorithm, with its proven quantum advantage, as a subroutine. The authors show that this feature map can be used to efficiently classify a dataset whose labels are related to the classically-hard discrete logarithm problem. In the original space the data appear randomly distributed, so classical algorithms can do no better than random guessing. The quantum feature map, on the other hand, leads to a clear hyperplane separating the data classes.
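A toy version of this kind of labelling rule, sketched below with a brute-force discrete logarithm that is only feasible because the numbers are tiny; the real construction uses cryptographically large primes, and my half-interval convention is only loosely modelled on the paper's.

```python
# Toy discrete-log labelling: +1 if log_g(x) falls in a fixed half-interval.
p, g = 23, 5                                     # small prime, generator of Z_p^*
dlog = {pow(g, e, p): e for e in range(p - 1)}   # brute force: only works for tiny p

def label(x, s=3):
    """Label by which half of the exponent range log_g(x) lands in."""
    return +1 if (dlog[x] - s) % (p - 1) < (p - 1) // 2 else -1

for x in range(1, 8):
    print(x, label(x))
```

In the original representation x the labels look random, but in the exponent representation (which the quantum feature map can access efficiently) they are separated by a trivial threshold.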

Like many recent rigorous demonstrations of quantum speed-ups for machine learning, the problem considered by the authors was chosen specifically for its ability to exhibit a quantum speed-up, and does not have any known practical applications. It is interesting to note, however, that the hardness of the discrete logarithm problem underlies certain cryptographic protocols. So this work may be a promising step towards useful applications of quantum machine learning. Worth a closer read!

Thursday, July 8, 2021

ILJU POSTECH MINDS Workshop on Topological Data Analysis and Machine Learning

As a newcomer to the field of topological data analysis (TDA), I found the ILJU POSTECH MINDS Workshop on Topological Data Analysis and Machine Learning held this week to be highly informative.

I was not able to attend all the talks live due to other commitments and will miss the last day, but luckily the recorded talks are available for viewing later. Here are summaries of the talks I have seen so far:

Paul Rosen (University of South Florida) talked about analyzing and visualizing graphs using TDA. First, TDA can provide optimized initial layouts for spring-mass visualization of graphs, allowing features such as clusters and cycles to be more easily resolved. Second, persistent homology combined with dimensional reduction (manifold learning) can be used to detect events or anomalies in time varying graphs. As an example, he considered the time-varying graph describing email interactions between members of a large research institute. Finally, he showed how Mapper can be used to generate simplified representations of large complex graphs.
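For readers unfamiliar with Mapper, here is a minimal sketch using the kmapper library on a toy point cloud; this is my own example, not the speaker's code, and the lens and clustering choices are arbitrary assumptions.

```python
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN

# toy data: two noisy concentric circles (two loops for Mapper to find)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 400)
r = np.where(rng.random(400) < 0.5, 1.0, 2.5)
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
X += 0.05 * rng.standard_normal(X.shape)

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection="sum")   # simple 1D lens function
graph = mapper.map(lens, X,
                   cover=km.Cover(n_cubes=12, perc_overlap=0.3),
                   clusterer=DBSCAN(eps=0.3))
mapper.visualize(graph, path_html="mapper_output.html")
```

The output is a small graph summarizing the shape of the data, which is the kind of simplified representation described in the talk.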

Gary Shiu (University of Wisconsin) presented applications of TDA to some problems in physics, including quantifying the structure of the string landscape, measuring the large scale structure of the universe, and identifying phase transitions in spin models. For large datasets in Euclidean space, it's much more efficient to use the alpha complex (which has polynomial scaling with the number of data points) than the Vietoris-Rips complex (which scales exponentially). Persistent homology is useful because it can directly identify nonlocal physical features of interest, such as vortices and filaments.
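As a concrete illustration of the alpha-versus-Rips point, a small sketch of my own using the gudhi library (assuming it is installed); the point cloud is random rather than cosmological, and the Rips cutoff is an arbitrary assumption.

```python
import numpy as np
import gudhi

pts = np.random.default_rng(0).random((300, 3))   # toy 3D point cloud

# alpha complex: size grows only polynomially with the number of points
alpha = gudhi.AlphaComplex(points=pts).create_simplex_tree()
diag = alpha.persistence()
print("alpha complex simplices:", alpha.num_simplices())

# Vietoris-Rips complex on the same points blows up much faster
rips = gudhi.RipsComplex(points=pts, max_edge_length=0.4)
rips_st = rips.create_simplex_tree(max_dimension=2)
print("Rips complex simplices:", rips_st.num_simplices())

# count the 1-dimensional features (loops) found by the alpha complex
loops = [(b, d) for dim, (b, d) in diag if dim == 1]
print("H1 features:", len(loops))
```

For the large datasets mentioned in the talk, this size difference is what makes the alpha complex the practical choice.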

Bei Wang (University of Utah) gave a talk on how TDA can be used to understand the behaviour of deep neural networks. Focusing on popular neural networks used for image classification, TDA can be used to analyze the shapes of the activation vectors at each layer and construct effective decision trees, showing, for example, how the neural networks distinguish deer from horses by the presence of antlers. This is a clever step towards making deep neural network models more interpretable.

Moo K. Chung (University of Wisconsin - Madison) has used persistent homology to distinguish different configurations of the COVID-19 virus's spike proteins. He discussed how the large size of the molecule and the large number of topological features require methods to simplify the information encoded in the persistence diagrams. One method, persistence images, based on coarse-graining the persistence diagram, is unsuitable here because it omits important information carried by a few of the features. It is preferable to use alternatives such as persistence landscapes, which retain the full information while converting it into a vector space form, enabling different persistence diagrams to be systematically compared or fed into machine learning algorithms.
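For concreteness, a small sketch of vectorizing a persistence diagram with landscapes via gudhi's representations module (assuming a gudhi build that includes it); the diagram below is a toy example, not spike-protein data.

```python
import numpy as np
from gudhi.representations import Landscape

# a toy persistence diagram: an array of (birth, death) pairs
diag = np.array([[0.0, 1.2], [0.2, 0.5], [0.4, 2.0]])

# sample 3 landscape functions on a grid of 50 points each
landscape = Landscape(num_landscapes=3, resolution=50)
vec = landscape.fit_transform([diag])
print(vec.shape)   # -> (1, 150): a fixed-length feature vector
```

Unlike coarse-grained persistence images, the landscape construction keeps track of every feature (up to the sampling resolution), and the resulting vectors can be averaged or passed directly to standard machine learning models.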

Naoya Tanabe (Kyoto University Hospital) talked about how persistent homology can be used to detect lung diseases in 3D CT images. The usual manual approach is based on expert analysis of 2D lung images. Deep neural network-based machine learning can be used to carry out supervised classification of these images, but it requires large sets of training data and the underlying model lacks interpretability. Persistent homology is a promising alternative, as it is able to directly recognise the relevant local image features used to identify the diseases of interest, namely localized clusters and voids in the 3D intensity images. This allows accurate automatic diagnosis using small training sets and simple rule-based decision trees.

Kelin Xia (Nanyang Technological University) provided a survey of his work on the use of TDA to characterize the shapes of complex molecules, with applications to drug design. He emphasized that in the case of point clouds constructed from positions of the atoms comprising the molecule, every feature birth and death scale is meaningful, corresponding to characteristics such as bond lengths and ring sizes. Feature sizes are typically divided into two groups, corresponding to short and long range features of the molecule. The former provides a fingerprint of which atoms and bonds are present in the molecule. Changes in the long range features are sensitive to chemical reaction dynamics. The multiscale information captured by persistent homology forms ideal features for use in machine learning-based drug design and discovery. He also mentioned recent work on generalizing persistent homology to hypergraphs describing more complex many-body interactions, which is relevant to collaboration networks.

Monday, July 5, 2021

Creative research takes time

Nowadays funding agencies prioritise research with immediate applications. Grants for early career researchers provide funding for only a few years, requiring funded projects to have a quick and clear path to fruition. However, in basic research the eventual applications are often not those that were first envisaged.

Perhaps my favourite research project was our study of topological effects in "leaky" optical systems, published earlier this year in Nature Physics. See here for a popular summary. I am proud of this work not merely because the results were eventually published in a glossy journal, but rather because it was a pure, curiosity-driven project involving short bursts of inspiration and progress separated by many months.

This project originated from a grant application we prepared in August 2018. The host institution had expertise in photonic crystal fibers, so we were interested in whether the photonic crystal fiber platform could be used to observe interesting topological phenomena. However, photonic crystal fibers are leaky wave systems which continuously radiate energy into their environment, so it was not clear whether our experience in designing topological states for bound modes would be useful in this setting.

We started working on the idea by implementing well-known approaches for numerically computing leaky modes with the help of a tutorial article, studying some simple one-dimensional examples to gain some intuition for the platform. 

By April 2019 we had developed a tight binding-like formalism allowing us to design and study topological wave effects in leaky wave systems, but we did not know what our formalism could be useful for apart from computing the decay rates of leaky topological edge states. 

After a few conferences and many discussions with other researchers, in October 2019 we finally stumbled upon the neat idea of using radiation losses to controllably fill energy bands and measure their bulk topological invariants. 

We finished a draft manuscript by the end of November 2019 and shared it with some collaborators. Their feedback was that the idea was interesting, but it wasn't clear how easy it would be to test our theory experimentally.

We spent a few more months carrying out numerical simulations of possible designs and polishing the manuscript, finally submitting it in May 2020. Our collaborators' impression was echoed by the referees, who requested more detailed and explicit simulations of our designs in a revised manuscript, which was finally accepted in December 2020.

Take home messages:

-Find inspiration for new lines of research in old reviews and tutorials, not the latest papers published in high impact journals.

-If you get stuck, don't be afraid to put the project aside for a few months and work on other directions.

-Creative research takes time - 2.5 years between the initial idea and eventual publication in this case (a theory project). Experimental projects can take even longer.