Thursday, September 30, 2021

IPS Meeting Day 1

Today was the first day of the IPS Meeting, an annual conference organised by the Institute of Physics Singapore. For many of us in Singapore, this was the first in-person conference we have been able to attend since the pandemic started. The many covid restrictions mean it's not the same as pre-pandemic times (mingling between participants is not allowed, and everyone must stick to the same session room for the whole day), but it is still a lot better than an online conference. Big thanks to the organisers for making it run smoothly despite the recent changes to the covid restrictions.

Here are the slides for my talk on applications of persistent homology to physics. I think for many of us (me included) it will take some time to get used to presenting in person again...

Some highlights from Day 1:

  • Prof. Jie Yan (NUS) talked about his group's work on developing more sensitive covid antigen and antibody tests, sharing data on how his own and his team members' antibody levels have decayed over time following their vaccination doses...
  • Prof. Ranjan Singh (NTU) gave an overview of the importance of terahertz interconnects for 6G communication technologies, including work from last year on terahertz waveguides based on topological edge modes published in Nature Photonics.
  • Weikang Wu (NTU) explained how higher-order band crossings (i.e. parabolic and cubic crossings) can emerge in certain two-dimensional systems. For example, cubic crossings can be induced by spin-orbit coupling. arXiv preprint.
  • Jeremy Lim (SUTD) on how 3D Dirac semimetals can provide orders of magnitude more efficient high harmonic generation compared to 2D materials, limited mainly by propagation-induced dephasing of the light-induced current. arXiv preprint.
  • Xingran Xu (NTU) - Interaction-induced skin effect in exciton-polaritons. The non-Hermitian skin effect (extreme sensitivity of eigenvalues to boundary conditions) most often arises in Hamiltonians with non-reciprocal couplings, which are tricky to realize. By suitably pumping a polariton condensate (such that it exhibits the same nonlinear mode profile for open and periodic boundary conditions), one can also observe the non-Hermitian skin effect in the fluctuation modes of the condensate!
  • Udvas Chattopadhyay (NTU, now NUS) - Mode delocalization in a disordered photonic Chern insulator. The most well-known feature of Chern insulators is their protected edge states that traverse the bulk band gaps. Interestingly, nonzero Chern numbers also lead to bulk modes protected against localization in the presence of disorder, which can be observed using arrays of coupled microring resonators.
  • Anna Paterova (A*STAR) talked about quantum imaging, which enables measurement of objects in troublesome spectral regions (such as the molecular fingerprint region) using cheaper optical or infrared cameras, based on quantum interference between photon pairs generated by nonlinear crystals. Paper in Science Advances.

Looking forward to attending the Quantum Engineering sessions tomorrow!


Wednesday, September 29, 2021

Improved error correction and switching for photonic quantum computers

Error-protected qubits in a silicon photonic chip

This paper, just published in Nature Physics, reports the experimental implementation of photonic graph and hypergraph states to carry out some basic quantum information processing tasks (single-qubit rotations, state teleportation) using error-protected logical qubits and measurement-based quantum computation.

In the measurement-based quantum computation approach, one starts with a large-scale entangled state of light, which can be represented as a graph encoding the entanglement between the different qubits. Quantum gates are then implemented by applying a sequence of measurements and measurement-dependent (feed-forward) operations on the qubits. 
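As a toy illustration of the starting resource (my own sketch, not code from the paper), here is how a small linear graph state can be built numerically: every qubit starts in |+⟩, and each edge of the graph corresponds to one controlled-Z gate.

```python
import numpy as np

# Toy sketch (mine, not from the paper): build a 3-qubit linear cluster
# (graph) state. Each qubit starts in |+>, and each graph edge applies
# one controlled-Z (CZ) gate between the corresponding pair of qubits.

plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = np.kron(np.kron(plus, plus), plus)  # |+++>, 8 amplitudes

def cz(state, q1, q2, n=3):
    """Apply CZ between qubits q1 and q2 (qubit 0 = most significant bit)."""
    out = state.copy()
    for idx in range(len(state)):
        if (idx >> (n - 1 - q1)) & 1 and (idx >> (n - 1 - q2)) & 1:
            out[idx] *= -1.0
    return out

state = cz(state, 0, 1)  # edge between qubits 0 and 1
state = cz(state, 1, 2)  # edge between qubits 1 and 2
print(np.round(state * np.sqrt(8)))  # sign pattern [+ + + - + + - +]
```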

The combination of measurement and feed-forward operations allows for errors to be detected and compensated for as the computation is carried out. However, the correction of physical errors requires additional qubits. This is a bottleneck limiting the scalability of photonic quantum computers, which are built on the probabilistic generation of few-photon entangled states. Therefore it is important to reduce error rates to minimise this overhead.

In this study, the authors demonstrate how clever encoding schemes (encoding multiple qubits onto a single photon and using optimal graph states) allow one to reduce the logical error rates of basic quantum operations. As an example, the success probability of a small-scale phase estimation algorithm is increased from 63% to 93%.

The article's conclusion stresses several ongoing challenges that must be addressed for photonic quantum computers to reach a useful size. Many new challenges emerge when going from proof-of-principle demonstrations based on a few photons to large-scale circuits with thousands or millions of photons, including the need to integrate near-deterministic photon sources and detectors, and the development of ultra-fast low-loss integrated optical switches.

Switch networks for photonic fusion-based quantum computing

Improving the performance of integrated optical switching is the subject of a preprint by PsiQ that just appeared on arXiv. PsiQ is taking an ambitious approach to building a photonic quantum computer, aiming to skip noisy intermediate-scale quantum devices completely and build a million-qubit processor, partnering with GlobalFoundries to develop mass-producible components.

I like PsiQ's approach because it addresses the elephant in the room for all the companies pursuing devices based on superconducting qubits - how do you fit millions of qubits into a helium refrigerator?

For photonic quantum computers to be viable it will be essential to minimise losses. One significant source of losses is in active components such as electrically-controlled fast optical switches. 

This preprint analyzes improved schemes for creating different kinds of large-scale photonic switches using Mach-Zehnder interferometers and spatial and temporal multiplexing.


Monday, September 27, 2021

Working from home - again

Due to a rise in covid cases in Singapore, we are back to compulsory working from home as of today, likely for the next month. This is not easy for junior researchers, particularly PhD students: Zoom meetings are no substitute for face-to-face brainstorming in front of a whiteboard. Here are some tips to keep your research progressing through this period:

  • Maintain fixed "office hours" and even dress as you would to go into the office before starting work. A big challenge of the PhD journey is maintaining a work-life balance and preventing your project from taking over every waking moment. This is much harder to do when working from home, since the boundaries between work and leisure are blurred. 
  • Start your work day by writing a to-do list. You don't need to get everything done by the end of the day, but being able to tick off items as you go along provides a sense of progress even when you may be stuck on some big problem in your research.
  • Don't neglect brainstorming and speculative thoughts. It's good to jot down any in-progress ideas on paper or in a LaTeX file. Even if the ideas are not polished enough to discuss in an online meeting, you will have them in store for when we're back in the office.
  • Mix things up - don't spend all day on a single task - or you will get tired or frustrated. Make sure to keep up with the literature (e.g. daily arXiv postings), or even watch an online seminar or lecture (thanks to covid these are now readily available from many sources).
  • Don't hesitate to ask your supervisors any random questions you have. The barrier seems much higher when working from home (e.g. you may need to schedule a time to discuss over zoom), but we would much prefer to answer questions immediately than to have you struggling for hours or days to get unstuck.

That's all I have for now; it is the end of the work day and time to relax :-)

Wednesday, September 22, 2021

Metanano 2021

Last week I attended the virtual Metanano 2021 conference and presented an invited talk in the "Topological states in classical and quantum systems" session. Here are my talk slides. The conference organisers have kindly made the session recordings available on YouTube until 3rd October (link to schedule of talks).

The topological session allowed me to catch up on some recent developments in topological photonics that I haven't had time to closely follow. Highlights from the Thursday session I attended:

Baile Zhang reported on the observation of antichiral edge states using microwave gyromagnetic photonic crystals. In conventional topological photonic crystals, the chiral states on opposite edges propagate in opposite directions, ensuring local conservation of energy. A 2018 PRL paper showed how to design antichiral edge states that propagate in the same direction on both edges, with energy conservation provided by counter-propagating bulk modes. The antichiral edge states do not support topologically-protected transport (since they do not reside in a complete band gap), but they are nevertheless interesting as a means of extending the possibilities offered by chiral edge states.

Yidong Chong discussed topological defect states in photonic and acoustic lattices. By introducing defects such as dislocations and disclinations into the bulk of topological photonic crystals, one can create free-form waveguides with near-arbitrary trajectories. This enables the flexible creation of topological analogues of regular photonic crystal waveguides. Extending to 3D structures enables the design of topological waveguides for orbital angular momentum modes, with the propagation direction controlled by the handedness of the mode vorticity.

Jian-Hua Jiang reported on a closely related idea - the experimental observation of bulk-disclination correspondence in topological crystalline insulators. What's really interesting about this work is that bulk topological defects can be sensitive to exotic topological invariants (e.g. higher order topological phases associated with fractional charge modes) that cannot be distinguished using the more familiar bulk-edge correspondence. He also mentioned a recent preprint on multi-band topological pumping using magnetic defects in acoustic metamaterials.

Daniel Sievenpiper presented recent work on coupling light between topological and non-topological photonic crystal waveguides, and using terminations of topological waveguides to create collimated far-field radiation. This opens avenues in the direction of topological antennas.

There were other interesting-sounding talks that I missed due to other commitments and time zone differences; I hope to catch up on them while the recordings are still available.

Monday, September 20, 2021

Variational quantum algorithms are hard to train


Variational quantum algorithms

Variational quantum algorithms are an approach to solving hard optimisation problems using quantum computers. They encode the cost function describing the problem to be solved as the expectation value of a quantum Hamiltonian. A parameterized quantum circuit generates a trial solution to the problem, whose cost function is measured. Then, a classical computer trains the circuit parameters to reduce the cost function in order to find the optimal (lowest-cost) solution to the problem.

[Figure: schematic illustration of a variational quantum algorithm]
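To make the loop concrete, here is a minimal single-qubit toy version (my own sketch, not from any specific paper): the trial state is RY(θ)|0⟩, the "measured" cost is ⟨Z⟩ = cos θ, and a classical optimiser drives θ towards the minimum.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal toy version of the variational loop (illustrative only): the
# trial state is RY(theta)|0>, the cost is <Z> = cos(theta), and a
# classical optimiser adjusts theta. On real hardware the cost would be
# estimated from repeated measurements on the quantum processor.

def cost(theta):
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])  # RY(theta)|0>
    return float(psi @ np.diag([1.0, -1.0]) @ psi)                # <psi|Z|psi>

result = minimize(cost, x0=[0.1], method="COBYLA")
print(result.x, result.fun)  # converges to theta ~ pi, cost ~ -1
```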

Variational quantum algorithms are attracting a lot of interest owing to their ability to be run on near-term quantum processors. In particular, they have much lower circuit depths compared to algorithms designed for fault-tolerant quantum computers, and the iterative optimisation of the circuit parameters enables a robustness to errors in the individual quantum gates. However, there is no guarantee they will produce a good solution to the problem at hand, and in many cases there exist classical heuristic algorithms with provably better performance. In some cases, the problem encoding leads to a cost function described by a strongly-interacting or non-local quantum Hamiltonian for which finding the ground state (optimal solution) is provably hard to solve, even for a fault-tolerant quantum computer.

Training Variational Quantum Algorithms Is NP-Hard

An article just published in Physical Review Letters proves that, independent of the details of the quantum Hamiltonian encoding the problem, the classical optimisation part of variational quantum algorithms is NP-hard: the training landscape exhibits many sub-optimal local minima which can trap commonly-employed gradient-based training methods. To show this, the authors considered a continuous version of the NP-hard MaxCut problem, treating the quantum computer as an oracle that can efficiently generate and return expectation values of parameterised quantum states. Considering the simplest case, where the problem Hamiltonian is diagonal in the computational basis, is already sufficient to prove that variationally finding the optimal circuit parameters is NP-hard. Moreover, the optimisation is NP-hard even for small quantum computers with logarithmically many qubits and non-interacting Hamiltonians; the difficulty does not rely on the hardness of the quantum ground state problem.
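For concreteness, the standard MaxCut encoding (textbook material, not specific to this paper) maps each graph vertex to a spin and each cut edge to a satisfied term:

```latex
% Standard MaxCut encoding (textbook form, not specific to this paper):
% maximising the number of cut edges C(z) is equivalent to minimising
% the expectation value of -H_C over trial quantum states.
C(z) = \sum_{(i,j) \in E} \frac{1 - z_i z_j}{2}, \quad z_i \in \{-1, +1\}
\qquad \longrightarrow \qquad
H_C = \sum_{(i,j) \in E} \frac{1 - Z_i Z_j}{2}
```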

Perspectives

While this is definitely a sobering result for the field, rigorous results such as this are important in establishing the capabilities and limits of algorithms for near-term quantum computers and identifying possible methods to improve their performance. In this case, the hardness of the classical optimisation problem rests on the existence of many sub-optimal local minima; therefore, developing heuristic strategies for generating initial guesses that are sufficiently close to good solutions should be a priority. Another promising point the authors raise is that this hardness result is similar to the hardness of the Hartree-Fock method routinely employed in quantum chemistry; even though that related problem is NP-hard, it is still useful as a starting point for many practical quantum chemistry calculations. We can hope to draw inspiration from the methods for obtaining good initial guesses that quantum chemists have spent decades perfecting.

Questions to ask before trying to solve your hard classical optimisation problem on cloud quantum computers

  • Do my problem encoding and parameterized circuit design avoid the barren plateau problem?
  • Do I have a method to choose a reasonable initial guess for my circuit parameters, e.g. using perturbation theory or a mean field solution? Random initial guesses are highly unlikely to converge.
  • How many qubits do I need to solve problem instances that are too large for existing classical solvers? Can I use a qubit-efficient problem encoding?


Tuesday, September 14, 2021

Fundamental limitations of quantum error mitigation

Near-term quantum processors are too small to run quantum error correction. Noise is inevitable; each gate executed will be accompanied by small errors which accumulate and grow with the circuit depth. The errors grow exponentially with the number of qubits used, quickly rendering the calculation result meaningless. For this reason, there is growing interest in quantum error mitigation schemes which aim to minimise the influence of errors without requiring extra qubits. Some examples:

Zero noise extrapolation, which scales the duration of the quantum gates comprising the circuit in order to controllably increase the level of noise. By repeating the circuit for varying noise levels, one can obtain an estimate of the circuit output in the zero noise limit.
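The extrapolation step itself is simple; here is an illustrative sketch (with made-up expectation values) using a linear fit in the noise-scaling factor:

```python
import numpy as np

# Illustrative zero-noise extrapolation (the expectation values below are
# made up): run the same circuit with the noise artificially amplified by
# known factors, fit a curve, and evaluate it at zero noise.

scale_factors = np.array([1.0, 1.5, 2.0, 3.0])  # gate-stretching factors
measured = np.array([0.81, 0.73, 0.66, 0.54])   # hypothetical <O> at each level

coeffs = np.polyfit(scale_factors, measured, deg=1)  # linear fit
print(f"zero-noise estimate: {np.polyval(coeffs, 0.0):.3f}")
```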

State purification, which assumes that the output of the circuit is close to a pure state. One performs tomography on the (mixed) output state to find the closest pure state, which provides an estimate for the circuit output in the absence of noise.

Quantum error mitigation techniques typically incur an overhead of additional required measurements. A recent arXiv preprint analyzes a generic model of error mitigation schemes, showing that the measurement overhead grows exponentially with the circuit depth. Moreover, the technique of probabilistic error cancellation turns out to be optimal; this method, however, requires a model for the dominant source of noise.

This result poses an interesting question. As the size and complexity of noisy intermediate scale quantum devices increases, will useful applications emerge? Or will we instead encounter a desert in which error mitigation techniques become too time-consuming, but the number of qubits is still too low to run full quantum error correction?

Thursday, September 9, 2021

Even stronger quantum supremacy

A few months ago the group of Jian-Wei Pan at USTC reported a random circuit sampling experiment with 56 qubits, slightly more than the original 53-qubit experiment by the Google team. Theorists have been working on ways to crack these quantum supremacy claims by predicting the probability distributions of the output bitstrings using classical computers, proposing clever techniques including partitioning the full circuit into tractable sub-circuits, using GPUs to efficiently sample correlated bitstrings, and tensor network-based simulations, reducing the required classical simulation time from thousands of years to days.

The exponential scaling of Hilbert space means that a small increase in the number of qubits can increase the classical computational burden by orders of magnitude. In this vein, today the USTC team released a preprint reporting an upgraded 66-qubit quantum processor, increasing the difficulty of classical simulation by three orders of magnitude compared to their earlier preprint. In addition to the higher qubit count, the readout fidelity is improved, enabling sampling from deeper random circuits.


Wednesday, September 8, 2021

Cross-verification of quantum computers

How can we verify that a quantum computer performing a classically-intractable calculation produces the correct result? This is a particularly important question given that current quantum processors are noisy and prone to errors. 

For example, in the recent quantum supremacy experiments by Google and USTC, the quantum circuits essentially functioned as random bitstring generators, making verification of their output extremely challenging. Verification was based on cross-entropy benchmarking, which compares the sampled bitstrings against the ideal probability distributions computed using classical supercomputers. Since those ideal distributions are by definition classically intractable in the quantum supremacy regime, cross-entropy benchmarking could not be applied there directly; instead, the authors performed the benchmarking on classically-simulable circuits employing either fewer qubits, or the full number of qubits with classically-tractable gate sequences. The benchmarking could not be performed on the exact circuits used to claim quantum supremacy, leaving loopholes for skeptics to attack.

An article by C. Greganti and coauthors just published in Physical Review X proposes a neat scheme for verifying the outputs generated by quantum computers. Their cross-verification scheme is based on generating families of circuits comprising different gate sequences and qubit numbers that, nevertheless, correspond to the same computation and should therefore (ideally) generate the same output distribution. 

The required circuit families can be designed using the paradigm of measurement-based quantum computation, in which gates are replaced by measurements on auxiliary qubits. 

Crucially, this cross-verification scheme does not require a classical computer to verify the circuit outputs, making it scalable to the quantum supremacy regime.

This method is more efficient than standard benchmarking methods thanks to a variant of the birthday paradox: verification is based on estimating the probability of obtaining collisions between bitstrings generated by pairs of circuits, and estimating collision probabilities turns out to be much easier than estimating the probabilities of measuring individual bitstrings.
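Estimating a collision probability from two sets of samples is straightforward; here is a small illustrative sketch (my own, not the authors' code):

```python
from collections import Counter

# Illustrative sketch (mine, not the authors' code): estimate the collision
# probability between bitstring samples from two supposedly equivalent
# circuits. Matching output distributions maximise the collision rate;
# noise or miscompilation suppresses it.

def collision_probability(samples_a, samples_b):
    counts_a, counts_b = Counter(samples_a), Counter(samples_b)
    pairs = len(samples_a) * len(samples_b)
    return sum(counts_a[s] * counts_b[s] for s in counts_a) / pairs

samples_1 = ["010", "010", "110", "001"]  # toy samples from circuit 1
samples_2 = ["010", "110", "110", "010"]  # toy samples from circuit 2
print(collision_probability(samples_1, samples_2))  # 6/16 = 0.375
```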

While this method is an improvement, the required number of measurements still grows exponentially with the number of qubits compared during the verification procedure, which is related to how different the two gate sequences are. Nevertheless, at the noise levels of current quantum devices this verification scheme can already resolve differences between hardware platforms using a modest number of qubits (up to 6).

Monday, September 6, 2021

Quantized nonlinear Thouless pumping

I missed that this interesting theoretical and experimental study of quantized Thouless pumping of solitons was published in Nature last month. The authors were kind enough to share their preliminary results with me while the manuscript was still under review.

Conventional Thouless pumping is defined for linear (non-interacting) energy bands. A quantized topological invariant, the Chern number, describes the translation of a wavepacket's centre of mass due to the periodic modulation, assuming the wavepacket uniformly excites a single energy band of the system.
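In standard notation (a textbook result, not specific to this paper), the centre-of-mass displacement per pump cycle is set by the integral of the Berry curvature Ω over the Brillouin zone and one modulation period, i.e. the Chern number C:

```latex
% Quantized Thouless pumping (standard result): centre-of-mass shift per
% cycle, in units of the lattice constant a, equals the Chern number C.
\Delta \langle x \rangle = C\,a, \qquad
C = \frac{1}{2\pi} \int_0^T dt \int_{\mathrm{BZ}} dk \; \Omega(k, t) \in \mathbb{Z}
```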

Recent works have considered the generalization of topological pumping to disordered and interacting quantum systems, including the Thouless pumping of interacting photonic Fock states. But even for interacting quantum systems the time evolution remains governed by a linear Hamiltonian (just in a larger Hilbert space), and so well-developed tools of linear band theory remain applicable.

What is surprising about the present study is that quantized topological pumping can persist for nonlinear wave dynamics governed by the mean-field nonlinear Schrödinger equation. It thus demonstrates that this peculiar phenomenon can be observed in a much wider variety of physical systems than was previously envisaged, including water waves and Bose-Einstein condensates.


Friday, September 3, 2021

Error mitigation strategies for noisy quantum processors

This week two arXiv preprints on error mitigation techniques for quantum processors appeared:

Scalable mitigation of measurement errors on quantum computers, by researchers at IBM Quantum. 

All quantum processors suffer from measurement errors: the probability of measuring a bitstring n is not simply the modulus squared of the quantum state amplitude |𝛹(n)|^2. Without correcting for measurement errors the result returned by the quantum processor will be corrupted.

The IBM Qiskit documentation describes a simple calibration procedure for estimating measurement errors: prepare a bitstring and then repeatedly measure it to obtain a distribution of output bitstrings. Repeating for all basis states allows one to construct a calibration matrix A that can be inverted to transform error-biased measurement probability distributions to the ideal error-free measurement probability distribution.

For an N-qubit system, A is a 2^N × 2^N matrix; the size of the calibration matrix grows exponentially with the number of qubits, so it quickly becomes infeasible to perform a calibration for every individual bitstring. 

Luckily, in many cases the errors at different qubits are uncorrelated to a good approximation, which enables efficient construction of A using O(N) calibration bitstrings. Moreover, only a small additional overhead is required to account for short-range correlations in the measurement errors.
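Here is a sketch of how the uncorrelated-error approximation works in practice (illustrative numbers, not IBM's code): each qubit is calibrated separately, and the full matrix is the tensor product of the single-qubit calibration matrices.

```python
import numpy as np

# Sketch of the uncorrelated-error approximation (illustrative numbers,
# not IBM's code). A_q[i, j] = probability of reading outcome i when
# qubit q was prepared in state j; columns sum to 1.
A0 = np.array([[0.97, 0.05],
               [0.03, 0.95]])
A1 = np.array([[0.98, 0.04],
               [0.02, 0.96]])

A = np.kron(A0, A1)  # full 4x4 calibration matrix for 2 qubits

noisy = np.array([0.62, 0.10, 0.08, 0.20])  # measured distribution (toy)
ideal = np.linalg.solve(A, noisy)           # mitigated estimate
print(np.round(ideal, 4))
```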

Even given A, the need to estimate 2^N probabilities and then solve a linear system of 2^N equations to obtain the ideal distribution means this standard calibration approach cannot be scaled up to large quantum processors. The preprint by Nation et al. proposes a solution to this problem.

Their approach exploits the fact that measurement errors are relatively small for state-of-the-art quantum processors (a few percent for superconducting quantum circuits, and only 0.4% for IONQ's trapped ion system). Therefore the ideal distribution can be considered as a weak perturbation to the noisy measured distribution, allowing one to restrict attention to the elements of A contained within the subspace of sampled bitstrings. 

Current cloud quantum processors typically allow a few thousand measurements per circuit, which provides an upper bound on the size of the sampled subspace. The authors propose either directly inverting A within this subspace, or (for larger subspace sizes) using matrix-free iterative methods to estimate the ideal measurement distribution.
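A sketch of the subspace idea (my own illustration of the concept, not the authors' implementation): restrict A to the rows and columns labelled by bitstrings that actually appear in the measured counts, then solve the much smaller system.

```python
import numpy as np

# Concept sketch (not the authors' implementation): restrict the calibration
# matrix to the bitstrings actually observed, then solve the small system.

def mitigate_in_subspace(counts, full_A, shots):
    observed = sorted(counts)                 # only the sampled bitstrings
    idx = [int(b, 2) for b in observed]
    A_sub = full_A[np.ix_(idx, idx)]          # restricted calibration matrix
    noisy = np.array([counts[b] / shots for b in observed])
    return dict(zip(observed, np.linalg.solve(A_sub, noisy)))

# Toy 2-qubit calibration matrix with uncorrelated errors, as above:
A1q = np.array([[0.97, 0.05], [0.03, 0.95]])
A = np.kron(A1q, A1q)
counts = {"00": 912, "11": 88}                # only 2 of 4 bitstrings observed
print(mitigate_in_subspace(counts, A, shots=1000))
```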

In tests of this improved error mitigation scheme, only a few seconds of calibration time were required for a 42-qubit quantum processor. Previous mitigation schemes were limited to about 10 qubits before the run time started to blow up!

 

Can Error Mitigation Improve Trainability of Noisy Variational Quantum Algorithms?, by researchers at Los Alamos National Laboratory

This preprint appeared today, with more pessimistic results. Many large-scale variational quantum algorithms suffer from the barren plateau problem, which makes them exponentially hard to train, and noise in near-term quantum processors induces barren plateaus under fairly generic conditions. Here the authors study whether a variety of leading error mitigation schemes can suppress or even eliminate these noise-induced barren plateaus. Unfortunately the answer is no, and some schemes even make the barren plateau problem worse...

Wednesday, September 1, 2021

Responding to referees - part 2

In an earlier post I discussed how you should target the journal editor when you respond to referee reports, since ultimately they have the final decision on whether to accept your manuscript.

Of course, having favourable referee reports will make it much easier for them to accept your article. So, how do you convince a reluctant referee to change their mind and recommend publication of your article?

In theory, peer review is an objective process that judges the scientific merits and accuracy of a manuscript. If you address all of the referees' scientific criticisms, then they should have no choice but to recommend publication.
 
Unfortunately, referees are humans with subjective opinions, so whether they like you can be just as important as whether your work is correct. How can you make a hostile referee like you?

Here is an entertaining video of Chris Voss discussing negotiation techniques. While responding to referees is not as high-stakes as a hostage negotiation, his central idea - showing empathy for your adversary - is equally applicable. In crafting your response, you should keep two important questions in mind:

1. Why did the referee respond?

It's not compulsory to review articles; each referee has chosen to take the time to write a report, even though they probably had more pressing tasks to do. Identifying why the referee responded to the editor's request will help you figure out what they want to see in your response. 

I can think of four reasons why referees choose to review: 

(i) The topic of your paper was of interest to them.

(ii) They felt obliged, having recently published an article in the journal themselves.

(iii) They did it as a favour to the editor, with whom they have some personal connection.

(iv) They are building their CV by refereeing for a higher-impact journal.

Did I miss any?

2. What does the referee want to see in your response?

Make it obvious to the referee that you read their report and thought carefully about what they had to say. Summarise (label) their thoughts and even copy (mirror) their phrasing at times. Ideally every point they raise should lead to some change (improvement) to the manuscript. 

We all like to be right and hate being proven wrong. Do not ignore or dismiss any of their comments. Try to agree with what they are saying if possible (if not to the letter then at least in spirit).

Note I am not saying that you need to give in to all of their demands. The aim is to show the referee that you empathise with them and understand their perspective.