Monday, March 29, 2021

Photonic band structure design using persistent homology

 

A short summary of our paper recently published in APL Photonics and presented at the APS March Meeting (talk slides available here).

Optimising the design of photonic crystals is challenging due to the large number of available degrees of freedom. We have shown how a machine learning technique known as persistent homology can be applied to classify the shape of photonic band structures and speed up the design process.

Persistent homology studies the shape of datasets over a range of different scales. Shapes are quantified by counting topological features, such as holes or loops. Features that persist over a wide range of scales typically represent the dataset's true shape, while features with low persistence can be identified as noise and reliably discarded.
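As a minimal illustration of the idea (not the pipeline used in our paper), the open-source ripser package computes persistence diagrams directly from a point cloud; points sampled from a noisy circle produce one long-lived loop feature, while the spurious features born from noise die almost immediately:

```python
# Minimal persistent-homology sketch (assumes numpy and the ripser package are installed).
# Points sampled from a noisy circle contain one "true" loop; its H1 feature persists over
# a wide range of scales, while noise features are born and die almost immediately.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += 0.05 * rng.standard_normal(points.shape)   # add sampling noise

diagrams = ripser(points, maxdim=1)["dgms"]          # H0 and H1 persistence diagrams
h1 = diagrams[1]
persistence = h1[:, 1] - h1[:, 0]                    # death - birth for each loop feature
print("most persistent loop:", persistence.max())
print("median (noise) persistence:", np.median(persistence))
```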

In the context of photonic crystals, several different notions of shape are important. The shape of constant-frequency lines or surfaces in the photonic band structure determines the radiation profile of emitters at that frequency embedded in the photonic crystal. The modes of photonic crystals also have abstract shapes characterised by topological invariants, which currently attract a lot of interest as a means of designing robust cavities and waveguides for light.

We applied persistent homology to characterise low-energy modes in a honeycomb photonic lattice and identify ranges of lattice parameters supporting interesting looped “moat band” and multi-valley dispersion relations. In the future, it will be interesting to apply persistent homology to study the properties of more complex many-body quantum systems.
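As a toy sketch of this kind of analysis (using a generic nearest-neighbour honeycomb tight-binding band rather than the actual photonic model and parameters from the paper), one can sample an iso-frequency set of a band structure and count its persistent loops; each valley shows up as a long-lived H1 feature:

```python
# Toy sketch (not the model from the paper): detect valley loops in an iso-frequency set
# of a nearest-neighbour honeycomb tight-binding band using persistent homology.
# Assumes numpy and ripser are installed.
import numpy as np
from ripser import ripser

def honeycomb_band(kx, ky):
    """Upper band |f(k)| of a honeycomb lattice with unit hopping and unit bond length."""
    f = (1
         + np.exp(1j * (1.5 * kx + np.sqrt(3) / 2 * ky))
         + np.exp(1j * (1.5 * kx - np.sqrt(3) / 2 * ky)))
    return np.abs(f)

# Sample a Brillouin-zone window on a grid and keep k-points near a chosen iso-frequency.
kx, ky = np.meshgrid(np.linspace(-2.5, 2.5, 300), np.linspace(-2.5, 2.5, 300))
E = honeycomb_band(kx, ky)
iso = 0.5                                   # contour energy between the Dirac point (0) and band top (3)
mask = np.abs(E - iso) < 0.03
points = np.column_stack([kx[mask], ky[mask]])

h1 = ripser(points, maxdim=1)["dgms"][1]
persistence = h1[:, 1] - h1[:, 0]
n_loops = np.sum(persistence > 0.2)         # count long-lived loops, i.e. valleys in this window
print("persistent loops (valleys) detected:", n_loops)
```

The same loop-counting logic can be applied to the lower band of a lattice with a moat-like ring of minima, where the ring itself appears as a single highly persistent loop.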

Friday, March 12, 2021

Quantum chemistry on near-term quantum computers: the measurement problem

Quantum chemistry calculations, such as computing the ground-state energies of complex molecules, are touted as a promising application of near-term quantum computers. While numerous proof-of-concept calculations have been performed for small molecules, whether near-term quantum algorithms will offer any useful advantage over existing classical computers remains an intensely debated question.

The variational quantum eigensolver is a leading candidate algorithm for near-term quantum computers. It uses a hybrid quantum-classical feedback loop to optimise the parameters of the quantum gates while minimising the required circuit depth. Each iteration of the algorithm requires estimating the energy of the trial solution, which takes O(1/ε^2) measurements to achieve an accuracy of ε. This becomes a severe bottleneck as the problem size increases.
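For a rough sense of the numbers (the single-shot standard deviation below is an arbitrary placeholder, not a value from any particular experiment):

```python
# Back-of-the-envelope shot count for energy estimation by repeated sampling.
# sigma is the standard deviation of a single-shot energy estimate; the value used
# here is an arbitrary placeholder, not a number from any specific experiment.
chemical_accuracy = 1.6e-3     # Hartree
sigma = 1.0                    # Ha, assumed single-shot standard deviation (placeholder)

for eps in (1e-2, chemical_accuracy, 1e-4):
    shots = (sigma / eps) ** 2          # O(1/eps^2) scaling of the sampling cost
    print(f"target accuracy {eps:.1e} Ha  ->  ~{shots:.1e} shots per energy evaluation")
```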

For example, Google's recent paper preparing the Hartree-Fock wavefunction required 250,000 measurements to estimate each observable to sufficient accuracy, translating to a few seconds per iteration of the algorithm with N=12 qubits. They employed clever tricks to obtain the N^2 one-particle reduced density matrix elements required to compute the energy using only N+1 measurement settings. When electronic correlations are taken into account, N^4 two-particle reduced density matrix elements are needed to estimate the energy. This poor scaling must be improved for the variational quantum eigensolver to have any practical applications.
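To see why the jump from the one-particle to the two-particle reduced density matrix matters, here is a toy count of the element numbers against the N+1 measurement settings mentioned above (N=12 matches the Hartree-Fock experiment; the larger sizes are illustrative):

```python
# Toy comparison of the measurement-cost scalings discussed above.
# N^2 ~ one-particle RDM elements, N^4 ~ two-particle RDM elements,
# N+1 ~ measurement settings used for the 1-RDM in the Hartree-Fock experiment.
for N in (12, 50, 100):
    print(f"N={N:4d}  1-RDM elements ~ {N**2:>10,}  "
          f"2-RDM elements ~ {N**4:>12,}  measurement settings (1-RDM trick) = {N + 1}")
```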

Interestingly, larger error-corrected quantum computers may not solve this problem: pessimists estimate that millions of qubits and years of runtime would be required to run error-corrected algorithms such as quantum phase estimation on useful problem sizes.

One approach to tackling this problem is to use more efficient problem encodings. The most commonly employed second quantization approach uses N qubits to encode N electron orbitals, and can represent states containing up to N electrons. However, the number of orbitals typically needs to be considerably larger than the number of electrons K in order to converge to accurate ground state energies. Alternatively, a first quantization encoding represents the quantum state using only K log N qubits. Estimating the energy then requires measuring elements of a Hamiltonian with ~(KN)^2 nonzero elements. However, one needs to be careful to ensure that the required antisymmetry of the trial wavefunction is preserved. Nevertheless, this approach may be superior for achieving chemical accuracy when a large number of basis states is required. Further discussion on this point can be found in this paper.
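A rough qubit-count comparison of the two encodings, ignoring constant factors and the overhead of enforcing antisymmetry (the electron and orbital numbers below are purely illustrative):

```python
import math

# Rough qubit-count comparison of the two encodings discussed above
# (ignores constant factors, antisymmetrisation overhead and error correction).
def second_quantization_qubits(n_orbitals):
    return n_orbitals                       # one qubit per orbital

def first_quantization_qubits(n_electrons, n_orbitals):
    return n_electrons * math.ceil(math.log2(n_orbitals))   # K * log2(N)

K = 10                                       # number of electrons (illustrative)
for N in (50, 500, 5000):                    # basis-set sizes (illustrative)
    print(f"N={N:5d} orbitals: 2nd quant. {second_quantization_qubits(N):5d} qubits, "
          f"1st quant. {first_quantization_qubits(K, N):4d} qubits, "
          f"~(KN)^2 = {(K * N) ** 2:.1e} nonzero Hamiltonian elements")
```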

Some other useful references I read today on this topic:

This preprint estimates runtimes for applying the quantum phase estimation algorithm to compute energies of catalysts.

This preprint proposes a Bayesian approach to estimating energies with runtime interpolating between the 1/ε^2 of the variational quantum eigensolver and the 1/ε of quantum phase estimation, which trades longer circuit depths for fewer required measurements.

This preprint discusses the measurement problem further.

 

Wednesday, March 10, 2021

Xanadu's latest quantum photonic chip

The photonic quantum computing company Xanadu published an article in Nature last week reporting the capabilities of their latest programmable Gaussian BosonSampling chip. The device generates certain eight-mode quantum states of light, which are detected (off-chip) using superconducting number-resolving photon detectors. In contrast to the much-publicised 100-mode Gaussian BosonSampling experiment by the group of Jian-Wei Pan last year, this device is programmable. However, it will need to be scaled up to a much larger number of modes in order to solve problems that are difficult for existing classical computers. Three proof-of-principle quantum algorithms were demonstrated using this chip:

1. Gaussian BosonSampling. BosonSampling is the problem of sampling the multi-photon coincidences obtained after applying a unitary transformation to non-classical states of light. For (Gaussian) squeezed input states, the probability of each detection event is related to a property of the unitary transformation (the Hafnian of one of its submatrices), which is hard to compute classically (see the Hafnian sketch after this list).

2. Molecular vibronic spectra. Here the problem is to compute the absorption or emission spectra of molecules, corresponding to changes in their internal (vibrational) states. These spectra can serve as molecular fingerprints. Although this task has not been proven to be computationally hard, no efficient classical algorithms are known. By complementing the Gaussian BosonSampling circuit with an additional transformation (state displacement), the output photon number distribution can be used to simulate molecular vibronic spectra. Unfortunately, the present Xanadu chip does not implement displacements, and was therefore only used to solve a toy problem of this class (without displacements, and with limited squeezing). In the case of coherent states, displacements can be implemented using a beamsplitter and an auxiliary mode (see the beamsplitter sketch after this list). Another approach appears to employ electro-optic modulators.

3. Graph similarity. This is a variant of Gaussian BosonSampling. The connectivity of a graph is encoded by its adjacency matrix, which is in turn encoded into a multimode Gaussian state by suitably choosing the squeezing parameters and the unitary matrix. The distribution of output photon counts then provides a fingerprint of the graph that can be used to compare the similarity of different graphs. In particular, the probability of an individual combination of output photons can be related to the number of perfect matchings of a subgraph of the graph (see the sketch below). This is generally a hard task (related to the Hafnian), but if all the elements of the adjacency matrix have the same sign, efficient approximate algorithms exist. I am not sure what important real-world problems involve adjacency matrices with elements of mixed signs...
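To make the Hafnian connection in items 1 and 3 concrete, here is a small example using Xanadu's open-source The Walrus library (assuming it is installed): the Hafnian of a graph's adjacency matrix counts its perfect matchings, and the 4-cycle has exactly two.

```python
# Hafnian sketch (assumes numpy and Xanadu's `thewalrus` package are installed).
# For a graph adjacency matrix, the hafnian counts perfect matchings; the 4-cycle
# 1-2-3-4-1 has exactly two: {12, 34} and {23, 41}.
import numpy as np
from thewalrus import hafnian

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of the 4-cycle

print("perfect matchings of C4:", hafnian(A))   # expected: 2
```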
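And for the displacement trick mentioned in item 2, here is my own numpy cartoon (not Xanadu's implementation): mixing the signal mode with a strong auxiliary coherent state on a nearly transmissive beamsplitter displaces its mean field while leaving its covariance essentially untouched.

```python
import numpy as np

# Sketch of implementing a displacement with a beamsplitter and an auxiliary coherent mode,
# in the Gaussian (means + covariance) picture with hbar = 1 and quadratures ordered
# (x1, p1, x2, p2). This illustrates the standard trick, not the chip's actual design.

def beamsplitter_symplectic(t):
    """Symplectic matrix of a beamsplitter with amplitude transmissivity t."""
    r = np.sqrt(1 - t**2)
    return np.block([[ t * np.eye(2), r * np.eye(2)],
                     [-r * np.eye(2), t * np.eye(2)]])

target_beta = 1.0 + 0.5j            # displacement we want on the signal mode
t = 0.9999                          # nearly transmissive beamsplitter
r = np.sqrt(1 - t**2)
alpha = target_beta / r             # strong auxiliary coherent amplitude

# Signal mode: vacuum.  Auxiliary mode: coherent state |alpha>.
means = np.array([0.0, 0.0, np.sqrt(2) * alpha.real, np.sqrt(2) * alpha.imag])
cov = np.eye(4) / 2                 # vacuum/coherent-state covariance (hbar = 1 convention)

S = beamsplitter_symplectic(t)
means_out = S @ means
cov_out = S @ cov @ S.T

beta_out = (means_out[0] + 1j * means_out[1]) / np.sqrt(2)
print("effective displacement on signal mode:", beta_out)             # ~ target_beta
print("signal covariance unchanged:", np.allclose(cov_out[:2, :2], np.eye(2) / 2))
```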


Wednesday, March 3, 2021

Detecting earthquakes with optical fibres

An interesting study was published in Science last week, showing how transoceanic optical fibres can be used to detect seismic waves. This is important because the vast majority of seismic receivers are located on land, which limits our knowledge of the Earth's interior. There are attempts to address this imbalance using seismic receivers on the ocean floor; however, since they are expensive to deploy, their coverage remains limited.

The present study shows that the existing transoceanic optical fibre networks which form the backbone of the internet can help fill this gap in seismic receiver coverage. The authors' idea is to sense seismic waves by detecting changes in the polarization of light transmitted through oceanic fibres.

Modern optical fibres use the polarization degree of freedom of light to increase bandwidth by encoding signals into both polarization channels. However, the polarization is not fixed during propagation and is perturbed by various effects, including temperature changes and vibrations. A decoder at the end of the fibre measures and corrects for these distortions.
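As a cartoon of the measurement principle (my own toy model, not the authors' processing pipeline), one can represent the fibre as a slowly drifting polarization rotation and look for a faster transient riding on top of the drift:

```python
import numpy as np

# Cartoon of the measurement principle (not the authors' processing pipeline):
# the fibre acts as a slowly drifting polarization rotation on a fixed input Jones vector;
# a seismic wave adds a faster transient on top of the drift, visible in the rate of
# change of the output Stokes parameters. All amplitudes and timescales are made up.

def rotation(theta):
    """Jones matrix rotating the polarization plane by angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def stokes_s2(jones):
    """S2 Stokes parameter, 2*Re(Ex * conj(Ey)), of a Jones vector."""
    return 2 * np.real(jones[0] * np.conj(jones[1]))

t = np.linspace(0, 600, 6000)                       # time samples (arbitrary units)
drift = 0.02 * np.sin(2 * np.pi * t / 600)          # slow environmental drift (assumed)
quake = 0.005 * np.sin(2 * np.pi * 0.05 * (t - 300)) * (np.abs(t - 300) < 60)  # transient

e_in = np.array([1.0, 0.0])                         # fixed input polarization
s2 = np.array([stokes_s2(rotation(a) @ e_in) for a in drift + quake])

rate = np.abs(np.diff(s2))                          # decoder-style tracking of polarization change
print("background change rate:", rate[:1000].mean())
print("peak change rate during the transient:", rate.max())
```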

On land there are many sources of noise in the polarization, including vibrations caused by passing traffic. The weak signals caused by seismic waves are typically masked by the noise. For sub-sea fibres the noise is orders of magnitude weaker, enabling their use as seismic sensors.

For their study, the authors used a recently deployed 10,000 km fibre link between Los Angeles, USA and Valparaiso, Chile. By measuring changes in the polarization of light emerging from the fibre, they detected signals from several large earthquakes occurring over a period of several months.

The fiber could also detect ocean waves caused by storms. Therefore, another useful application of this method may be in the detection of tsunamis. Further work will be required to improve the sensitivity of the technique, such as by combining measurements from several nearby fibres. 

Reference: Optical polarization–based seismic and water wave sensing on transoceanic cables