
Thursday, May 2, 2024

From NISQ to small logical quantum circuits

After six years of huge interest in NISQ (noisy intermediate-scale quantum) circuits, there are still no practical applications where a noisy quantum device can outperform the best classical methods. Noise is too detrimental, and classical methods are too powerful. Experts continue to argue that now is not the time for commercial applications: quantum error correction, hundreds of logical qubits, and millions of error-corrected gates are needed.

Then what's next? Circuits of moderate size with some limited error correction capabilities. Call it LISQ (logical intermediate-scale quantum), or whatever name ends up sticking.

What can we expect from these up-and-coming small-scale logical circuits?

First, a lot of the tools developed for the NISQ era will become obsolete. For example, variational quantum circuits involving continuously-parameterised quantum gates cannot be easily implemented in a fault-tolerant manner. Instead, post-variational hybrid quantum-classical algorithms for this era will need to offload the continuously-parameterised part of the algorithm to a classical computer, using the quantum circuit to measure a set of (hopefully classically-intractable) observables that serve as inputs to the classical tunable model.
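To make the division of labour concrete, here is a minimal single-qubit cartoon of this post-variational split (my own toy construction, not a scheme from the literature): the quantum side is a fixed data-encoding circuit whose measured observables become features, and every trainable parameter lives in a classical linear model.

```python
import numpy as np

# Toy post-variational pipeline (hypothetical, single qubit): a *fixed*
# encoding circuit |psi(x)> = RY(x)|0> is measured to give a feature vector
# (<X>, <Y>, <Z>); all tunable parameters live in the classical model.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def quantum_features(x):
    psi = np.array([np.cos(x / 2), np.sin(x / 2)], dtype=complex)
    return np.array([np.real(psi.conj() @ P @ psi) for P in (X, Y, Z)])

xs = np.linspace(0, np.pi, 50)
ys = 0.3 * np.sin(xs) + 0.5 * np.cos(xs)              # toy target function

Phi = np.vstack([quantum_features(x) for x in xs])    # "quantum" feature map
w, *_ = np.linalg.lstsq(Phi, ys, rcond=None)          # classical tunable part
print("fit residual:", np.linalg.norm(Phi @ w - ys))  # ~0 for this target
```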

Second, the hardware, the algorithms, and the error-correcting code cannot be considered in isolation. Choosing the right error-correcting code will be essential to get the most out of the current hardware. Examples of this can be seen in QuEra's logical circuit demonstration from late last year, where the use of a 3D quantum error correction code allowed them to perform random IQP circuit sampling with error detection, and Quantinuum's recent demonstration of repeated error correction. Similar to the NISQ era, different hardware platforms will have different strengths and limitations in what kinds of circuits they will be able to run.

Finally, the most valuable software tools in the NISQ era were for quantum control and state tomography, essential to get the most out of the noisy hardware. These tools will remain important, since fidelities at the physical qubit level directly affect the amount of quantum error correction overhead required. As we move to logical circuits, the new valuable quantum software will come in the form of compilers that take the hassle of hardware and error-correcting-code selection away from the end-user and translate a given logical circuit into simple, understandable hardware requirements.

Monday, November 14, 2022

Quantum Computing News and Views

In the past month there have been several interesting developments related to quantum computing:

Spoofing random circuit sampling

Claims of quantum supremacy via random circuit sampling are troublesome because it is so difficult to verify that the quantum processor is working correctly. The Google team used cross entropy benchmarking to verify their original experimental demonstration. The cross entropy benchmarking fidelity is essentially a measure of how likely the observed sequence of bitstrings is under the simulated output probabilities of the circuit. One criticism of this approach is that this fidelity cannot be computed for circuits operating in the classically-intractable quantum supremacy regime. The authors were limited to computing the fidelity for smaller, classically-tractable circuits and comparing its scaling to the expected gate error rates. This approach has several issues, explained clearly in a blog post by Gil Kalai discussing a preprint from last year, Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage.
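For reference, the linear cross entropy benchmarking fidelity used in these works takes the form (up to conventions)

$$F_{\text{XEB}} = 2^n \, \langle p_{\text{ideal}}(x_i) \rangle_i - 1,$$

where $n$ is the number of qubits, the average runs over the sampled bitstrings $x_i$, and $p_{\text{ideal}}$ is the simulated output probability of the ideal circuit. Ideal sampling gives $F_{\text{XEB}} \approx 1$, while uniformly random bitstrings give $F_{\text{XEB}} \approx 0$.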

New attacks on the cross entropy benchmarking fidelity have now appeared as arXiv preprints:

Changhun Oh and collaborators have come up with a method for spoofing the cross entropy benchmarking fidelity in recent Gaussian boson sampling experiments.

Dorit Aharonov and collaborators show that samples from noisy quantum circuits can be computed classically to a good approximation in polynomial time.

Classical shadows and bosonic modes

Classical shadows continue to be a highly fruitful avenue for a potential near-term quantum advantage. Hsin-Yuan Huang and collaborators have now shown how classical shadows can be used to predict arbitrary quantum processes. To be precise, they give a procedure for estimating observables of a quantum state $\rho$ following application of a completely positive trace-preserving map (describing e.g. evolution of a quantum state in the presence of environmental noise).
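As a reminder of how the basic (state) version of the protocol works, here is a minimal single-qubit sketch of classical shadows with random Pauli-basis measurements (a textbook toy, not the process-learning scheme of the new paper):

```python
import numpy as np

# Single-qubit classical shadows with random Pauli measurements.
# Test state: |+>, so the true expectation <X> = 1.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

rng = np.random.default_rng(42)
shadow_sum = np.zeros((2, 2), dtype=complex)
n_shots = 20000

for _ in range(n_shots):
    W = (X, Y, Z)[rng.integers(3)]          # random Pauli measurement basis
    _, evecs = np.linalg.eigh(W)            # eigenvectors = possible outcomes
    probs = np.real([v.conj() @ rho @ v for v in evecs.T])
    k = rng.choice(2, p=probs / probs.sum())
    v = evecs[:, k]
    # Invert the measurement channel: snapshot = 3|v><v| - I.
    shadow_sum += 3 * np.outer(v, v.conj()) - I2

rho_hat = shadow_sum / n_shots
print("shadow estimate of <X>:", np.real(np.trace(X @ rho_hat)))  # ~1.0
```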

When we first became interested in classical shadows almost a year ago, a natural question that arose was whether the qubit-based formalism could be translated to bosonic systems. This turns out to be a hard problem, because the proof of optimality of shadow methods rests on properties of finite-dimensional vector spaces which do not easily translate to infinite-dimensional continuous variable quantum systems, as discussed in The Curious Nonexistence of Gaussian 2-Designs.

A team at NIST/University of Maryland now appears to have solved this problem. First, they show that the non-convergent integrals arising in the continuous variable case can be avoided by considering "rigged t-designs" obtained by expanding the Hilbert space to include non-normalizable states. Experimentally-meaningful predictions using this approach can be obtained by regularizing the designs to have a finite maximum or average energy. In a second study the team recasts existing bosonic mode tomography approaches in the classical shadow language to obtain bounds for estimating unknown bosonic states. In practice, however, homodyne tomography performs significantly better than the bounds obtained using classical shadows. These works open up further studies and applications of shadow-based methods to bosonic systems.

Edit: Another paper on continuous variable shadows appeared today: arXiv:2211.07578

IBM's Osprey 433-qubit quantum processor

Last week IBM made several announcements as part of their annual quantum summit, including their new 433-qubit Osprey device. Olivier Ezratty posted a great hype-free analysis of IBM's marketing announcements. In short, large two-qubit gate errors make such large qubit counts useless at this stage, but the advances in integration of the classical control electronics are an important step forward. IBM is still in the lead in the development of superconducting quantum processors. It is also nice to see that they are working towards integrating error mitigation techniques into their software. At present, cloud quantum computing providers release devices with atrocious error rates (coughRigetticough), which requires users to spend significant time and money implementing error mitigation techniques in the hope of extracting something useful out of their devices.

Bad news for NISQ?

Sitan Chen and collaborators analyze the capabilities of NISQ devices using tools from complexity theory, obtaining a complexity class lying between classical computation and noiseless quantum computation. Their model rules out quantum speedups for certain algorithms, including Grover search, when run on near-term noisy quantum devices.

Variational quantum chemistry requires gate-error probabilities below the fault tolerance threshold. The authors compare different variational quantum eigensolvers. Gate errors severely limit the accuracy of energy estimates, making the minimization of circuit depth of prime importance. Thus, hardware-efficient ansatzes are preferred over deeper physically-inspired variational circuits. Regardless, supremely low error rates seem to be needed to reach chemical accuracy.

Exponentially tighter bounds on limitations of quantum error mitigation. Quoting from the abstract, "a noisy device must be sampled exponentially many times to estimate expectation values." At first glance, this is good news for cloud quantum processor providers - simply by making your qubits noisier, you can increase your revenue exponentially! However, "there are classical algorithms that exhibit the same scaling in complexity," so classical cloud computing providers will give you much much much better value for money. "If error mitigation is used, ultimately this can only lead to an exponential time quantum algorithm with a better exponent, putting up a strong obstruction to the hope for exponential quantum speedups in this setting."

Towards fault tolerant quantum computing

Daniel Gottesman has a relatively accessible survey on quantum error correcting codes. I was surprised to see that the problem of quantum error correction has not been reduced to an engineering challenge of building bigger devices with the required fidelities; work on understanding the properties and developing better error correction methods more robust against different error types is still an active area of research.

Quantum error correcting codes typically assume that errors are uncorrelated in time and space. In the presence of correlated errors at best you require a lower error rate for quantum error correction to work, and at worst the accumulation of uncorrectable errors will make the result of the computation useless. The Google team have put out a preprint aimed at reducing correlated errors arising from the excitation of higher-order states in their superconducting qubits.

Tuesday, May 24, 2022

ArXiv catchup

Deadlines abound, so I haven't been following arXiv postings that closely. Some papers of note from the last few weeks:

Dielectric Mie Voids: Confining Light in Air. The authors demonstrate a metasurface analogue of photonic crystal fibers. A neat idea that could enable ultra-low loss flat optics.

Beyond Barren Plateaus: Quantum Variational Algorithms Are Swamped With Traps. "We prove that a wide class of variational quantum models -- which are shallow, and exhibit no barren plateaus -- have only a superpolynomially small fraction of local minima within any constant energy from the global minimum, rendering these models untrainable if no good initial guess of the optimal parameters is known." End-users beware: this work adds to evidence that variational quantum algorithms may not be scalable up to useful problem sizes.

Persistent homology analysis of a generalized Aubry-André-Harper model. Our own recent work on applying topological data analysis to study localization transitions in photonic lattices. What we found particularly exciting is that this is our first example of TDA discovering an unanticipated effect - the emergence of disorder-free eigenstates for certain model parameters.

Experimentally realized in situ backpropagation for deep learning in nanophotonic neural networks. Spoiler: the nonlinear activation functions are implemented digitally. Incorporating useful nonlinear optical response into integrated photonic neural networks while outperforming conventional electronic circuits is a big challenge. One promising potential solution is to employ measurement-based nonlinearities combined with feedforward.

Fabrication-Robust Silicon Photonic Devices in Standard Sub-Micron Silicon-on-Insulator Processes. With the continuing interest (and hype) in topological photonics it's important to keep in mind conventional non-topological approaches for designing photonic devices. For example, wider waveguides are more robust to fabrication imperfections, at the expense of being multimode. This is a problem, because waveguide bends will then induce coupling between the different guided modes, corrupting signals. Here the authors demonstrate how a clever choice of bending profile can suppress unwanted inter-modal coupling while not increasing the device footprint.

Breakdown of quantization in nonlinear Thouless pumping. Quantized adiabatic pumping of solitons attracted a lot of interest last year. This theoretical analysis shows how quantization can break down for moderate nonlinearity strengths due to the emergence of loops in the adiabatic energy spectrum, leading to dead-ends in the adiabatic path and resulting in sudden non-adiabatic transitions of the soliton.



Monday, April 11, 2022

Generalised coherent states for variational quantum circuits

One challenge with designing variational quantum algorithms is choosing a good ansatz for the parameterised quantum circuit. One needs to balance multiple competing requirements: 

1. The circuit needs to be complex enough to be able to express a good solution to the problem at hand.

2. The parameterisation should be simple enough to admit analytical calculation of gradients (see the parameter-shift rule after this list), so that the solution can be found efficiently using gradient descent.

3. The circuit should generate non-trivial quantum entanglement, which is a necessary (but not sufficient) condition for a classical computer to be inefficient at obtaining the same solution.
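On the second point, the standard way to obtain analytical gradients is the parameter-shift rule: for a gate generated by a Pauli string $P$, $U(\theta) = e^{-i\theta P/2}$, the gradient of any measured expectation value $\langle O \rangle_\theta$ is

$$\frac{\partial \langle O \rangle_\theta}{\partial \theta} = \frac{1}{2}\left( \langle O \rangle_{\theta + \pi/2} - \langle O \rangle_{\theta - \pi/2} \right),$$

i.e. an exact gradient obtained from just two extra circuit evaluations per parameter.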

The most common choices for parameterised circuits either do not meet all three requirements, or give rise to very deep circuits that cannot be run on current quantum processors.

Last week I saw an interesting preprint posted to arXiv:

A Variational Ansatz for the Ground State of the Quantum Sherrington-Kirkpatrick Model

This work builds on an earlier paper by the same group: Generalization of group-theoretic coherent states for variational calculations.

The authors consider variational quantum states based on generalised coherent states, which are formed by applying single qubit and two qubit (entangling) rotations. Crucially, the form of the ansatz enables expectation values of Pauli operators (required to obtain energies and gradients) to be measured efficiently, with the required number of measurements scaling polynomially in the system size. At the same time, the non-trivial entanglement of the generalised coherent states means that evaluating the overlap between different parameterised states is a classically hard task.

These properties seem quite promising for quantum kernel methods: the analytical gradients would allow one to optimise key properties of the quantum feature map, while the model output (quantum state overlaps with the training data) remains hard to simulate classically.

It is interesting to speculate on whether these parameterised quantum states could be adapted to form the basis for a quantum perceptron; they are simple and efficiently trainable, but I am not sure whether they are also modular and can be stacked to form a deep quantum neural network.

Thursday, March 3, 2022

arxiv highlights

Quantum persistent homology

A generalization of the quantum algorithm by Lloyd et al., which provides an exponential speedup for computing Betti numbers, enabling the computation of persistent Betti numbers. Like the original algorithm, however, it assumes the existence of a QRAM allowing the input data to be queried in a quantum superposition. It is an open question whether the exponential speedup remains when the data-encoding overhead is taken into account. See also the related arXiv:2111.00433.
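For readers wanting to see the classical target computation, persistent Betti numbers can be read off persistence diagrams computed with standard classical TDA tools; a minimal sketch (assuming the ripser package is installed) for a noisy point cloud sampled from a circle:

```python
import numpy as np
from ripser import ripser  # classical persistent homology package

# Noisy samples from a circle: one long-lived H1 (loop) feature expected.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)
cloud = np.c_[np.cos(angles), np.sin(angles)] + 0.05 * rng.normal(size=(100, 2))

dgms = ripser(cloud, maxdim=1)['dgms']   # persistence diagrams for H0, H1
h1 = dgms[1]
lifetimes = h1[:, 1] - h1[:, 0]
print("most persistent H1 feature (birth, death):", h1[np.argmax(lifetimes)])
```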

 

Anomalous single-mode lasing induced by nonlinearity and the non-Hermitian skin effect

Highlighting a nice collaboration I was involved in. One limitation of many topological or PT-symmetric models for single mode lasers is that they require a structured pump. Such structured pumping will necessarily lower the device efficiency (in terms of output power / device size). Here we show counterintuitively how nonlinear gain saturation can lead to the emergence of stable single mode lasing in uniformly-pumped systems exhibiting the non-Hermitian skin effect. Due to the non-Hermitian skin effect, most of the linear modes become localized to the boundary of the system. However, a few (non-extensive) delocalized bulk modes remain and can be used as large volume lasing modes.

CAFQA: Clifford Ansatz For Quantum Accuracy

This is a neat approach for addressing the barren plateau problem that makes quantum neural networks (and other variational quantum algorithms) expensive to train. The idea is to initialize the circuit as a set of Clifford gates, which are efficiently simulable using classical computers. Therefore a classical computer can be used to find the best Clifford circuit approximation to the solution of the problem. The gate parameters are then allowed to deviate from those corresponding to Clifford gates, producing classically intractable states, and are optimized by running the quantum circuit. The better Clifford starting ansatz allows the quantum circuit optimization to converge more quickly, minimizing the number of expensive quantum circuit evaluations.
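A two-qubit cartoon of this two-stage strategy (my own toy construction; it uses brute-force state-vector simulation, whereas the real scheme exploits efficient classical simulation of Clifford circuits):

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
H = np.kron(Z, Z) + 0.7 * np.kron(X, I2)   # arbitrary toy Hamiltonian

def RY(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def energy(angles):
    psi = np.zeros(4); psi[0] = 1.0                      # start in |00>
    psi = CNOT @ np.kron(RY(angles[0]), RY(angles[1])) @ psi
    return psi @ H @ psi

# Stage 1: classical brute-force search over "Clifford points" (RY angles
# restricted to multiples of pi/2, where the circuit is a Clifford circuit).
grid = [np.array(a) for a in product(np.arange(4) * np.pi / 2, repeat=2)]
start = min(grid, key=energy)

# Stage 2: continuous refinement from the Clifford starting point (here a
# simple finite-difference descent, with a small kick to escape saddles).
rng = np.random.default_rng(0)
theta = start + 0.1 * rng.normal(size=2)
eps, lr = 1e-4, 0.2
for _ in range(300):
    grad = np.array([(energy(theta + eps * e) - energy(theta - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    theta -= lr * grad
print("Clifford start:", energy(start), "-> refined:", energy(theta))
```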


Physics-informed neural networks are a new approach for solving partial differential equations, based on minimizing a cost function measuring the deviation from the equation being fulfilled at a set of points in the bulk and at the edges of the domain of interest. Once trained, one can obtain the field value at any desired point x within the domain. Mesh-free solutions of Maxwell's equations are one promising application. This preprint applies the physics-informed neural network approach to solve the time-independent single particle Schrödinger equation. I wonder whether this approach will also be useful for the many-body problem: since it does not require storing all the wave function components in memory, one can instead query the wavefunction at the desired set of points.
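Schematically (in my own notation), for a PDE $\mathcal{N}[u] = 0$ on a domain $\Omega$ with boundary conditions $\mathcal{B}[u] = 0$ on $\partial\Omega$, the network $u_\theta$ is trained by minimizing

$$\mathcal{L}(\theta) = \frac{1}{N_b}\sum_{i=1}^{N_b} \big| \mathcal{N}[u_\theta](x_i) \big|^2 + \frac{\lambda}{N_e}\sum_{j=1}^{N_e} \big| \mathcal{B}[u_\theta](x_j) \big|^2,$$

with collocation points $x_i$ sampled in the bulk, $x_j$ on the boundary, and $\lambda$ a weight balancing the two terms.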

Thursday, January 20, 2022

Mapping optimization problems onto boson sampling circuits

Certain properties and applications of shallow bosonic circuits

This arXiv preprint comes from ORCA Computing, an Oxford University spin-off company pursuing fibre optic-based quantum computing. In brief, the fibre optic platform uses trains of time-delayed single photon pulses and number-resolved photon detection.

The authors consider how variational quantum eigensolvers, an immensely popular class of algorithms for near-term qubit-based systems such as superconducting quantum processors, may be mapped to shallow linear interferometers employed for boson sampling experiments. Their idea is:

1. Boson sampling circuits sample from a distribution of integer strings $(n_1, n_2, \dots, n_M)$, where $n_i$ is the number of photons detected in mode $i$.

2. Taking the parity of the output maps this to a distribution of bitstrings $(b_1, \dots, b_M)$, which can be interpreted as a measurement of a qubit-based system in the computational basis.

3. Under certain constraints this parity mapping allows one to sample from a complete basis of the qubit Hilbert space and thereby measure cost functions of binary optimization problems such as QUBO (see the sketch after this list).

4. Output observables obey the parameter shift rule, making gradient-based minimization of the cost function, and hence solution of the optimization problem, practical.
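To illustrate steps 2 and 3, here is a toy version of the parity mapping with stand-in samples (randomly generated here in place of real boson sampler output; the QUBO matrix Q is likewise hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6                                            # number of optical modes
Q = rng.normal(size=(M, M)); Q = (Q + Q.T) / 2   # hypothetical QUBO instance

# Fake photon-number samples (n1,...,nM); hardware would provide these.
photon_samples = rng.poisson(lam=0.8, size=(1000, M))
bitstrings = photon_samples % 2                  # parity map: n_i -> b_i

# QUBO cost b^T Q b for every sampled bitstring.
costs = np.einsum('si,ij,sj->s', bitstrings, Q, bitstrings)
best = bitstrings[np.argmin(costs)]
print("best sampled bitstring:", best, "with cost", costs.min())
```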

I think studies such as this on mappings between qubit-based and bosonic NISQ devices are important. While a few general-purpose optimization algorithms for qubit systems have been developed, proposed applications of NISQ bosonic circuits remain largely limited to "hardware native" schemes, i.e. simulation of properties of bosonic Hamiltonians such as molecular vibration spectra. This work provides a general scheme by which one might solve more general optimization problems using boson sampling devices, making them much more valuable for end-users.

However, two caveats I can see:

1. The QAOA algorithm is perhaps the most promising variational quantum algorithm, because it is a structured (problem-inspired) circuit with relatively few free parameters that need to be optimized, with some rigorous performance guarantees. While linear interferometers are described by a manageable number of free parameters, there is no guarantee that this kind of hardware-efficient circuit can always express the optimal solution to the optimization problem at hand.

2. The elephant in the room: Are the proposed circuits hard to simulate on a classical computer? While exact boson sampling is provably hard, hardness of the approximate sampling problem rests on the number of optical modes M being much larger than N^2, where N is the total number of input photons. In this limit there is a very low probability of detecting more than one photon in any output mode and the number resolved detection and parity map are not required. In their proof of concept simulations, however, the authors consider only the case M ~ N...

Friday, October 15, 2021

arxiv papers

A few arXiv preprints that caught my eye this week:

Topology in shallow-water waves: A spectral flow perspective: This intriguing paper demonstrates a novel form of the topological bulk-boundary correspondence for continuous wave systems such as water waves, for which the more conventional bulk-boundary correspondence fails. In this case, the edge invariant describes a peculiar form of topological pumping driven by changes in boundary conditions. The authors write: "We are not aware of any physical evidence of such a quantity, but because of its stability there might be some way to observe it in some clever device." Light propagation in a suitable homogeneous medium described by Maxwell's equations seems like a prime candidate. Who will be first?

 
A Qubit-Efficient Encoding Scheme for Quantum Simulations of Electronic Structure: The authors propose a method to map quantum chemistry problems onto a number of qubits that grows only logarithmically with the number of orbitals simulated. Such qubit-efficient mappings are particularly important for solving practical problems on near-term quantum processors supporting a limited number of qubits. The trade-off, however, is that the design of ansatz circuits (for variational quantum algorithms) becomes a lot harder, so far limited to so-called hardware-efficient circuits that may be challenging to optimise to find the ground state of the system.


Topological Wannier cycles. This work was mentioned by Prof. Jiang at the METANANO conference last month. Threading a flux through a single unit cell of higher-order topological insulators induces a novel form of topological pumping that can be used to distinguish certain higher-order topological phases that cannot be distinguished via the bulk-edge correspondence.


Non-randomness of Google's quantum supremacy benchmark. The authors apply various statistical tests to the bitstrings sampled in Google's famous quantum supremacy paper. Due to noise and imperfections in the gates, the circuit does not precisely sample the family of random unitary matrices, and therefore the sampled bitstrings exhibit subtle biases. For example, qubits are 2.8% more likely to be sampled in their ground state. This suggests it may be possible to classically spoof the sampling performed by the quantum processor, challenging the claimed quantum supremacy.

 

The Complexity of Bipartite Gaussian Boson Sampling. Xanadu and collaborators analyze a variant of their programmable quantum photonic chip demonstrated earlier this year, in which different unitary transformations U and V are applied to each group of two-mode squeezed states. In practice, this would likely involve the use of wavelength or polarization-dependent couplers to direct signal and idler photons to different tunable beamsplitter meshes. Then, thanks to the singular value decomposition, the output photon number distribution can sample from distributions given by permanents of arbitrary matrices, enabling the classically-intractable regime to be reached using fewer photons (or in this case less squeezing).


Monday, September 20, 2021

Variational quantum algorithms are hard to train


Variational quantum algorithms

Variational quantum algorithms are an approach to solving hard optimization problems using quantum computers. They encode the cost function describing the problem to be solved as the expectation value of a quantum Hamiltonian. A parameterized quantum circuit generates a trial solution to the problem, whose cost function is measured. Then, a classical computer trains the circuit parameters to reduce the cost function, in order to find the optimal (lowest cost) solution to the problem.

Schematic illustration of variational quantum algorithms
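A minimal single-qubit cartoon of this loop (my own toy; on hardware, the cost would be estimated from repeated circuit measurements rather than computed analytically):

```python
import numpy as np

# "Quantum" step: the cost <Z> for the ansatz RY(theta)|0>.
def cost(theta):
    return np.cos(theta)    # <0|RY(theta)^dag Z RY(theta)|0> = cos(theta)

# Classical step: update theta using parameter-shift gradients.
theta, lr = 0.3, 0.4
for _ in range(50):
    # Parameter-shift rule: exact gradient from two shifted circuit runs.
    grad = (cost(theta + np.pi / 2) - cost(theta - np.pi / 2)) / 2
    theta -= lr * grad
print("final cost:", cost(theta))   # approaches -1, the minimum of <Z>
```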

Variational quantum algorithms are attracting a lot of interest owing to their ability to be run on near-term quantum processors. In particular, they have much lower circuit depths compared to algorithms designed for fault-tolerant quantum computers, and the iterative optimisation of the circuit parameters enables a robustness to errors in the individual quantum gates. However, there is no guarantee they will produce a good solution to the problem at hand, and in many cases there exist classical heuristic algorithms with provably better performance. In some cases, the problem encoding leads to a cost function described by a strongly-interacting or non-local quantum Hamiltonian for which finding the ground state (optimal solution) is provably hard to solve, even for a fault-tolerant quantum computer.

Training Variational Quantum Algorithms Is NP-Hard

An article just published in Physical Review Letters proves that, independent of the details of the quantum Hamiltonian encoding the problem, the classical optimisation part of variational quantum algorithms is NP-Hard to solve, meaning that the training step in which the quantum circuit parameters are adjusted exhibits many sub-optimal local minima which can trap commonly-employed gradient-based training methods. To show this, the authors considered a continuous version of the NP-Hard MaxCut problem that treats the quantum computer as an oracle that can efficiently generate and return expectation values of parameterised quantum states. Considering the simplest case where the problem Hamiltonian is diagonal in the computational basis is already sufficient to prove that variationally finding the optimal parameters for the quantum circuit is NP-Hard. Moreover, the optimisation is NP-Hard even for small quantum computers with logarithmically many qubits and non-interacting Hamiltonians; the difficulty does not rely on the hardness of the quantum ground state problem.

Perspectives

While this is definitely a sobering result for the field, rigorous results such as this are important in establishing the capabilities and limits of algorithms for near-term quantum computers and identifying possible methods to improve their performance. In this case, the hardness of the classical optimisation problem rests on the existence of many sub-optimal local minima; therefore developing heuristic strategies for generating good initial guesses that are sufficiently close to good solutions should be a priority. Another promising point the authors raise is that this hardness result is similar to the hardness of the Hartree-Fock method routinely employed in quantum chemistry; even though this related problem is NP-Hard, it is still useful as a starting point for many practical quantum chemistry calculations. We can hope to draw inspiration from methods to obtain good initial guesses that quantum chemists have spent decades perfecting.

Questions to ask before trying to solve your hard classical optimisation problem on cloud quantum computers

  • Do my problem encoding and parameterized circuit design avoid the barren plateau problem?
  • Do I have a method to choose a reasonable initial guess for my circuit parameters, e.g. using perturbation theory or a mean field solution? Random initial guesses are highly unlikely to converge.
  • How many qubits do I need to solve problem instances that are too large for existing classical solvers? Can I use a qubit-efficient problem encoding?