Friday, September 3, 2021

Error mitigation strategies for noisy quantum processors

This week two arXiv preprints on error mitigation techniques for quantum processors appeared:

Scalable mitigation of measurement errors on quantum computers, by researchers at IBM Quantum. 

All quantum processors suffer from measurement errors: the probability of measuring a bitstring n is not simply the modulus squared of the quantum state amplitude, |𝛹(n)|^2. Without correcting for measurement errors, the result returned by the quantum processor will be corrupted.

The IBM Qiskit documentation describes a simple calibration procedure for estimating measurement errors: prepare a bitstring and then repeatedly measure it to obtain a distribution of output bitstrings. Repeating for all basis states allows one to construct a calibration matrix A that can be inverted to transform error-biased measurement probability distributions to the ideal error-free measurement probability distribution.
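The procedure can be sketched in a few lines of numpy. This is an illustrative toy, not the actual Qiskit API; `calibration_matrix` and `mitigate` are made-up names, and `counts` is assumed to be a list where `counts[j]` maps observed bitstrings to counts when basis state j was prepared:

```python
import numpy as np

def calibration_matrix(counts, num_qubits):
    """Estimate A, where A[i, j] = P(measure bitstring i | prepared bitstring j)."""
    dim = 2 ** num_qubits
    A = np.zeros((dim, dim))
    for prepared, observed in enumerate(counts):
        shots = sum(observed.values())
        for bitstring, n in observed.items():
            A[int(bitstring, 2), prepared] = n / shots
    return A

def mitigate(A, noisy_probs):
    """Solve A @ p_ideal = p_noisy for the error-free distribution."""
    return np.linalg.solve(A, noisy_probs)
```

Each column of A is the measured output distribution for one prepared basis state, so the columns sum to one; applying the inverse of A maps a noisy measured distribution back toward the ideal one.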

For an N-qubit system, A is a 2^N ✕ 2^N matrix; the size of the calibration matrix grows exponentially with the number of qubits, so it quickly becomes infeasible to perform a calibration of every individual bitstring. 

Luckily, in many cases the errors at different qubits are uncorrelated to a good approximation, which enables efficient construction of A using O(N) calibration bitstrings. Moreover, only a small additional overhead is required to account for short-range correlations in the measurement errors.
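Under the uncorrelated-error assumption, the full calibration matrix factorizes as a tensor product of 2✕2 single-qubit matrices, each obtained from a cheap per-qubit calibration. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def tensored_calibration_matrix(single_qubit_mats):
    """Build the full 2^N x 2^N calibration matrix as a Kronecker
    product of per-qubit 2x2 calibration matrices, assuming the
    readout errors on different qubits are uncorrelated."""
    A = np.array([[1.0]])
    for a in single_qubit_mats:
        A = np.kron(A, a)
    return A
```

Each 2✕2 factor needs only two calibration preparations (|0⟩ and |1⟩ on that qubit), so the number of calibration circuits scales linearly in N rather than exponentially.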

Even given A, the need to estimate 2^N probabilities and then solve a linear system of 2^N equations to obtain the ideal distribution means this standard calibration approach cannot be scaled up to large quantum processors. The preprint by Nation et al. proposes a solution to this problem.

Their approach exploits the fact that measurement errors are relatively small for state-of-the-art quantum processors (a few percent for superconducting quantum circuits, and only 0.4% for IonQ's trapped-ion system). Therefore the ideal distribution can be considered as a weak perturbation to the noisy measured distribution, allowing one to restrict attention to the elements of A contained within the subspace of sampled bitstrings. 

Current cloud quantum processors typically allow a few thousand measurements per circuit, which provides an upper bound on the size of the sampled subspace. The authors propose either directly inverting A within this subspace, or (for larger subspace sizes) using matrix-free iterative methods to estimate the ideal measurement distribution.
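The subspace idea can be sketched as follows. This is a simplified toy, not the authors' implementation: the matrix is restricted to the observed bitstrings, its entries are generated on the fly from per-qubit calibration data (so A is never stored), and because the errors are small (A ≈ I) a simple fixed-point iteration stands in for the matrix-free solvers the preprint actually uses:

```python
import numpy as np

def mitigate_in_subspace(single_qubit_mats, counts, iters=50):
    """Estimate the ideal distribution over the observed bitstrings only,
    assuming uncorrelated per-qubit readout errors (illustrative sketch)."""
    shots = sum(counts.values())
    bitstrings = sorted(counts)
    p_noisy = np.array([counts[b] / shots for b in bitstrings])

    def A_entry(row, col):
        # P(observe `row` | ideal outcome `col`) as a product of
        # single-qubit transition probabilities.
        p = 1.0
        for q, a in enumerate(single_qubit_mats):
            p *= a[int(row[q]), int(col[q])]
        return p

    def matvec(x):
        # Apply the subspace-restricted A without ever forming it.
        return np.array([
            sum(A_entry(r, c) * x[j] for j, c in enumerate(bitstrings))
            for r in bitstrings
        ])

    # Since A ~ I for small measurement errors, the iteration
    # x <- x + (p_noisy - A x) converges to the solution of A x = p_noisy.
    x = p_noisy.copy()
    for _ in range(iters):
        x = x + (p_noisy - matvec(x))
    return dict(zip(bitstrings, x))
```

The cost per iteration scales with the square of the number of sampled bitstrings (bounded by the shot count) rather than with 2^N, which is what makes the scheme viable for large processors.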

In tests of this improved error mitigation scheme, only a few seconds of calibration time are required for a 42-qubit quantum processor. Previous mitigation schemes were limited to about 10 qubits before the run time starts to blow up!

 

Can Error Mitigation Improve Trainability of Noisy Variational Quantum Algorithms?, by researchers at Los Alamos National Laboratory

This preprint appeared today, with more pessimistic results. Many large-scale variational quantum algorithms suffer from the barren plateau problem, which makes them exponentially hard to train. Unfortunately, noise in near-term quantum processors induces barren plateaus under fairly generic conditions. Here the authors study whether a variety of leading error mitigation schemes can be used to suppress or even eliminate the barren plateaus. Unfortunately the answer is no, and some schemes even make the barren plateau problem worse...
