Tuesday, September 14, 2021

Fundamental limitations of quantum error mitigation

Near-term quantum processors are too small to run quantum error correction. Noise is inevitable: each gate executed is accompanied by small errors which accumulate with the circuit depth, and as more qubits and gates are used the overall error grows exponentially, quickly rendering the calculation result meaningless. For this reason, there is growing interest in quantum error mitigation schemes, which aim to minimise the influence of errors without requiring extra qubits. Some examples:

Zero-noise extrapolation, which scales the duration of the quantum gates comprising the circuit in order to controllably increase the level of noise. By repeating the circuit at several noise levels, one can extrapolate back to an estimate of the circuit output in the zero-noise limit (a minimal sketch of this extrapolation follows the list).

State purification, which assumes that the output of the circuit is close to a pure state. One performs tomography on the (mixed) output state and finds the closest pure state, which provides an estimate of the circuit output in the absence of noise (see the second sketch after the list).
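
To make the first idea concrete, here is a minimal sketch of zero-noise extrapolation. The function run_circuit is a hypothetical stand-in for executing a circuit with its gate durations stretched by a noise scale factor; the noise model and numbers are illustrative, not taken from any particular experiment.

import numpy as np

def run_circuit(noise_scale: float) -> float:
    # Hypothetical stand-in for running the circuit with gate durations
    # stretched by `noise_scale`; returns a noisy expectation value.
    ideal = 1.0  # assumed noiseless expectation value (illustrative)
    return ideal * np.exp(-0.3 * noise_scale) + np.random.normal(0, 0.01)

# Run the same circuit at several amplified noise levels.
scales = np.array([1.0, 1.5, 2.0, 3.0])
values = np.array([run_circuit(s) for s in scales])

# Fit a low-order polynomial in the noise scale and evaluate it at zero
# to estimate the zero-noise expectation value (Richardson-style extrapolation).
coeffs = np.polyfit(scales, values, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Extrapolated zero-noise value: {zero_noise_estimate:.3f}")
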
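And a sketch of the purification idea: given a density matrix reconstructed by tomography, the closest pure state (in fidelity) is the eigenvector with the largest eigenvalue. The single-qubit state and depolarising strength below are assumptions chosen purely for illustration.

import numpy as np

# Hypothetical tomography result: a slightly depolarised version of |+>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_ideal = np.outer(plus, plus.conj())
p = 0.1  # assumed depolarising strength (illustrative)
rho = (1 - p) * rho_ideal + p * np.eye(2) / 2

# The closest pure state is the dominant eigenvector of rho; use it to
# estimate noiseless expectation values.
eigvals, eigvecs = np.linalg.eigh(rho)
psi = eigvecs[:, np.argmax(eigvals)]

X = np.array([[0.0, 1.0], [1.0, 0.0]])
print("noisy    <X> =", np.real(np.trace(rho @ X)))
print("purified <X> =", np.real(psi.conj() @ X @ psi))
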

Quantum error mitigation techniques typically incur an overhead of additional required measurements. A recent arXiv preprint analyzes a generic model of error mitigation schemes, showing that this measurement overhead grows exponentially with the circuit depth. Moreover, within this model the technique of probabilistic error cancellation is optimal; this method requires an accurate model of the dominant sources of noise. The sketch below illustrates how quickly such an exponential overhead becomes prohibitive.
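
The numbers here are illustrative and not taken from the preprint: for probabilistic error cancellation, a per-layer quasi-probability cost factor gamma greater than one multiplies up across the circuit, and the number of shots needed for a target precision eps scales with the square of the total cost.

import numpy as np

gamma = 1.1   # hypothetical sampling-cost factor per circuit layer, gamma > 1
eps = 0.01    # desired additive precision on the expectation value

for depth in [10, 50, 100, 200]:
    # The total quasi-probability norm grows multiplicatively with depth,
    # and the required shot count scales as its square over eps^2.
    total_norm = gamma ** depth
    shots = (total_norm / eps) ** 2
    print(f"depth {depth:4d}: ~{shots:.2e} measurements")

Even for this modest per-layer cost, the required number of measurements grows by many orders of magnitude as the depth increases, which is the sense in which the overhead becomes prohibitive.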

This result poses an interesting question. As the size and complexity of noisy intermediate-scale quantum (NISQ) devices increase, will useful applications emerge? Or will we instead encounter a desert in which error mitigation techniques become too time-consuming, while the number of qubits is still too low to run full quantum error correction?
