Friday, July 15, 2022

Quantum error correction in practice: it's really really hard

Suppressing quantum errors by scaling a surface code logical qubit

Today the Google Quantum AI team posted a preprint demonstrating surface code quantum error correction using their superconducting quantum processor, advertised by an impressive-sounding summary on social media:

Fresh on the arxiv, Quantum AI demonstrates lower quantum error by scaling a surface code logical qubit from distance-3 (17 qubits) to distance-5 (49 qubits).

"These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation."

In the paper, the "improved performance" is a reduction in the logical error rate per error-correction cycle from 3.0% with the distance-3 code to 2.9% with the distance-5 code, obtained after heroic efforts to improve the performance of their device (see Fig. 3c).
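
To put that improvement in perspective, here is a rough back-of-the-envelope extrapolation. This is my own arithmetic, not the paper's: it assumes the standard surface-code scaling in which the per-cycle logical error rate shrinks by a constant factor Lambda for every increase of 2 in code distance, and the 1e-6-per-cycle target is an arbitrary illustrative choice.

```python
import math

# Per-cycle logical error rates quoted above.
eps_d3 = 0.030   # distance-3 code
eps_d5 = 0.029   # distance-5 code

# Assumed suppression model: eps_d ~ eps_3 / Lambda ** ((d - 3) / 2)
Lambda = eps_d3 / eps_d5
print(f"error-suppression factor Lambda ~ {Lambda:.3f}")

# Illustrative target: 1e-6 logical error per cycle (my own arbitrary choice).
target = 1e-6
steps = math.log(eps_d3 / target) / math.log(Lambda)   # number of +2 steps in distance
d_needed = 3 + 2 * steps
qubits = 2 * d_needed**2 - 1                            # 2*d^2 - 1 qubits per logical qubit

print(f"code distance needed ~ {d_needed:.0f}")
print(f"physical qubits per logical qubit ~ {qubits:,.0f}")
```

With Lambda barely above 1, the extrapolated code distance and qubit count per logical qubit come out astronomical; hence the title of this post.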

Part of that heroic effort: setting the qubit and gate parameters (e.g. qubit frequencies and drive-pulse parameters) for the physical qubits is an incredibly complicated optimization problem:

It is noisy, non-convex, and all parameters are explicitly or implicitly intertwined due to engineered interactions and/or crosstalk. Furthermore, since each parameter is constrained to ∼100 values by the control electronics, processor circuit, and gate parameters, the search-space is ∼10^552. This space is intractable to search exhaustively and traditional global optimizers do not perform well on the objective. Therefore, we invented the Snake optimizer to address it.
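
A sanity check on those numbers (again my own arithmetic, with a made-up evaluation rate purely for scale): ~100 values per parameter and a ~10^552 search space imply roughly 276 coupled calibration parameters, and brute force is off the table by hundreds of orders of magnitude.

```python
import math

# Numbers quoted in the excerpt above.
log10_search_space = 552        # search space ~ 10**552
values_per_parameter = 100      # ~100 allowed values per parameter

# Implied number of coupled calibration parameters: 100**n = 10**552  =>  n = 276
n_params = log10_search_space / math.log10(values_per_parameter)
print(f"implied number of parameters ~ {n_params:.0f}")

# Exhaustive search at a wildly optimistic 10**12 evaluations per second
# (an invented rate, just to show the scale):
log10_rate = 12
log10_seconds_per_year = math.log10(3600 * 24 * 365)
log10_years = log10_search_space - log10_rate - log10_seconds_per_year
print(f"exhaustive search would take ~10**{log10_years:.0f} years")
```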

The paper notes:
