Tuesday, May 17, 2022

Topological data analysis using noisy intermediate-scale quantum processors

Topological data analysis (TDA) is an interesting potential application of future quantum computers. One step of the TDA pipeline is to find the null space of the combinatorial Laplacian of a simplicial complex, a problem whose cost grows exponentially with the simplex dimension k; the dimension of that null space is the k-th Betti number. Remarkably, the combinatorial Laplacian has a structure that makes it efficient to implement using quantum circuits, which enabled Lloyd and collaborators to propose a quantum algorithm for topological data analysis back in 2016: Quantum algorithms for topological and geometric analysis of data. However, being based on quantum phase estimation, this algorithm is not suitable for the noisy intermediate-scale quantum (NISQ) processors currently available, apart from proof-of-concept experiments.
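To make that step concrete, here is a minimal classical sketch in plain NumPy of the quantity the quantum algorithm targets: the k-th Betti number equals the dimension of the null space of the k-th combinatorial Laplacian Delta_k = d_k^T d_k + d_{k+1} d_{k+1}^T, where d_k is the k-th boundary matrix. The hollow-triangle example and the helper function are my own illustration, not taken from any of the papers.

```python
import numpy as np

# Boundary matrix d1 of a hollow triangle: rows are the vertices {0, 1, 2},
# columns are the oriented edges (0,1), (0,2), (1,2); each column has -1 at the
# edge's start vertex and +1 at its end vertex.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)

# The triangle is hollow (no 2-simplex), so d2 is the zero map into the edge space;
# d0 is the zero map out of the vertex space.
d2 = np.zeros((3, 0))
d0 = np.zeros((0, 3))

def betti(dk, dk_plus_1, tol=1e-10):
    """Betti number = nullity of the combinatorial Laplacian dk^T dk + dk+1 dk+1^T."""
    laplacian = dk.T @ dk + dk_plus_1 @ dk_plus_1.T
    eigenvalues = np.linalg.eigvalsh(laplacian)
    return int(np.sum(np.abs(eigenvalues) < tol))

print(betti(d0, d1))  # beta_0 = 1: one connected component
print(betti(d1, d2))  # beta_1 = 1: one loop
```

The classical bottleneck is that a complex built on n data points can have up to C(n, k+1) k-simplices, so the Laplacian itself becomes exponentially large; the quantum algorithm encodes the simplices compactly instead of ever writing this matrix out.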

Last year, Ubaru and collaborators proposed a NISQ-friendly TDA algorithm. What is very nice about this algorithm is that its depth and qubit requirements are linear in the number of input data points n, it potentially offers an exponential speed-up over the best known classical algorithm, and it is not variational. In my opinion, variational quantum algorithms for NISQ devices will have serious issues scaling up to useful (i.e. classically intractable) system sizes, so I find this NISQ-TDA algorithm quite exciting.

The NISQ-TDA algorithm uses a few neat tricks: the quantum phase estimation subroutine (not suitable for NISQ devices) is replaced with random sampling of the moments of the combinatorial Laplacian, which yields an estimate of the dimension of its null space with a circuit depth linear in the number of input data points n. Additionally, intermediate measurements and resets of ancilla qubits reduce the number of ancillas from O(n^2) to O(n). In subsequent work, the group showed that the boundary operator used to construct the Laplacian can be implemented exactly using 2(n-1) two-qubit rotations plus a single-qubit rotation.
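As a rough classical analogue of that first trick, and only of the idea (the actual filter and sampling scheme in the paper are more refined), one can estimate the dimension of the null space from randomly sampled moments z^T L^p z without ever diagonalizing L: after rescaling the spectrum into [0, 1], the matrix power (I - L)^p acts approximately as a projector onto the null space, and its trace can be sampled with random +/-1 vectors (Hutchinson's estimator). The function below is my own toy version, not the paper's procedure.

```python
import numpy as np

def estimate_nullity(laplacian, power=40, num_samples=1000, seed=0):
    """Estimate dim ker(L) from stochastic moment samples z^T M^p z.

    After rescaling so the spectrum of L lies in [0, 1], M = I - L has eigenvalue 1
    exactly on the kernel of L and eigenvalues below 1 elsewhere, so tr(M^power)
    approaches the nullity as the power grows (provided L has a decent spectral gap).
    The trace is estimated with Hutchinson's estimator using random +/-1 vectors,
    so only matrix-vector products (i.e. sampled moments) are ever needed.
    """
    n = laplacian.shape[0]
    scale = np.linalg.norm(laplacian, ord=2)   # largest eigenvalue (L is symmetric PSD)
    M = np.eye(n) - laplacian / scale
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
        v = z
        for _ in range(power):                 # v = M^power z via repeated matvecs
            v = M @ v
        total += z @ v                         # one Hutchinson sample of tr(M^power)
    return total / num_samples

# Edge Laplacian of the hollow triangle from the previous snippet: nullity = beta_1 = 1.
d1 = np.array([[-1, -1, 0], [1, 0, -1], [0, 1, 1]], dtype=float)
print(estimate_nullity(d1.T @ d1))  # close to 1
```

The point of the quantum version is that these moments are obtained from measurements on circuits of depth linear in n acting on a compact encoding of the simplices, rather than from explicit powers of an exponentially large matrix.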

QTDA is one of the few quantum machine learning algorithms that offer an exponential speedup without requiring QRAM tricks, which suggests its speedup will be robust to dequantization, i.e. to classical algorithms that match the quantum performance once the data-encoding bottleneck is taken into account.

Still, there are a few limitations. The original algorithm by Lloyd and collaborators only computes the Betti numbers of a single simplicial complex. In practice, however, TDA generally requires the persistent Betti numbers, computed over a range of scales that yields a nested family of simplicial complexes; a classical baseline computation is sketched below. Recent studies [arXiv:2111.00433, arXiv:2202.12965] have proposed quantum algorithms for computing persistent Betti numbers, but these are only suitable for fault-tolerant quantum computers. It will be interesting to see whether they can be generalized into NISQ-friendly algorithms. This is definitely a subject to keep an eye on.
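For that classical baseline, here is a short sketch using the GUDHI library (a standard classical TDA package, assumed installed via pip install gudhi; the point cloud and scale values are my own choices). It builds a Vietoris-Rips filtration and counts the homology classes already present at one scale that survive to a larger one, which is precisely the quantity a NISQ-friendly persistent-Betti algorithm would need to estimate.

```python
import numpy as np
import gudhi  # classical TDA library, assumed installed via pip install gudhi

# 60 points on the unit circle: one connected component and one hole.
theta = np.linspace(0.0, 2.0 * np.pi, num=60, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta)])

# Vietoris-Rips filtration: a nested family of simplicial complexes, one per scale.
rips = gudhi.RipsComplex(points=points.tolist(), max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
simplex_tree.persistence()  # compute the persistence diagram first

# Persistent Betti numbers: classes already present at scale 0.3 that survive to scale 1.0.
# Expected: one component (dimension 0) and one loop (dimension 1).
print(simplex_tree.persistent_betti_numbers(from_value=0.3, to_value=1.0))
```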
