Friday, October 14, 2022

Thresholds for quantum advantage for topological data analysis

More quantum topological data analysis (QTDA) preprints which I missed while I was on holiday:

Quantifying quantum advantage in topological data analysis

In contrast to the recent works by IBM focused on QTDA algorithms for near-term quantum processors without quantum error correction, this work presents improvements to the original QTDA algorithms for fault-tolerant quantum computers.

It is conjectured that QTDA may provide an exponential speedup compared to the best possible classical TDA algorithms. Establishing such a speedup rigorously is challenging, however, in part because the run-time of QTDA depends on the spectral properties of the graph Laplacian operator: one needs to estimate the fraction of zero (or approximately zero) eigenvalues of the graph Laplacian, which is hard if the gap between its zero and nonzero eigenvalues is small. The authors give examples of families of graphs exhibiting both large Betti numbers and large gaps, suggesting that a useful quantum speedup is in principle possible for certain problems.
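For intuition, here is a minimal numpy sketch (my own illustration, not taken from the paper) of the connection at play: the k-th Betti number equals the dimension of the kernel of the k-th combinatorial Laplacian, so estimating Betti numbers amounts to counting (near-)zero eigenvalues. The example complex is a hollow triangle, topologically a circle, with one connected component and one loop:

```python
import numpy as np

# Hollow triangle: 3 vertices, 3 oriented edges (0->1), (1->2), (0->2),
# and no 2-simplices.  Columns of d1 are edge boundaries.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])

lap0 = d1 @ d1.T   # 0-th combinatorial Laplacian (the graph Laplacian)
lap1 = d1.T @ d1   # 1st combinatorial Laplacian (no d2 term here)

def betti(lap, tol=1e-9):
    """Betti number = dimension of the Laplacian's kernel."""
    return int(np.sum(np.linalg.eigvalsh(lap) < tol))

print(betti(lap0), betti(lap1))  # -> 1 1: one component, one loop
```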

On the other hand, the authors also propose a "dequantized" randomized classical algorithm for estimating the fraction of zero eigenvalues using imaginary time propagation generated by the graph Laplacian. The scaling of their classical algorithm is polynomial for certain classes of graphs. This means that simple arguments for an exponential quantum advantage based on the exponential scaling of the size of the graph Laplacian are insufficient.
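The classical algorithm in the paper is more sophisticated, but the core trick can be sketched in a few lines: the fraction of zero eigenvalues of a Laplacian L equals the limit of Tr exp(−βL)/dim as β → ∞, and that trace can be estimated stochastically without ever diagonalizing L. Here is a toy version using a Hutchinson-style trace estimator; the function name, parameters, and the use of scipy's expm_multiply are my own choices, not the authors':

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply

def zero_eigen_fraction(lap, beta=50.0, n_samples=200, seed=0):
    """Estimate the fraction of near-zero eigenvalues of `lap` by
    stochastic trace estimation of the imaginary-time propagator:
        Tr exp(-beta * L) / dim  ->  dim ker(L) / dim   as beta -> inf.
    """
    rng = np.random.default_rng(seed)
    dim = lap.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)       # Rademacher probe vector
        total += z @ expm_multiply(-beta * lap, z)  # z^T exp(-beta L) z
    return total / (n_samples * dim)
```

Applied to lap1 from the toy example above (eigenvalues 0, 3, 3), this returns roughly 1/3, i.e. one zero mode out of three, recovering the loop count.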

The authors conclude that "while the exact threshold for quantum advantage depends on constant factors in the classical algorithms, it seems likely that this application will fall somewhere in between quantum chemistry applications and Shor’s algorithm in terms of the resources required for quantum advantage."

Another recent work by Schmidhuber and Lloyd, Complexity-Theoretic Limitations on Quantum Algorithms for Topological Data Analysis, tackles the question of quantum advantage for TDA using computational complexity theory. They argue that computing Betti numbers exactly is #P-hard and that even estimating them is NP-hard, implying that QTDA will exhibit exponential scaling for (almost all) inputs and enable at best a polynomial speedup compared to the best classical algorithms. They speculate that an exponential speedup may still be possible if simplices are given as the input data rather than points or vertices.

A few thoughts:

  • One approach used by existing classical TDA libraries to study large, intractable datasets is randomized sampling of the input data: the Betti numbers of random subsets of the data are computed exactly (a rough sketch of this follows the list). I wonder how this compares to the dequantized TDA algorithm, which (approximately) computes the Betti numbers of the full dataset using Monte Carlo sampling.
  • In many practical applications one does not need just the Betti numbers of a single graph or simplicial complex; rather, one studies the persistent homology of families of graphs or simplicial complexes. An important question is whether the approach here (and also the linear-depth algorithms developed by IBM) can be generalized to quantum algorithms for computing persistent homology.
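For concreteness, here is a rough sketch of the subsampling approach mentioned in the first bullet, using the ripser.py library; the function name, the fixed-scale Betti count, and all parameters are my own illustrative choices:

```python
import numpy as np
from ripser import ripser  # pip install ripser

def subsampled_betti(points, k, scale, subset_size, n_trials=20, seed=0):
    """Estimate the k-th Betti number at a fixed Vietoris-Rips scale by
    computing it exactly on random subsets of the point cloud."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_trials):
        idx = rng.choice(len(points), size=subset_size, replace=False)
        dgm = ripser(points[idx], maxdim=k)["dgms"][k]
        # A k-dimensional feature alive at `scale` (born at or before it,
        # dying after it) contributes to Betti_k at that scale.
        counts.append(int(np.sum((dgm[:, 0] <= scale) & (dgm[:, 1] > scale))))
    return float(np.mean(counts))
```

Note that the average over subsets does not in general converge to the Betti number of the full dataset, since subsampling can both destroy and create features, which is part of what makes the comparison with the dequantized algorithm interesting.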

I am looking forward to reading these papers more carefully over the coming weeks.
