Friday, August 6, 2021

A setback for quantum machine learning?

Previously I wrote about how the input and output of data are bottlenecks for quantum machine learning. The measurement problem of efficiently reading out results is more of an issue for NISQ quantum machine learning algorithms, such as variational methods. The efficient input of large datasets is a problem even for algorithms designed for eventual fault-tolerant quantum computers, which typically assume the data can be quickly queried in quantum superposition, i.e. via quantum RAM (QRAM).

A paper by Ewin Tang, recently published in Physical Review Letters, rigorously shows that caution is needed when analyzing potential speedups of quantum machine learning algorithms. Using a model of classical data access that is comparable to QRAM (based on classical sampling of data entries), it shows that quantum principal component analysis and clustering deliver only a polynomial speedup over classical algorithms with analogous access to the data. Unfortunately, polynomial speedups are unlikely to be useful in practice because of the massive overhead of quantum error correction.
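To get a feel for the classical access model, here is a minimal sketch of the kind of "sample and query" data structure Tang's line of work assumes: you can read any individual entry of the data matrix, and you can draw row indices with probability proportional to the squared row norms (length-squared, or l2, sampling). The variable names and toy data are my own illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data matrix standing in for a large classical dataset.
A = rng.standard_normal((1000, 50))

# Squared l2 norm of each row, ||A_i||^2.
row_norms_sq = np.sum(A**2, axis=1)

# Length-squared sampling distribution over rows: rows with larger norm
# are sampled more often. This mimics, classically, the kind of access a
# QRAM-prepared quantum state would give to the data.
row_probs = row_norms_sq / row_norms_sq.sum()

# Draw a few row indices from that distribution; quantum-inspired
# algorithms build low-rank approximations from such samples.
idx = rng.choice(A.shape[0], size=5, p=row_probs)
sampled_rows = A[idx]

# "Query" access: any single entry can also be read directly.
entry = A[3, 7]
```

The point of the comparison is fairness: if the quantum algorithm is allowed a QRAM that prepares states encoding the data, the classical competitor should be allowed this sampling structure, and under that matching assumption the exponential gap for PCA and clustering disappears.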

Interestingly, this work first appeared on arXiv in 2018 and is a follow-up to the author's undergraduate dissertation. Inspiring!
