Tuesday, May 3, 2022

Resource-efficient quantum machine learning using matrix product states

Practical applications of noisy intermediate-scale quantum (NISQ) processors will require algorithms that combine qubit efficiency and gate efficiency with quantum speedups. This is especially challenging because qubit-efficient algorithms typically rely on highly entangled states that require deep circuits to generate and manipulate.

For example, while optimally qubit-efficient encodings require only log(N) qubits, where N is the input data size, the encoding circuit depth will be polynomial in N. This is a problem because NISQ processors are practically limited to shallow circuits, with depths at most linear in N. At the other extreme, classical data can be encoded as an N-qubit product state using single-qubit rotations, but this kind of entanglement-free encoding will not outperform a classical computer. What is needed for practical applications is a way to control the level of compression, so that one can make full use of the available qubit count and circuit depth.
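To make the two extremes concrete, here is a toy back-of-the-envelope comparison of the resources each encoding would use for a length-N input. The scalings are the ones quoted above (amplitude encoding: log2(N) qubits with depth polynomial in N, here taken as ~N for illustration; product-state encoding: N qubits with a single rotation layer); the function name and exact depth model are illustrative assumptions, not from the preprint.

```python
import math

def encoding_resources(n_data: int):
    """Toy resource estimates for the two extremes of classical-data encoding.

    Assumed scalings, for illustration only:
      - amplitude encoding: ceil(log2(N)) qubits, depth ~ N (polynomial in N)
      - product-state encoding: N qubits, depth 1 (one single-qubit rotation layer)
    """
    amplitude = {"qubits": math.ceil(math.log2(n_data)), "depth": n_data}
    product = {"qubits": n_data, "depth": 1}
    return amplitude, product

amp, prod = encoding_resources(1024)
# For N = 1024: amplitude encoding uses 10 qubits but a deep circuit,
# while the product-state encoding uses 1024 qubits at depth 1.
```

Neither extreme suits NISQ hardware, which is exactly the gap the matrix product state encoding below is designed to fill.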

A new preprint posted to arXiv shows how matrix product state-based data encodings allow one to control the trade-off between qubit and gate resources: Data compression for quantum machine learning. Specifically, the bond dimension $\chi$ of a matrix product state controls its degree of entanglement; data can be encoded into a matrix product state using a circuit depth of $O(\mathrm{poly}(\chi)\log N)$. Qubit compression is achieved by segmenting the data into a product state of patches, where each patch is described by a matrix product state. Larger patches enable stronger compression, at the cost of requiring a larger bond dimension to accurately encode the input data.
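The role of the bond dimension $\chi$ can be illustrated with the standard construction that decomposes a state vector into a matrix product state by successive truncated singular value decompositions. This is a generic textbook sketch, not the authors' implementation: truncating each SVD to at most $\chi$ singular values caps the entanglement the MPS can represent, and the reconstruction becomes lossy once $\chi$ is smaller than the entanglement of the data.

```python
import numpy as np

def vector_to_mps(v, chi):
    """Decompose a normalized length-2^n vector into an MPS with bond
    dimension <= chi, via successive truncated SVDs (standard construction)."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    n = int(np.log2(v.size))
    tensors, rest, bond = [], v.reshape(1, -1), 1
    for _ in range(n - 1):
        # Split off one physical (qubit) index and truncate the bond.
        m = rest.reshape(bond * 2, -1)
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        k = min(chi, s.size)
        tensors.append(u[:, :k].reshape(bond, 2, k))
        rest = s[:k, None] * vh[:k]
        bond = k
    tensors.append(rest.reshape(bond, 2, 1))
    return tensors

def mps_to_vector(tensors):
    """Contract the MPS back into a dense state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)
```

With $\chi$ large enough (at most $2^{n/2}$ for $n$ qubits) the reconstruction is exact; shrinking $\chi$ trades fidelity for the shallower encoding circuits described above.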

As an application of the approach, the authors combine their matrix product state data encoding with a matrix product state-based quantum classification algorithm. Using shallow hardware-efficient circuits, the authors demonstrate modest accuracy at classifying images from the Fashion-MNIST dataset. The matrix product state classifier does, however, require variational optimization of the gate parameters, taking on the order of 100 gradient-free optimizer iterations to converge for the small problem sizes considered. This will increase the cost of running the classifier on real quantum hardware. It would be timely to study the combination of matrix product state-based data encodings with non-variational shallow quantum algorithms, such as quantum kernel methods.
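To show why kernel methods sidestep the variational-optimization overhead, here is a minimal classically simulated sketch of a quantum kernel with no trainable gate parameters. It uses a simple entanglement-free angle encoding (chosen here for brevity; the preprint's MPS encoding could be substituted) and evaluates the fidelity kernel $|\langle\phi(a)|\phi(b)\rangle|^2$, which on hardware would be estimated from measurements rather than optimized iteratively. Function names are illustrative assumptions.

```python
import numpy as np

def angle_encode(x):
    """Product-state encoding: each feature x_i sets one qubit's rotation angle.
    Returns the full state vector (classical simulation, illustrative only)."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])
        state = np.kron(state, qubit)
    return state

def fidelity_kernel(a, b):
    """Quantum kernel entry |<phi(a)|phi(b)>|^2 -- no variational parameters,
    so no optimizer loop is needed; a classical SVM consumes the kernel matrix."""
    return float(np.dot(angle_encode(a), angle_encode(b)) ** 2)
```

The kernel matrix over a training set can then be fed to an off-the-shelf classical support vector machine, replacing the ~100 optimizer iterations with a fixed number of kernel-entry estimations.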
