
Wednesday, June 18, 2025

International Conference on Quantum Science & Technology (6-9th October, 2025) - call for abstracts

The main aim of the conference, to be held in Quy Nhon, Vietnam, is to develop links between physicists in Vietnam and those in France and around the world who are contributing to advances in quantum physics. The scientific programme features eminent invited speakers including Serge Haroche. The following themes are envisaged:

  • quantum optics, quantum communication and quantum computation
  • topics where condensed matter, atomic physics and chemical physics overlap
  • high precision experiments involving spectroscopy and metrology
  • cold atoms and simulation of materials
  • theory and methods in quantum mechanics
  • quantum high energy physics and cosmology
  • quantum technologies and energy production
The conference will also emphasize inter-generational exchange between the top-level invited senior physicists and young students, opening new scientific horizons to them. Tutorials will be given (half a day before the colloquium) to provide the foundations of the fields covered by the speakers. Time will be set aside for young PhDs and postdocs to present their work. Round tables will allow informal discussion of the presentations and help identify opportunities for cooperative scientific projects between Vietnamese and foreign laboratories.

For more details and registration information, please visit the conference website. The abstract submission and registration deadline is September 7th, 2025. Registration is free, but participants must cover their own travel and accommodation expenses.

Wednesday, June 11, 2025

What's next for applied quantum computing?

NISQ (noisy intermediate-scale quantum) algorithms generated a lot of excitement and a lot of publications - the 2022 review has amassed almost 2000 citations! Nowadays the tone is more subdued, with many experts believing that any useful practical applications of quantum processors will need quantum error correction. The new hot topics are understanding how to make useful error correction a reality, and what might be done with a few hundred logical qubits.

What then should a new student interested in applied quantum computing focus on?

Ryan Babbush and collaborators already argued in 2021 that algorithms with quadratic speedups won't be useful in practice. So sorry, but we won't be able to solve complex industry optimization problems using Grover search. However, their analysis indicated that quartic speedups and beyond could be practically useful. Which quantum algorithms have this property?

Consulting the excellent review article Quantum algorithms: A survey of applications and end-to-end complexities, one finds only a few examples of known or suspected quartic-or-beyond end-to-end quantum speedups! They are:

Tensor principal component analysis (PCA). Ordinary PCA is a data reduction step widely used in data analysis and machine learning. It's not yet clear what tensor PCA might be useful for, but if an application can be found, quantum computers will probably give a useful speedup.

Topological data analysis (TDA). This is another promising direction where a useful speedup for certain problems is possible. Following an initial buzz of excitement in 2022, it remains unclear whether there are practical applications where such a speedup would be useful. Recently-developed quantum-inspired classical algorithms will be useful for identifying potential use-cases for quantum TDA.

On the classical computing side, quantum-inspired tensor network methods are very promising for near-term applications.  

There are also other approaches (QAOA, quantum machine learning) which have attracted a lot of interest since 2020 and are still being explored theoretically, but at least in their present formulations they seem unable to provide a useful speedup for classical problems, with their most promising applications related to directly studying or simulating certain quantum systems. Thus, interest has shifted from "beating" classical methods on carefully-selected problems to better understanding the foundations of quantum machine learning. While this is a fascinating topic, at this stage it is more theoretical than applied research.

Wednesday, January 22, 2025

Michael Berry on the next century of quantum mechanics

Prof. Michael Berry talked about his work and the future of quantum mechanics in an interview during his recent visit to ICTS-TIFR for the ‘A Hundred Years of Quantum Mechanics’ program. Some excerpts:

Q: What is the status of the foundational questions in quantum mechanics now?

A: I have no idea, I don’t work on them. [...] Transport the question back to classical mechanics. Two points. Is Newton’s equation more fundamental than Hamilton’s? Philosophers could argue about it. In fact, Newton’s equations are more general, that’s another matter.

This refers to work by Berry and others on curl forces: position-dependent forces that cannot be written as the gradient of a potential. Curl forces have many peculiar properties - symmetries do not imply conservation laws, the dynamics are non-conservative yet non-dissipative, and in many cases they cannot be generated by a Hamiltonian. I first heard about this fascinating topic when Berry gave a colloquium at NTU in 2016. There has been quite a bit of work on this topic since then, including a recent generalization to quantum curl force dynamics.

Q: Do you have any advice for people who work in this field or who aspire to work in this field?

A: Yes. I have two contradictory pieces of advice for people who ask me for career advice.

The first piece of advice is: don’t take advice.

But, if pressed, I would say that if I were starting out, I would probably work on quantum information. Probably, though I can’t tell — this is what philosophers call counterfactual history. So I would say: work on quantum information. There are so many riches to be uncovered there to do with these big Hilbert spaces, even with a modest number of particles. So that’s what I would say.

For context, Berry's main contributions to physics relate to the "simple" case of linear wave equations and single particle quantum mechanics - well-established theories that nevertheless held numerous surprises and emergent behaviour in their singular limits and asymptotic phenomena. We've only scratched the surface when it comes to exploring these effects in complex many-body quantum systems.

The full text of the interview can be found here.

Monday, September 16, 2024

From classical to quantum HodgeRank

This is a rather late summary of a cool preprint I saw a few months ago: Quantum HodgeRank: Topology-Based Rank Aggregation on Quantum Computers 

This work is inspired by and builds on quantum subroutines developed for efficiently solving high-dimensional topological data analysis problems. By developing a quantum version of the classical HodgeRank algorithm, it promises superpolynomial speedups for ranking higher-order network data.

What is HodgeRank? It was originally proposed in 2011 as a better way of ranking incomplete or skewed datasets, for example based on user ratings or scores.

The basic idea is to apply an analogue of the Helmholtz decomposition (used routinely in electromagnetics) to graph data, enabling one to assign a ranking based on incomplete pairwise preferences. Importantly, HodgeRank outputs not just a raw ranking, but also an estimate of the quality of the ranking via the construction of local and global cycles present in the optimal ranking. To be specific, the returned optimal ranking is unique and fully consistent if the preference matrix can be written as the gradient of some scalar ranking function. If it cannot, then there are inevitable ambiguities present in the preference data due to the existence of global or local cycles. 

An example of a local ranking cycle is the following: B is preferred over A, C is preferred over B, and yet A is preferred over C. This leads to the ordering A < B < C < A, thus forming a cycle. It is better to identify cycles such as these and acknowledge that a traditional ranking does not make sense for these items. This is what HodgeRank does! User preference data is rarely consistent, so cycles such as these routinely occur in the wild, for example in user rankings of movies on online platforms such as Netflix.
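
To make the idea concrete, here is a minimal least-squares sketch of HodgeRank on exactly this three-item cycle, in plain numpy (my toy illustration, not the preprint's algorithm). The scores are the projection of the pairwise flows onto gradient flows; whatever is left over signals cyclic inconsistency:

```python
import numpy as np

# Toy HodgeRank-style least squares on the three-item cycle above.
# Edge (i, j) carries flow y > 0 meaning "j is preferred over i".
items = ["A", "B", "C"]
edges = [(0, 1), (1, 2), (2, 0)]     # A-B, B-C, C-A
y = np.array([1.0, 1.0, 1.0])        # B>A, C>B, A>C: a perfect cycle

# Graph gradient (incidence) matrix: (grad s)_e = s_j - s_i on edge e = (i, j)
d0 = np.zeros((len(edges), len(items)))
for e, (i, j) in enumerate(edges):
    d0[e, i], d0[e, j] = -1.0, 1.0

# Optimal scores = least-squares projection of the flow onto gradient flows
s, *_ = np.linalg.lstsq(d0, y, rcond=None)
residual = y - d0 @ s                # curl (+ harmonic) part = inconsistency

print("scores  :", dict(zip(items, np.round(s, 3))))
print("residual:", np.round(residual, 3))   # nonzero -> no consistent ranking
```

For a perfect cycle the best-fit scores are all equal and the residual carries the entire flow - HodgeRank's way of saying that no consistent ranking exists.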

As a generalization of HodgeRank, Quantum HodgeRank promises the ability to perform ranking tasks on preference data forming higher-order networks, avoiding the exponential scaling with network dimension faced by classical algorithms. The authors of the preprint argue that HodgeRank cannot be dequantized (i.e. implemented efficiently using a randomized classical algorithm) in the same manner as quantum TDA algorithms for the Betti number problem. Moreover, while applications of high-dimensional Betti numbers (and even their occurrence in real datasets) remain unclear, HodgeRank is a ranking problem that is more likely to have concrete applications. Thus, this looks like an exciting area to keep an eye on.

It is also interesting to speculate on whether (classical) HodgeRank or HodgeRank-inspired methods can be useful for understanding the behaviour of interacting many-body quantum systems, where it is typically intractable to sample all of the pairwise interaction elements of Hamiltonians as the system size increases, but incomplete or skewed sampling is readily available. Watch this space!

Wednesday, July 24, 2024

Part-time Associate Editor position in quantum science at Physical Review A

Physical Review A (PRA) is looking for a new part-time Associate Editor in the area of quantum science to join our team.

For more than 50 years, PRA has been publishing important developments in the rapidly evolving areas of AMO physics, quantum science, and related fundamental concepts. The journal is growing, and we are looking for someone working in the area of quantum science to join our team of editors on a part-time basis. The candidate would be expected to maintain their current academic position while serving as an editor for PRA.

The advertisement for the position can be found here, including further details about the expectations for this role, time commitment, how to apply, etc. We look forward to applications from qualified candidates. The deadline to apply is August 10th, 2024.
 
A few notes about the selection criteria:
 
(1) Current active involvement and stature in the relevant field of research. 
 
You should be publishing, and in good journals. Invited talks at or involvement in the organisation of reputable conferences, or awards, can also serve as evidence of active involvement and stature in the field. Within quantum science there is an enormous breadth of sub-topics ranging from foundations to applications, so someone with experience in a wider variety of topics is likely to be preferred over someone with narrower expertise. This is another reason why you should work on something a bit different after your PhD.
 
(2) An outstanding record as a referee and a demonstrated commitment to peer review. 
 
Serve as a good referee on papers when asked, obviously. If you haven't reviewed for APS, you can express your interest here. Write useful, constructive reports including suggestions on how the manuscript can be improved (even if you don't think it meets the standards of the journal). Return reports quickly and/or within the timeframe you promise. It's OK to decline if a paper is outside your expertise or you are too busy - we really appreciate fast responses. We don't have access to your referee record at other journals, so it is better to concentrate your refereeing service at the few publishers that you submit your own manuscripts to, rather than spreading your efforts across dozens of different publishers. Particularly in the case of for-profit publishers - if you don't publish with them, why should you volunteer your time for their benefit?
 
(3) The ability to work within the editorial team and the desire to maintain the quality and reputation of the journal. 
 
Evidence of desire to maintain the quality and reputation of the journal can include service as a good referee and submitting your own good papers to the journal.
 

Thursday, May 2, 2024

From NISQ to small logical quantum circuits

After six years of huge interest in NISQ (noisy intermediate-scale quantum) circuits there are still no practical applications where a noisy quantum device can outperform the best classical methods. Noise is too detrimental, and classical methods are too powerful. Experts continue to argue that now is not the time for commercial applications: quantum error correction, hundreds of logical qubits, and millions of error-corrected gates are needed.

Then what's next? Circuits of a moderate size with some limited error correction capabilities. LISQ (logical intermediate-scale quantum) or something else, for short.

What can we expect from these up-and-coming small-scale logical circuits?

First, a lot of the tools developed for the NISQ era will become obsolete. For example, variational quantum circuits involving continuously-parameterised quantum gates cannot be easily implemented in a fault-tolerant manner. Instead, post-variational hybrid quantum-classical algorithms for this era will need to offload the continuously-parameterised part of the algorithm to a classical computer, with the quantum circuit used to measure a set of (hopefully classically-intractable) observables that are used as inputs to the classical tunable model.
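
As a rough illustration of this division of labour, here is a minimal classical-simulation sketch (my own toy construction, not any specific published algorithm): a fixed, non-variational circuit supplies expectation values of a handful of observables, and every trainable parameter lives in a classical ridge-regression model:

```python
import numpy as np

# Tiny 2-qubit statevector toolbox
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def encode(x):
    """Fixed (non-variational) data-encoding circuit: Ry(x) and Ry(2x) on |00>."""
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    return kron(ry(x), ry(2 * x)) @ np.array([1, 0, 0, 0], dtype=complex)

# A fixed dictionary of observables "measured on the quantum device"
observables = [kron(Z, I2), kron(I2, Z), kron(Z, Z), kron(X, X)]

def features(x):
    psi = encode(x)
    return np.array([np.real(psi.conj() @ O @ psi) for O in observables])

# Toy regression task: every tunable parameter lives in the classical model
xs = np.linspace(0, np.pi, 40)
ys = np.sin(2 * xs)                                    # arbitrary target
Phi = np.stack([features(x) for x in xs])              # quantum feature matrix
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(4), Phi.T @ ys)   # ridge fit
print("train MSE:", np.mean((Phi @ w - ys) ** 2))
```

Nothing in the quantum part needs to be continuously tuned, which is much friendlier to fault-tolerant gate sets.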

Second, the hardware, algorithms, and the error correcting code cannot be considered in isolation. Choosing the right error correcting code will be essential to get the most out of the current hardware. Examples of this can be seen in QuEra's logical circuit demonstration from late last year, where the use of a 3D quantum error correction code allowed them to perform random IQP circuit sampling with error detection, and Quantinuum's recent demonstration of repeated error correction. Similar to the NISQ era, different hardware platforms will have different strengths and limitations in what kinds of circuits they will be able to run.

Finally, the most valuable software tools in the NISQ era were for quantum control and state tomography, essential for getting the most out of the noisy hardware. These tools will remain important, since fidelities at the physical qubit level directly affect the amount of quantum error correction overhead required. As we move to logical circuits, the most valuable new quantum software will be compilers that take the hassle of hardware and error-correcting code selection away from the end-user and translate a given logical circuit into simple, understandable hardware requirements.

Thursday, February 1, 2024

A busy January

There's been a lot going on here...

Machine Learning & Physics

Unsupervised learning of quantum many-body scars using intrinsic dimension - Now available on arXiv! We applied manifold learning techniques to identify scar states in the PXP model. The take-home message: manifold learning techniques are a powerful alternative to more popular deep learning methods, especially in physics problems where you might not have access to enough training data for deep learning to work well.

Identifying topology of leaky photonic lattices with machine learning - Just published in Nanophotonics! We apply various machine learning methods to distinguish different topological phases in a photonic lattice, assuming one only has access to intensity measurements. This can serve as an alternative to full state tomography or phase retrieval methods, but one needs to be careful when training the models on ideal / pristine systems and then applying them to disordered systems. The journal also published a press release on WeChat!

Quantum Computing

Computing electronic correlation energies using linear depth quantum circuits - Finally published in Quantum Science & Technology, after more than a year and a half working through the peer review system. We use perturbation theory to determine electronic correlation energies in small molecular systems (hydrogen, lithium hydride, etc.) using a large set of shallow circuits, giving an alternative to existing methods which require deeper circuits infeasible for current quantum processors. We also tested the algorithm on cloud quantum processors, observing the detrimental impacts of noise. It would be interesting to run this again now to see how much (or how little) the performance from the different cloud providers has improved!

Landscape approximation of low-energy solutions to binary optimization problems - Published in Physical Review A. We present a method to obtain approximate solutions to binary optimization problems using the localization landscape, a function which is able to place bounds on the regions of Anderson localized eigenstates in disordered media without solving the underlying eigenvalue problem. We lay out the conditions required for these bounds to hold, outline how a quadratic unconstrained binary optimization problem can be transformed to fit these conditions, and provide details on how the quantum state representing the landscape function can be produced and sampled using techniques developed for near-term quantum devices.
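
For readers who haven't met the landscape function before: it comes from a single linear solve, H u = 1, and its reciprocal W = 1/u acts as an effective confining potential whose wells predict where the low-energy eigenstates sit. A minimal 1D sketch of just this ingredient (not the binary-optimization mapping from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
V = 4.0 * rng.random(n)                       # random on-site disorder

# 1D discrete Hamiltonian H = -Laplacian + V with Dirichlet boundaries
H = (np.diag(2.0 + V)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

# Localization landscape: one linear solve H u = 1, no diagonalization needed
u = np.linalg.solve(H, np.ones(n))
W = 1.0 / u                                   # effective confining potential

# Sanity check: the deepest well of W should host the lowest eigenstate
evals, evecs = np.linalg.eigh(H)
print("deepest well of W at site :", np.argmin(W))
print("ground state peaks at site:", np.argmax(np.abs(evecs[:, 0])))
```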
 
On a related note, I was interested to see this month a new arXiv preprint in which the localization landscape was used to engineer multifractal resonances in SiN membranes!

Photonic Flatband Resonances

Photonic Flatband Resonances in Multiple Light Scattering - Published in Physical Review Letters. We reveal that flatbands can emerge as collective resonances in fine-tuned arrays of Mie-resonant nanoparticles, leading to giant values of the Purcell factor for dipolar emitters. The article was also highlighted with a Synopsis in Physics Magazine!

Tuesday, January 23, 2024

Quantum Jobs

PRX Quantum seeks an Associate Editor, Quantum Information. Physical Review is home to the most Nobel Prize-winning physics papers in the world. This is an opportunity to be at the forefront of the most exciting breakthroughs in quantum science!

Many postdoctoral openings at the Centre for Quantum Technologies, Singapore, ranging from experimental integrated photonics to applying quantum-inspired algorithms to bioinformatics!

Coming soon: ARC Centre of Excellence in Quantum Biotechnology. This newly-funded centre aims to pioneer paradigm-shifting quantum technologies to observe biological processes and transform our understanding of life. Stay tuned for openings in this exciting new field...

Wednesday, December 20, 2023

Towards fault-tolerant quantum computing with Rydberg atoms

 I'm a bit late to the party, but finally managed to get a chance to read the paper "Logical quantum processor based on reconfigurable atom arrays" by Harvard, QuEra, and collaborators, which hit the headlines a few weeks ago. My thoughts:

  • Sadly many articles covering the paper gloss over the important distinction between error detection and error correction: QuEra's press release, The Harvard Gazette, EurekaAlert, and others. Optics & Photonics News provides more balanced coverage. The impressively high (above break-even) fidelities demonstrated in the paper require post-selection, discarding experimental runs where errors were detected. The post-selection probability is as low as 0.04% for the largest system sizes studied, and will get exponentially smaller for bigger circuits. The bottom line: scaling up to a useful size needs integration of error correction.
  • How to integrate error correction? One needs to process the error detection measurements in real-time and then apply correcting gates to the qubits while the circuit is being run. Figure 4 of the paper does demonstrate implementation of measurement-dependent feedforward operations, but not yet integrated with error decoding and correction operations. This seems to be in principle an engineering challenge that can be solved with more hard work.
  • Scaling up to more qubits and deeper circuits will require continuous pumping and replenishment of Rydberg atoms. Otherwise, the circuit width will be limited by the finite success probability for trapping each atom, and the depth by the ~10s trapping lifetime.
  • The quantum processor architecture, involving separate storage, processing, and readout zones, as well as the ability to execute gates with arbitrary connectivity and in parallel using just a few structured laser beams, looks much more promising for scalability compared to superconducting quantum processors.

There is more discussion over at Shtetl-Optimized.

Wednesday, November 29, 2023

Updates

Infrequent posting due to other commitments. Here are a few brief items of note from the past month:

  • Beng Yee uploaded his second paper from his PhD research to arXiv: A Unified Framework for Trace-induced Quantum Kernels. This project tackled the problem of how to choose the best quantum kernel for a given learning task using tools from classical multiple kernel learning theory. The bottom line: the optimal problem formulation (e.g. as a kernel model, projected kernel model, or quantum neural network) depends on the relative amount of training and test data, whether one wants to impose constraints on the trained model, and whether one has many qubits with low-fidelity gates or fewer qubits with high-fidelity gates. Read to find out more! (A minimal sketch of the simplest such kernel follows after this list.)
  • The December issue of Optics & Photonics News highlights some of the most exciting peer-reviewed research in optics and photonics published over the past year. There is also an accompanying perspective on areas to watch in 2024 and beyond by selected summary authors.
  • Two papers recently published in PRL caught my eye: Universal Sampling Lower Bounds for Quantum Error Mitigation suggests that the quantum error mitigation being pushed by IBM and others as a means of getting useful applications out of current noisy quantum processors may be foiled by an exponentially growing measurement overhead, and Classifying Topology in Photonic Heterostructures with Gapless Environments shows how a recently-developed real-space formulation of topological invariants may be a more useful tool for quantifying the robustness of topological states in photonic systems, particularly those exhibiting radiation losses or optical nonlinearities.
  • The 7th International Conference on Optical Angular Momentum will be held 24 - 28 June 2024 in South Africa. The abstract submission deadline is 7 January 2024.
  • The next edition of the Quantum Techniques in Machine Learning conference will be held in Melbourne, 25-29 November 2024. The abstract submission deadline is 5 July 2024. 
  • In the news headlines: Alibaba shuts quantum computing lab. Seems to be part of a wider trend of industry funding shifting from quantum to generative AI - see also Zapata and Normal Computing.
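
Returning to the quantum kernel item above: arguably the simplest member of this family is the state-overlap (fidelity) kernel k(x, x′) = |⟨ψ(x)|ψ(x′)⟩|². Here is a minimal classically-simulated sketch with an arbitrary single-qubit encoding (my toy example, not the framework developed in the paper):

```python
import numpy as np

def encode(x):
    """Toy single-qubit encoding |psi(x)> = Ry(x)|0> (illustrative only)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def fidelity_kernel(xs):
    """Gram matrix of k(x, x') = |<psi(x)|psi(x')>|^2."""
    states = np.stack([encode(x) for x in xs])
    return (states @ states.T) ** 2

# Kernel ridge regression: the kernel entries would come from the quantum
# device, while all of the training happens classically.
xs = np.linspace(0, np.pi, 30)
ys = np.cos(xs) ** 2                             # arbitrary toy target
K = fidelity_kernel(xs)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(xs)), ys)
print("train MSE:", np.mean((K @ alpha - ys) ** 2))
```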

Thursday, September 7, 2023

What I've been reading lately

Continuity Equation for the Flow of Fisher Information in Wave Scattering

We can get an intuitive understanding of a wide variety of wave systems, including photonics, acoustics, and electronic condensed matter, by visualizing the flow of intensity, energy, or probability density through them. These flows are useful for understanding the behaviour of conserved quantities, since they can be decomposed into sources, sinks, and solenoidal components. This paper shows that the Fisher information, a measure which bounds the precision with which parameters of interest can be measured, similarly obeys a conservation law enabling its visualization in terms of information flow. Remarkably, the Fisher information flow gives distinct insights into wave propagation in complex media and is complementary to more standard analysis methods based on the energy flow. This work raises many interesting questions and opens new possibilities!

Energy and Power requirements for alteration of the refractive index

This is another paper in a series of perspectives on estimating the capabilities and potential limits to the performance of photonic devices using relatively simple classical oscillator models and sharp physical insights. The take home message is that the power required to achieve a given level of optical modulation depends primarily on the interaction time, which depends on the device geometry (e.g. resonator vs travelling wave), without substantial variation among different materials. This suggests that improvements in power efficiency are more likely to come from improvements in fabrication methods and device design, rather than the discovery of some new material with substantially better physical properties.

Quantum Algorithm for Computing Distances Between Subspaces

There's growing evidence that the best place to look for a quantum advantage for classical machine learning will be geometrical or topological problems that have a natural connection to quantum systems. One example is the Betti number problem, which maps to computing the ground state of supersymmetric many-body Hamiltonians. This work shows that computing distances between k-dimensional subspaces of an n-dimensional space can be done exponentially faster using a fault-tolerant quantum computer. The algorithm exploits the ability to efficiently encode subspaces into quantum states, combined with quantum signal processing. Subspace distances have applications to large-scale machine learning and computer vision problems, suggesting the asymptotic exponential advantage promised by a fault-tolerant quantum computer could lead to practical speedups.
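
For reference, the quantity in question can be computed classically from the principal angles between the two subspaces; here is a minimal numpy sketch of that classical baseline (the quantum algorithm targets the scaling with n, not a small example like this):

```python
import numpy as np

def grassmann_distance(A, B):
    """Distance between span(A) and span(B) from the principal angles
    (a classical reference computation via QR + SVD)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    return np.linalg.norm(angles)

rng = np.random.default_rng(3)
n, k = 100, 5                        # k-dimensional subspaces of R^n
A = rng.normal(size=(n, k))
B = rng.normal(size=(n, k))
print(grassmann_distance(A, B))      # random subspaces are nearly orthogonal
```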

Tuesday, August 22, 2023

Quantum chemistry with subspace states: the conclusion

 Just over a year ago I wrote about a paper on quantum machine learning using subspace states, which inspired a project we undertook on applications of similar quantum states to variational quantum circuits for quantum chemistry and condensed matter physics. Over the weekend our manuscript was published in Physical Review A!

We were fortunate to have three knowledgeable referees who gave constructive and insightful comments on the original manuscript. We heavily revised the manuscript compared to the original arXiv preprint to not only improve the presentation, but also emphasize the broader applicability of the subspace state approach, specifically the ability to prepare correlated fermionic ansatz states beyond pairwise correlations. Our approach can yield substantially shallower quantum circuits for solving problems where the electron density (number of electrons d / number of orbitals used N) is small, for example when trying to extrapolate finite basis set calculations to the complete basis set limit. This is illustrated in the figure below, taken from the paper:

Estimated two-qubit gate depth per occupied mode d to prepare an N-mode Slater determinant and pairwise-correlated ansatz states using subspace states, compared to existing d-independent and linear in N approaches.


Wednesday, August 16, 2023

Will there be a useful quantum advantage for topological data analysis?

We don't know yet. 

Prominent applications of topological data analysis (TDA), including Mapper-based visualisation, are based on fast and interpretable heuristics. While quantum TDA may speed up the calculation of high-dimensional Betti numbers, it doesn't help with understanding when and why high-dimensional topological features might be important, and whether they need to be computed to high precision or whether classical Monte Carlo methods can give sufficient accuracy in practice. What high-dimensional Betti numbers might be good for needs to be determined empirically, using machine learning benchmark datasets and the best classical algorithms.

Beyond the Betti number problem, for which quantum algorithms have already been proposed, it will be interesting to explore what other TDA methods could be sped up using quantum subroutines. For example, the nonzero eigenvalues of the persistent Laplacian also seem to be useful as features for machine learning algorithms, in contrast to traditional persistent homology methods that focus only on the zero or near-zero eigenvalues of the persistent Laplacian. The speedup for quantum persistent homology comes from being able to construct the persistent Laplacian exponentially faster than the best-known classical methods. If there is useful information that can be extracted from the persistent Laplacian without requiring the quantum singular value transformation or rejection sampling, the resource requirements for a quantum advantage would be reduced enormously.

Thanks to the team at QCWare for inviting me to give a seminar on this topic and for the thought-provoking discussions afterwards!

Thursday, August 10, 2023

arXiv highlights

Quantum-noise-limited optical neural networks operating at a few quanta per activation

Suitably-trained optical neural networks can still perform classification tasks accurately using low intensity light with a low signal to noise ratio. This suggests that specialized light-based analogue hardware for machine learning may offer a route towards reducing the enormous energy consumption of neural networks!

Dissipative mean-field theory of IBM utility experiment

Another approach towards reproducing the results of IBM's kicked Ising model quantum simulation experiment, this time using mean field theory. The Appendix gives a simple rule of thumb for estimating the quantum volume of specific devices based on their two-qubit gate and readout fidelities and compares some different hardware providers.

Maximally-Localized Exciton Wannier Functions for Solids

Wannier functions - localized states constructed as a superposition of Bloch waves from an energy band of interest - are an important tool of the condensed matter physicists' trade. This work presents a method for constructing maximally-localized Wannier functions for multi-particle states, focusing on applications to excitons (electron-hole pairs).
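
For reference, the standard single-band Wannier function is just the Fourier transform of the Bloch states over the Brillouin zone, with "maximally localized" referring to the gauge choice that minimizes the total quadratic spread:

```latex
% Standard single-band Wannier function centred on lattice vector R
% (roughly, the excitonic case promotes \psi_{n\mathbf{k}} to a two-particle
%  Bloch state labelled by the exciton's centre-of-mass momentum):
w_{n\mathbf{R}}(\mathbf{r}) = \frac{V}{(2\pi)^3}
  \int_{\mathrm{BZ}} d^3k \, e^{-i\mathbf{k}\cdot\mathbf{R}} \,
  \psi_{n\mathbf{k}}(\mathbf{r})
```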

Tensorized orbitals for computational chemistry

This work presents a tensor network-based compression of the matrix elements that need to be computed and stored when performing quantum chemistry calculations, based on Tensor Cross Interpolation. This is yet another example of how tools from quantum many-body physics can be used to speed up time-consuming computational tasks - no working quantum computer needed!

Friday, July 14, 2023

Seeking quantum speedups using supersymmetric systems

There is a neat correspondence between the question of whether a simplicial complex has a k-dimensional hole and whether the ground state of a related supersymmetric (SUSY) quantum many-body Hamiltonian is at zero energy:

Complexity of Supersymmetric Systems and the Cohomology Problem


Clique Homology is QMA1-hard

A less technical presentation of the latter paper was given at QIP2023 and can be viewed here.

Both problems are QMA1-hard, meaning that the correctness of a trial solution can be efficiently checked by a quantum computer (but finding the correct solution remains hard even for the quantum computer). In contrast, recently-proposed quantum algorithms for TDA consider relaxations of the homology problem that can be solved efficiently using quantum algorithms, such as estimating the normalized number of k-cycles to some finite precision.
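
The correspondence is easy to see in miniature: the combinatorial (Hodge) Laplacian built from the boundary maps plays the role of the SUSY Hamiltonian H = Q², and its zero-energy states are exactly the k-dimensional holes. A minimal numpy sketch for a hollow triangle (my toy example, not the constructions used in the papers above):

```python
import numpy as np

# Hollow triangle: vertices {0,1,2}, edges {01, 12, 02}, no filled face,
# so it has a single 1-dimensional hole.
# Boundary map d1: edges -> vertices (columns = edges 01, 12, 02).
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]], dtype=float)

# With no 2-simplices, the k=1 Hodge Laplacian is simply d1^T d1; it plays the
# role of the SUSY Hamiltonian H = Q^2, with Q built from d1 and its adjoint.
L1 = d1.T @ d1
print(np.round(np.linalg.eigvalsh(L1), 6))   # one zero eigenvalue <-> one hole
```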

What other seemingly classical or purely mathematical problems can be naturally framed in the language of supersymmetric quantum mechanics? This promises to be fertile ground for exponential quantum speedups, and you don't need to be an expert in quantum algorithms to join the hunt!

Tuesday, June 27, 2023

Tensor network simulations challenge claims of quantum advantage...again!

Hot off the arXiv today: Efficient tensor network simulation of IBM's kicked Ising experiment

The authors report efficient classical simulations of the experiments by the IBM quantum computing team reported in Nature last week: Evidence for the utility of quantum computing before fault tolerance

What's going on here?

Tensor network methods are proving to be extremely powerful for computations related to quantum systems and large-scale neural networks. They work best for simulations of 1-dimensional or tree-like quantum systems (corresponding to the special case of matrix product states). Higher-dimensional systems or those with long range coupling containing looped paths, however, incur increasing overheads.
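
For readers who haven't met matrix product states before, the basic compression step is just a chain of truncated SVDs. A minimal numpy sketch (a generic illustration, not the specific tree-like tensor network method used in the preprint):

```python
import numpy as np

def to_mps(psi, n_qubits, max_bond):
    """Compress a 2**n statevector into a matrix product state by a chain of
    SVDs, truncating every bond to at most max_bond singular values."""
    tensors, rest = [], psi.reshape(1, -1)
    for _ in range(n_qubits - 1):
        chi = rest.shape[0]
        U, s, Vt = np.linalg.svd(rest.reshape(chi * 2, -1), full_matrices=False)
        keep = min(max_bond, len(s))
        tensors.append(U[:, :keep].reshape(chi, 2, keep))
        rest = s[:keep, None] * Vt[:keep]
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

# Toy check: a GHZ state has bond dimension 2, so max_bond=2 is exact
n = 8
ghz = np.zeros(2**n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print([t.shape for t in to_mps(ghz, n, max_bond=2)])
```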

The Eagle quantum processor used in IBM's recent experiments is based on a two-dimensional network of qubits on a "heavy hexagon" grid. Thus, even though it is two-dimensional (harder for tensor network methods), its loops are longer than those of a more compact square lattice. The time required to traverse a single loop is comparable to the circuit depths probed in the experiment, meaning that by applying some clever factorization tricks the dynamics can be reproduced by efficiently-simulable tree-like tensor networks!

This is not the first time tensor networks have challenged claims of supremacy - they have also been used to simulate Google's original quantum supremacy experiments. What is particularly striking here is that the time between the publication of the quantum experiment and publication of the classical reproduction has dropped from years to weeks!

Here are some libraries for trying out tensor network simulations of quantum systems:

tensorcircuit: Python library developed by Tencent Quantum Lab - can handle shallow circuits involving hundreds of qubits.

ITensorNetworks: Julia library developed by the Flatiron Institute, which was used to reproduce the IBM experiments.

For theorists, getting familiar with these simulation tools that can also be applied to other important areas (such as large-scale machine learning or numerical simulations) seems to be a better use of time than getting to grips with the intricacies of ever-changing device-specific error models and quantum error mitigation schemes!

Monday, June 12, 2023

Tomography, topology, and more

Some preprints that caught my attention over the last few weeks:

Attention-based transformer networks for quantum state tomography. The tremendous surge in popularity of transformer-based large language models means that there is a lot of effort towards developing efficient hardware and algorithms for implementing transformer-based neural networks. It is thus timely to understand how this architecture may be useful for solving problems in physics. This preprint proposes a transformer-based model for density matrix reconstruction.

A discrete formulation of the Kane-Mele Z2 invariant. Newcomers to topological materials are often stumped on how to efficiently implement gauge-invariant formulas for topological invariants in numerical calculations; analytical formulas assume a smooth choice of gauge for the eigenfunctions, whereas numerical calculations will return a non-smooth random gauge. The method reported here for calculating Chern numbers without requiring any gauge-fixing greatly simplifies numerical calculations. The present preprint concisely presents a numerically-friendly formulation of the Z2 invariant describing quantum spin Hall phases.
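
For context, the analogous gauge-invariant recipe for the first Chern number takes only a few lines: build link variables from overlaps of neighbouring eigenvectors on a discretized Brillouin zone and sum the plaquette fluxes. A minimal sketch for the two-band Qi-Wu-Zhang model (my standard example, not the Z2 construction of the preprint):

```python
import numpy as np

def qwz_hamiltonian(kx, ky, m):
    """Two-band Qi-Wu-Zhang model (a standard textbook Chern insulator)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0]).astype(complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(m, N=60):
    """Gauge-invariant lattice Chern number of the lower band: build link
    variables from overlaps of neighbouring eigenvectors and sum the
    plaquette fluxes (no gauge fixing required)."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))[1][:, 0]
    total = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total += np.angle(loop)
    return int(np.rint(total / (2 * np.pi)))

print(chern_number(m=-1.0))   # expect |C| = 1 in the topological phase
```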

Valley photonic crystal waveguides fabricated with CMOS-compatible process. This work presents valley Hall photonic crystals based on an improved mask design that yields more triangular-shaped holes, improving their performance as valley Hall waveguides. It will be interesting to see measurements of the absolute propagation loss and how it compares to the strong backscattering reported earlier this year.

Photonic Landau Levels. Two groups (from the Netherlands and from the USA) report experiments with strained photonic crystals that emulate Landau levels formed by electrons subjected to uniform magnetic fields. These works show how previous theory and experiments based on weakly-coupled waveguide arrays can be generalized beyond the tight-binding approximation and may serve as a novel platform for achieving high quality factor modes and enhanced light-matter interactions. 

Questions and concerns about Google's quantum supremacy claim. The lead author Gil Kalai is one of the most prominent skeptics of quantum computing. This preprint summarizes efforts to rigorously analyze the raw data behind Google's 2019 quantum supremacy experiments. Since there now exist efficient classical algorithms for reproducing the output of the quantum supremacy circuits, the most important outstanding result from the 2019 paper is that the errors in large scale quantum circuits are uncorrelated to a good approximation, suggesting that quantum error correction can work in principle. This preprint argues that the data underlying this claim is flawed and that more effort should be devoted to understanding noise sources present in NISQ devices.

Thursday, May 25, 2023

Scaling up quantum processors

Last November IBM announced with much fanfare their new 433-qubit superconducting quantum processor, named Osprey. Skeptics wanted to see the technical specifications before deciding whether this represented an important breakthrough or not. A few weeks ago the device (with 413 working qubits) finally became available for cloud users. Some technical specifications can be found here.

Disappointingly, the quantum volume proposed by IBM themselves as a better measure of quantum processor performance than the raw qubit count is not yet available for this device. Presumably the slightly lower gate fidelities reported mean that the quantum volume does not exceed that achieved on their smaller devices with higher gate fidelity.

Meanwhile, Quantinuum announced their new trapped ion quantum processor with 32 fully-connected qubits and a whopping quantum volume of 65,536 (for reference, the best reported quantum volume from a cloud-accessible IBM device is 128). The announcement coincided with the upload of preprints to arXiv using the device to study quantum states with topological order and benchmarking its performance using various metrics.

metriq is a great resource for keeping track of all the different quantum processor platforms and devices and comparing their reported fidelities. Raw qubit counts are not meaningful without knowing the gate fidelities and device connectivity!

Thursday, March 30, 2023

arXiv highlights

Here are some papers that caught my eye over the past month:


Germain Curvature: The Case for Naming the Mean Curvature of a Surface after Sophie Germain

This essay argues that the intrinsic curvature of a surface, aka the Gaussian curvature, should be named instead the Germain curvature, since Gauss was not the first to study it.

I remember attending a lecture by Sir Michael Berry (of Berry phase fame) where he made a compelling argument against naming new objects or effects after people, on account of the three "Laws of Discovery":

"1. Discoveries are rarely attributed to the correct person

2. Nothing is ever discovered for the first time

3. To come near to a true theory, and to grasp its precise application, are two very different things, as the history of science teaches us. Everything of importance has been said before by someone who did not discover it."

Indeed, versions of the Berry phase had been discovered decades before Berry, by Pancharatnam, Rytov, and others. For this reason he prefers the name "geometric phase." Similarly, intrinsic curvature is perhaps a more suitable alternative to Gaussian curvature.

The problem with naming effects after people is that the nature of the effect becomes opaque unless one already knows what it means. The situation becomes even worse when different groups decide to name the same effect after different people. On the other hand, simple yet descriptive names including geometric phase and intrinsic curvature reveal some sense of what is meant to the outsider. The absence of a simple-sounding name may indicate that we don't really understand the effect.

An Aperiodic Monotile

The authors discover a family of shapes that can tile the 2D plane, but only aperiodically. The shapes are non-convex mirror-asymmetric polygons. Tiling the plane involves placing a mixture of the polygon and its reflection, but the two can never be arranged to form a regular pattern. Can this kind of aperiodic tiling lead to novel physical properties of some system or model? For example, tight binding lattices can be obtained from tilings by identifying corners as "sites", with coupling between sites linked by edges of the tiling shapes.
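
As a sketch of that last recipe, here is how a vertex-and-edge description of a tiling patch becomes a tight-binding Hamiltonian (the lists below are a made-up toy patch, not the actual monotile):

```python
import numpy as np

# Generic recipe: corners of the tiles become sites, edges become hoppings.
# The site count and edge list below are a made-up toy patch, not the monotile.
n_sites = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
t = 1.0                                        # hopping amplitude

H = np.zeros((n_sites, n_sites))
for i, j in edges:
    H[i, j] = H[j, i] = -t

print(np.round(np.linalg.eigvalsh(H), 3))      # single-particle spectrum
```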

Spectral localizer for line-gapped non-Hermitian systems

The localizer theory I have discussed previously (here and here) is now generalized to non-Hermitian systems! This is relevant to understanding the properties and robustness of certain topological laser models.

A quantum spectral method for simulating stochastic processes, with applications to Monte Carlo

This preprint shows that the quantum Fourier transform can be used to efficiently simulate random processes such as Brownian motion. In contrast to previous "digital" quantum Monte-Carlo approaches, here the authors consider an encoding in which the value of the random variable is encoded in the amplitude of the quantum state, with different basis vectors corresponding to different time steps. Since Prakash's earlier work on quantum machine learning using subspace states was the inspiration of our recent quantum chemistry work I think this paper is well worth a closer read!

Photonic quantum computing with probabilistic single photon sources but without coherent switches

 If you want to learn more about the photonic approach for building a fault tolerant quantum computer (being pursued by PsiQ), you should read Terry Rudolph's always-entertaining papers. Even though the approaches presented in this manuscript (first written in 2016-2018) are now obsolete this is still well worth a read as a resource on the key ingredients of potentially-scalable methods for linear optical quantum computing.

An Improved Classical Singular Value Transformation for Quantum Machine Learning

The field of quantum machine learning has seen two phases. The first phase was sparked by the discovery of the HHL algorithm. HHL and its descendants promised an exponential speedup for certain linear algebra operations appearing in widely-used machine learning techniques, arguably triggering the current boom in quantum technologies. However, running these algorithms on any useful problem will require a full fault-tolerant quantum computer.

Consequently, novel quantum algorithms for machine learning have attracted interest as a possible setting for achieving useful quantum speedups before a large scale fault-tolerant quantum computer can be developed. The power of these newer algorithms is much less certain and still under intense debate. Nevertheless, researchers could find solace in the hope that, even if these NISQ-friendly algorithms do not end up being useful, eventually we will achieve a quantum advantage using HHL-based algorithms.

The dequantization techniques pioneered by Ewin Tang and collaborators are starting to suggest that even a quantum advantage based on fault-tolerant algorithms such as HHL may turn out to be a mirage. This paper presents a new efficient classical sampling-based algorithm that reduces the potential quantum speedup for singular value transformations from exponential to polynomial. This affects a variety of quantum machine learning algorithms, including those for topological data analysis, recommendation systems, and linear regression.

Thursday, March 9, 2023

Hype and anti-hype

Claims of high temperature superconductivity were yesterday published in Nature and presented at the APS March Meeting. Given the history of the group, discussed in detail during a workshop on reproducibility in condensed matter physics, no doubt this should be taken with a pinch of salt.

On arXiv yesterday: Russians tear down claims of QAOA-accelerated factorization algorithms which hit news headlines last December. The comments on Scott Aaronson's blog on the original paper have some amusing (or depressing) background on the group behind this work.

Similarly, a few weeks ago claims of quantum simulation of wormhole dynamics using superconducting processors were heavily criticized.

These examples are all high profile works which have been (and will be) carefully scrutinized. The vast majority of preprints and publications do not attract as much interest. If you're having trouble reproducing a result in a paper, keep in mind that the paper may have errors that went undetected through peer review. The real peer review begins after the paper is published.