Tuesday, October 28, 2025

GenQ Hackathon: Quantum for Finance

Last weekend I had the pleasure of attending the GenQ Hackathon: Quantum for Finance, joining as a mentor for the teams. Events such as this are important for building familiarity with quantum processors among participants from diverse backgrounds, from physics to finance majors and from high school students to veteran software engineers. Applications of quantum processors will need not just PhD-level quantum algorithm specialists, but also people with a broader range of skills who can make sense of where quantum algorithms may be practically useful.

The overall winning team had what was, in my opinion, the crucial insight: whatever fancy new solution you come up with, be it AI- or quantum-designed, it had better be interpretable. Particularly in the high-stakes world of finance, someone will ultimately be responsible for decisions made based on the quantitative model. End-users won't trust a black-box model. A model that spits out a single number, such as an F-score or a correlation coefficient, will never be as trustworthy as a model that can clearly show all the relevant variables. Because of this, the team incorporated the Mapper algorithm from topological data analysis into their solution for detecting anomalies in the form of fraudulent credit card transactions.

One thing I was surprised by was how few of the teams took into account the clear advice given in the opening statement by Hongbin Liu (from Microsoft Quantum): in future practical use-cases of quantum processors, the cross-over point at which a quantum processor is expected to out-perform existing (very powerful) classical algorithms and high-performance computers will involve days to weeks of wall-clock runtime. One of the judging criteria specifically focused on the scalability of the proposed solution. Despite this, in their final pitches many of the (unsuccessful) teams focused on quantum circuits limited to several qubits with second-scale run-times, claiming apparent speedups compared to selected classical benchmarks. However, such small-scale quantum circuits are trivially classically simulable.

I observed almost all the teams using ChatGPT or some other favourite large language model, both for background research on the chosen problem as well as rapid code generation. It was also striking to see how much easier it is now to write, compile, and execute quantum circuits on a cloud quantum processor by making use of quantum middleware providers, who now sell this as a convenient service. 


Monday, October 6, 2025

Cusp solitons mediated by a topological nonlinearity

Harvey just finished what should be the last paper of his PhD studies: Cusp solitons mediated by a topological nonlinearity.

Harvey's PhD project studied the intersection between topological data analysis (TDA) techniques and nonlinear and many-body quantum dynamics. His first paper devised a TDA-based pipeline for detecting the emergence of quantum chaos in a periodically-driven nonlinear Kerr cavity. He followed this up with a demonstration of many-body quantum scar detection using topology-based dimensional reduction.

These works, while very nice, were ultimately using TDA to recover known physics. We really want to find examples where TDA can unveil new physics. This is a hard problem. Where to look? And what counts as "new"?

The easier solution for us was to insert TDA "by hand" into a nonlinear model, and see what came out of it.

For our testbed we took the nonlinear Schrodinger equation, frequently used to model nonlinear waves in various platforms. In the usual nonlinear Schrodinger equation, the conserved energy is the Hamiltonian,

$$ H = \int dx \left[ \frac{1}{2} |\partial_x \psi |^2 - \frac{g}{2} |\psi|^4 \right] $$
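For reference, varying $H$ with respect to $\psi^*$ in the standard way recovers the familiar focusing equation (with the conventions of the Hamiltonian above):

$$ i \partial_t \psi = \frac{\delta H}{\delta \psi^*} = -\frac{1}{2} \partial_x^2 \psi - g |\psi|^2 \psi $$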

The second term, responsible for the nonlinear dynamics, can be interpreted as an intensity-dependent potential of depth $\frac{g}{2}|\psi|^2$. We looked at what would happen if we replaced this term with a quantity obtained using TDA. When dealing with one-dimensional functions, such as intensity profiles $|\psi(x)|^2$, TDA frequently uses sublevel set persistent homology, characterizing shape in terms of the persistence of local maxima and minima. We used the total persistence of these features as an energy penalty term, leading to

$$ H^{\prime} = \int dx \left[ \frac{1}{2} |\partial_x \psi|^2 - \alpha \mathrm{sgn}( \partial_x |\psi|^2 ) (\partial_x |\psi|^2) \right]  $$
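To make the penalty concrete: for a discretized intensity profile, the sublevel-set persistence pairs can be computed with the standard "elder rule" merge procedure. A minimal sketch (our own illustration, not the code used in the paper):

```python
def sublevel_persistence(f):
    """0-dimensional sublevel-set persistence pairs of a 1D signal.

    Components of the sublevel set {x : f(x) <= t} are born at local
    minima and die when they merge at local maxima (elder rule: the
    younger component dies).  Returns (birth, death) pairs; the global
    minimum's component never dies and gets death = None.
    """
    n = len(f)
    order = sorted(range(n), key=lambda i: f[i])  # process by increasing value
    parent = [None] * n                           # union-find; None = not yet added
    birth = {}                                    # root index -> birth value

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path compression
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):                  # 1D adjacency
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the component with the higher birth dies here
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    pairs.append((birth[young], f[i]))
                    parent[young] = old
    pairs.append((f[order[0]], None))             # essential class: global minimum
    return pairs
```

Summing death minus birth over the finite pairs gives the total persistence of the profile's local extrema, the quantity used as the energy penalty above.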

Deriving the equations of motion, we found that this topological energy penalty gives rise to effective $\delta$ function potentials at the local maxima and minima of intensity, which act to enhance or suppress local maxima, depending on the sign of the nonlinear coefficient $\alpha$. We then studied the resulting nonlinear dynamics, including the focusing of Gaussian and flat-top beams.
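A sketch of where the $\delta$ functions come from (our factor conventions): writing $\rho = |\psi|^2$, the nonlinear part of $H^{\prime}$ is $-\alpha \int dx\, |\partial_x \rho|$, and integrating its variation by parts gives

$$ \frac{\delta}{\delta \rho} \left( -\alpha \int dx\, |\partial_x \rho| \right) = \alpha\, \partial_x\, \mathrm{sgn}( \partial_x \rho ) = 2 \alpha \sum_j \sigma_j\, \delta(x - x_j) $$

since $\mathrm{sgn}(\partial_x \rho)$ jumps by $+2$ at each local minimum $x_j$ of the intensity ($\sigma_j = +1$) and by $-2$ at each local maximum ($\sigma_j = -1$). The equation of motion then picks up these point potentials:

$$ i \partial_t \psi = -\frac{1}{2} \partial_x^2 \psi + 2 \alpha \sum_j \sigma_j\, \delta(x - x_j)\, \psi $$

For $\alpha > 0$ the wells sitting at intensity maxima are attractive, enhancing them; for $\alpha < 0$ the signs flip.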

The dynamics are very different from those of the regular nonlinear Schrodinger equation with a focusing nonlinearity, where such a flat-top beam would quickly break up into a collection of tightly-focused bright solitons. In our model, since the nonlinearity is proportional to the intensity gradient, its influence is mainly limited to the edges of the flat-top beam.
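The regularized model is easy to explore numerically with a standard split-step Fourier scheme. A minimal sketch, with $\mathrm{sgn}$ regularized as $\tanh(\cdot/\varepsilon)$ and all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

# Split-step Fourier sketch for the modified NLSE
#   i dpsi/dt = -(1/2) d^2psi/dx^2 + V[rho] psi,   rho = |psi|^2,
# with the "topological" nonlinearity V[rho] = alpha * d/dx sgn(d rho/dx),
# regularized as sgn(u) ~ tanh(u / eps).  Parameters are illustrative.

def step(psi, x, dt, alpha, eps=1e-2):
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
    # half step of the kinetic term in Fourier space
    psi = np.fft.ifft(np.exp(-0.25j * k**2 * dt) * np.fft.fft(psi))
    # full step of the (real) nonlinear potential
    rho = np.abs(psi) ** 2
    sgn = np.tanh(np.gradient(rho, dx) / eps)   # regularized sgn(d rho/dx)
    V = alpha * np.gradient(sgn, dx)            # spikes at extrema of rho
    psi = np.exp(-1j * V * dt) * psi
    # second half step of the kinetic term
    return np.fft.ifft(np.exp(-0.25j * k**2 * dt) * np.fft.fft(psi))

x = np.linspace(-20, 20, 1024, endpoint=False)  # periodic grid
psi = np.exp(-x**2)                             # Gaussian initial beam
for _ in range(200):
    psi = step(psi, x, dt=1e-3, alpha=1.0)
```

Both substeps multiply by unit-modulus phases (in real or Fourier space), so the norm is conserved to machine precision regardless of how the sgn is regularized.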

We also uncovered some interesting connections to the physics of nonlocal nonlinear systems. Specifically, our "topological nonlinearity", when regularized, resembles a weakly nonlocal nonlinearity with a vanishing local part. Such nonlinearity leads to cusp solitons, as was previously studied in the context of plasma physics!

We hope to follow up this study with investigations of similar "topological" nonlinearities and potential experimental realizations. In the present work we speculated that similar nonlinearities may arise in the context of fluid-mediated nonlinearities and lattices undergoing Floquet modulation, but demonstrating such implementations explicitly remains an open problem for us.