Friday, October 11, 2024

IPS Meeting 2024 and Nobel Week

Last week I attended the IPS Meeting 2024, held this year at Nanyang Technological University, and gave a plenary talk on flatband lattices, covering material we recently published in an invited review in Nanophotonics. Among the many interesting talks this year was the plenary presentation by Antonio Castro Neto on the creation of carbon thin films and liquid crystals via oxidation of graphene (see e.g. this paper).

This week the Nobel Prizes were announced, with machine learning and AI dominating the Physics and Chemistry prizes. It's quite remarkable that one of the Physics laureates (Hopfield) published his prize-winning work as a single-author theory paper in 1982, when he was already 49 years old! It's never too late to do your most impactful work!

Tuesday, October 1, 2024

Singapore Academies South-East Asia Fellowship (SASEAF) Programme 2024

The SG Academies South-East Asia Fellowship (SASEAF) Programme aims to attract bright postgraduate researchers from South-East Asia to Singapore research institutions for a two-year fellowship. It is administered by three academies in Singapore - the Singapore National Academy of Science (SNAS), the Academy of Medicine Singapore (AMS), and the Academy of Engineering Singapore (SAEng) - and funded by the National Research Foundation (NRF).

Applicants should prepare a short (up to 2 pages) research proposal in consultation with a host faculty member. While applicants may propose any field of research pursued in Singapore's research institutions, the topics of Infectious Diseases, Population Health/Public Health, and Sustainability (including Urban Agriculture) will be favoured in the current round.

Successful applicants will receive a monthly stipend of up to SG$6,500, a relocation allowance, and access to professional development and networking opportunities during their time in Singapore.

For more information, including the application procedure and the documents required, please check out the official website. If you are interested in having me as a host PI, please contact me well in advance of the application deadline of 30th November so that we have sufficient time to prepare a competitive proposal. A list of past awardees can be found here.

Tuesday, September 24, 2024

From large language models to local language models

Last week Nature published a feature on local AI: Forget ChatGPT: why researchers now run small AIs on their laptops

This article discusses developments in large language models (LLMs) that have led to the proliferation of language models that can be run locally on your own device, without requiring top-of-the-line hardware. There are four driving motivations behind this:

Privacy: Cloud-based LLMs such as ChatGPT do not offer any user privacy. This is a no-go if you want to use them to analyze any kind of proprietary or confidential data. The only way to guarantee privacy is to have a model that doesn't need to communicate with a cloud server to run.

Reliability: LLMs are constantly evolving. With commercial providers, there is a tug-of-war between the providers and the users, many of whom explore methods to "jailbreak" a model using finely crafted inputs to escape hard-coded restrictions on the possible outputs. Even when the underlying LLM stays the same, the preprocessing applied to a user's input before querying the LLM might change as the provider aims to improve the model's performance or accuracy. This makes cloud-based LLMs inherently unreliable - a prompt that works today might fail hopelessly the next day. With a local LLM the user is in control and will not be surprised by sudden changes to the model's performance. Note that running an LLM locally does not completely solve this issue, since there is always some randomness to its output.

Reconfigurability: With the advent of efficient LLM fine-tuning methods such as low-rank adaptation (LoRA), users can take an off-the-shelf open-source LLM and augment it with their own specialized or proprietary data to solve problems of interest (see the sketch below). For example, for the first-year maths course I'm currently teaching, the course convenor has augmented an LLM with the lecture notes and problem sets, creating a chatbot that can answer students' questions about the course and refer them to the relevant parts of the lecture notes. For the students, this combines the ease of use of a chatbot with the reliability of the source materials.
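
For the curious, here is a minimal sketch of what LoRA fine-tuning looks like in practice using Hugging Face's peft library. This is not the setup used for our course chatbot; the model name and hyperparameters are just illustrative placeholders.

```python
# A minimal sketch of LoRA fine-tuning with Hugging Face's peft library.
# The model name and hyperparameters are illustrative placeholders; a real
# run also needs a tokenized dataset and a training loop (e.g. Trainer).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# LoRA freezes the original weights and injects small trainable
# low-rank matrices into the selected layers.
config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small injected matrices are trained, fine-tuning of this kind fits comfortably on modest hardware.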

Cost: For heavy users, cloud-based LLMs are not cheap. Moreover, academics must decide between paying for access out of their own pocket, or wading through their institution's bureaucracy to find a funding source that will cover a subscription. Local LLMs avoid these hassles.

The feature article also lists popular platforms for installing and using local LLMs, both command-line (for power users) and GUI-based (for ease of use). As a backend, many of these packages rely on the fast LLM execution provided by llama.cpp, which I covered previously here and here.
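
As a taste of how simple local inference has become, here is a minimal sketch using the llama-cpp-python bindings to llama.cpp. The model path is a placeholder for whatever GGUF model file you have downloaded, and the parameters are illustrative.

```python
# A minimal sketch of querying a local LLM via the llama-cpp-python
# bindings to llama.cpp. The model path is a placeholder for any GGUF
# file you have downloaded; everything runs on-device, no cloud calls.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-model.gguf", n_ctx=2048)

output = llm(
    "Q: What is a flat band in a photonic lattice? A:",
    max_tokens=128,
    stop=["Q:"],      # stop before the model invents a follow-up question
    temperature=0.2,  # lower temperature -> more repeatable answers
)
print(output["choices"][0]["text"])
```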

It's been a while since I tinkered with these packages, but clearly there have been quite significant developments in their performance and usability since I last used them more than a year ago!

Monday, September 16, 2024

From classical to quantum HodgeRank

This is a rather late summary of a cool preprint I saw a few months ago: Quantum HodgeRank: Topology-Based Rank Aggregation on Quantum Computers 

This work is inspired by and builds on quantum subroutines developed for efficiently solving high-dimensional topological data analysis (TDA) problems. By constructing a quantum version of the classical HodgeRank algorithm, it offers superpolynomial speedups for ranking higher-order network data.

What is HodgeRank? It was originally proposed in 2011 as a better way of ranking incomplete or skewed datasets, for example those based on user ratings or scores.

The basic idea is to apply an analogue of the Helmholtz decomposition (used routinely in electromagnetics) to graph data, enabling one to assign a ranking based on incomplete pairwise preferences. Importantly, HodgeRank outputs not just a raw ranking, but also an estimate of the quality of that ranking, obtained from the local and global cycles present in the preference data. To be specific, the returned optimal ranking is unique and fully consistent if the preference matrix can be written as the gradient of some scalar ranking function. If it cannot, then there are inevitable ambiguities in the preference data due to the existence of global or local cycles.

An example of a local ranking cycle is the following: B is preferred over A, C is preferred over B, and yet A is preferred over C. This leads to the ranking A < B < C < A, thus forming a cycle. It is better to identify cycles such as these and acknowledge that a traditional ranking does not make sense for these items. This is what HodgeRank does! User preference data is rarely consistent, so cycles like these routinely occur in the wild, for example in user ratings of movies on online platforms such as Netflix.
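
To make this concrete, here is a minimal numpy sketch of the gradient-fitting step of classical HodgeRank applied to exactly this three-item cycle. The encoding of preferences as an edge flow follows the standard construction, but the variable names are my own illustration, not taken from the preprint.

```python
# A minimal sketch of the gradient-fitting step of classical HodgeRank,
# applied to the three-item cycle described above.
import numpy as np

items = ["A", "B", "C"]
# Each entry (i, j, y) encodes "item j is preferred over item i with strength y".
edges = [(0, 1, 1.0),   # B preferred over A
         (1, 2, 1.0),   # C preferred over B
         (2, 0, 1.0)]   # A preferred over C -- closing the cycle

# HodgeRank's gradient fit: find scores s minimizing
# the sum over edges of (s_j - s_i - y)^2.
G = np.zeros((len(edges), len(items)))   # graph gradient operator
y = np.zeros(len(edges))
for row, (i, j, flow) in enumerate(edges):
    G[row, i], G[row, j] = -1.0, 1.0
    y[row] = flow

s, *_ = np.linalg.lstsq(G, y, rcond=None)
residual = y - G @ s   # the cyclic (non-gradient) part of the preference flow

print("scores:  ", dict(zip(items, np.round(s - s.min(), 3))))
print("residual:", np.round(residual, 3))
```

For perfectly cyclic data like this, the fitted scores come out equal and the entire preference flow ends up in the residual - which is precisely how HodgeRank flags that no consistent ranking exists for these items.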

As a generalization of HodgeRank, Quantum HodgeRank promises the ability to perform ranking tasks on preference data forming higher-order networks, avoiding the exponential scaling with network dimension faced by classical algorithms. The authors of the preprint argue that quantum HodgeRank cannot be dequantized (i.e. implemented efficiently using a randomized classical algorithm) in the same manner as quantum TDA algorithms for the Betti number problem. Moreover, while applications of high-dimensional Betti numbers (and even their occurrence in real datasets) remain unclear, HodgeRank represents a ranking problem with more plausible concrete applications. Thus, this looks like an exciting area to keep an eye on.

It is also interesting to speculate on whether (classical) HodgeRank or HodgeRank-inspired methods can be useful for understanding the behaviour of interacting many-body quantum systems, where it is typically intractable to sample all of the pairwise interaction elements of Hamiltonians as the system size increases, but incomplete or skewed sampling is readily available. Watch this space!

Wednesday, September 11, 2024

Asian Network Mini-School on Quantum Materials 2024

Last week I visited the University of Indonesia to present two lectures on topological photonics at the Asian Network Mini-School on Quantum Materials 2024. This is one of a series of events organized by the ICTP Asian Network and hosted in South East Asian countries. The school attracted 95 participants from Indonesian universities, the majority being advanced undergraduates or graduate students. Meetings such as these provide valuable opportunities for early-career scientists to learn about cutting-edge research areas and build collaborations with others in the region. I was impressed by the level of engagement from the audience - even though I ended my first lecture 20 minutes early, the remaining time was fully occupied by questions! Many thanks to the local organizers for putting together such an enjoyable meeting! Two more schools will be held this year, both in Thailand, on complex condensed matter systems and on magnetism and spectroscopy, with more planned for next year.

Wednesday, August 7, 2024

Flatbands: then and now

We published a review article on flatband fine-tuning and its photonic applications in Nanophotonics last week! This follows up on our earlier perspective on photonic flatbands published in APL Photonics in 2018.

How has the field changed in 6 years?

In 2018, we identified promising areas for future research where flatbands had not yet been extensively explored: coupled resonator lattices, circuit QED, and photonic crystals.

For the case of coupled resonators, the idea of synthetic dimensions (considering coupling in the frequency domain rather than space) has since emerged as a new direction for non-Hermitian and topological photonics, with the ability to fine-tune short- and long-range hoppings to realize flat band lattices using coupled optical fiber loops.

Circuit QED now sees broad interest as a platform for quantum simulation, especially for studying lattices in hyperbolic space.

Flatband photonic crystals have received a great amount of attention, driven especially by the rise of moiré materials, which exhibit flat bands at "magic" twist angles. This breakthrough in condensed matter physics inspired the development of theory (see Phys. Rev. Lett. 126, 136101 (2021), Phys. Rev. Lett. 126, 223601 (2021), and Phys. Rev. Research 4, L032031 (2022), for example), with applications to photonic crystal lasers and the shaping of free electron radiation being actively explored.

The huge growth of interest in flat bands in photonic crystals and related platforms such as metasurfaces has been quite remarkable. It is driven by the realization that, contrary to the intuition provided by simple tight-binding models, one does not need to carefully control symmetries or suppress long-range couplings to design flat bands. Rather, a sufficiently complex system supporting parameter fine-tuning is all you need! Equipped with this knowledge, our latest review is timely in that it covers the novel phenomena that can emerge in fine-tuned flat band systems.

Tuesday, July 30, 2024

Mistakes to avoid when writing the introduction to your paper

As a journal editor I read a lot of manuscripts. The introduction is often the hardest part of a paper to write, particularly for high-impact journals, where one must tread a fine line between shameless self-promotion and clearly explaining the importance of one's work in a manner appreciated by both specialists and non-specialists. Two mistakes crop up time and again:

Mass citations

"Extremely niche topic x has become a hot topic due to its potential applications [1-26]. Many novel effects have been reported [27-48]. These works have been extended to unprecedented directions [49-68], paving the way to..."
Yes, it is important to acknowledge relevant prior work on your topic. But when you cite papers en masse, it gives the impression that you don't understand which papers in your research field are really important!

One-sentence citations

"Smith et al. explored applications of extremely niche hot topic x [1]. Brown et al. innovatively demonstrated a novel effect [2]. Newton et al. paved the way to... [3]."

The opposite extreme of explaining each reference individually (but in a single sentence only, otherwise the introduction becomes too long) has the same effect: it suggests you have merely skimmed the works you cite without really understanding how they fit together or what the bigger picture is.

Don't do this! Cite one or two review articles instead, along with the specific works you are building on. Don't make the reader have to do a literature review just to tell whether your paper might be worth reading!