Wednesday, August 31, 2022

Open access in physics

In the news last week: The US government will require federally-funded research to be immediately free to read upon publication.

The Brief provides a detailed discussion of the planned changes.

A perspective from physics publishers, including the American Physical Society, can be found in a recently prepared white paper on open access publishing, written in light of growing interest in open science, including open publishing.

In Europe, one model of open publishing being promoted by several funding agencies is Plan S (see coverage in Nature last year here). Plan S remains controversial and has attracted criticism on several grounds: it limits researchers' academic freedom to choose the most appropriate publishing venue, it penalises junior researchers who may not have the funds to pay the (large) open access publication fees, and its rules adversely impact non-profit scholarly societies.

Some thoughts on open publishing as a grant-starved researcher and part-time editor for the American Physical Society:

Academic freedom in where we choose to publish is important - every research article is written with a particular audience in mind, and we should be free to (try to) publish in the journal with the best visibility to that intended audience. Many activists argue that publicly-funded research ought to be free for any member of the public to read. This goal is already largely served by preprint servers and most existing publishing agreements, which allow author-prepared versions of the article to be made freely available on the arXiv, in a repository provided by the author's institution, or even on a personal website. Even when a publisher strictly enforces an embargo, one is free to contact the authors directly via email if one really needs the journal-published version. Most authors would be more than happy to share their work.

Under open access plans the author is forced to pay to publish. Such mandates severely curtail academic freedom - publication charges restrict the choice of journal, potentially forcing authors to publish in a cheaper, low-visibility venue. This will disproportionately impact junior researchers, smaller and less well-funded institutions, and researchers from lower-income countries.

Why not just force open access journals to lower their publication fees? Aren't they simply asking referees to write reports for free and then uploading a PDF to a web server? Why does this typically cost thousands of dollars?

In defense of the seemingly-high open access article publication charges: one major contribution to the cost is (or at least should be) the salaries of the journal's editorial staff, who play an important role in selecting and vetting reviewers. At many newly-established for-profit open access publishers this task is not handled by qualified scientists. Yet without reasonable knowledge of the research subject, an editor has no idea which referees are credible.

Another important difference between the subscription and open access models is that open access publication charges must also cover all the manuscripts the journal does not publish! The more selective the journal, the more articles are considered and ultimately rejected, and the published articles need to cover the cost of processing the rejected ones. Under the subscription model, by contrast, each institution's subscription fee can be tailored to its volume of manuscript submissions.
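
To make the arithmetic concrete, here is a toy Python estimate of how the break-even publication charge grows with selectivity. The per-submission handling cost is a made-up illustrative number, not any publisher's actual figure:

    # Toy model: every accepted article must also pay for processing the
    # rejected ones, so the break-even article publication charge (APC)
    # scales inversely with the acceptance rate.
    cost_per_submission = 500.0  # assumed editorial cost per submission (USD)

    for acceptance_rate in (0.50, 0.25, 0.10):
        apc = cost_per_submission / acceptance_rate
        print(f"acceptance rate {acceptance_rate:.0%} -> break-even APC ${apc:,.0f}")

At a 10% acceptance rate - the regime highly selective journals operate in - the break-even charge is already ten times the per-submission cost, before any profit margin.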

Since rejected articles generate no revenue, for-profit publishers have an incentive to publish everything, and it quickly becomes a race to the bottom. So why not just publish everything and let researchers decide which works are most important? For a start, this is already handled (without the author paying) by preprint venues such as the arXiv. We publish in journals to make our work visible, and visibility requires selectivity so that the most important research is highlighted. We don't have time to read everything that appears in our research area. If journals no longer enforce selectivity, we will end up reading only works from authors we are familiar with - those from high-profile, well-established groups. This will end up penalising junior researchers and those from institutions without a well-established brand.

It should be emphasized that open access is just one small part of the open science movement. Rather than penalising established scholarly societies with a good track record of fostering excellent science, I think funding agencies should focus on broadening the support and dissemination of other styles of academic writing - the academic grey literature. For example, journals are biased towards publishing statistically significant results; making null results easier to disseminate as grey literature would help counter this bias, which, particularly in the life and medical sciences, has contributed to the replication crisis.

Other forms of academic writing that are not widely made available to the public are white papers and grant proposals. I think this is one area where openness could be valuable not just for working researchers, but also for historians of science as a means of tracking the evolution of different ideas and research fields.

Many funding agencies still do not even provide basic statistics on their grant programmes, such as the number of applications in a funding round and the success rate. It would be very interesting to see more detailed statistics (e.g. success rates versus research areas, perhaps aggregated over multiple years) as a way to track changes in the interests of the applicants and funding agencies.

It would also be valuable to see the full proposals funded by grant agencies, most likely after an embargo period - perhaps the length of the project. This would give the public a better idea of the kind of research they fund through their taxes - journal articles are aimed at a highly specialised audience, whereas grant proposals are usually pitched much more broadly. Moreover, early career researchers would be able to see what kinds of proposals a given agency funds, helping them judge whether their own applications will be competitive or a waste of time. Under the current system we need to ask professors directly for examples of successful applications.

Thursday, August 25, 2022

The 2nd POSTECH MINDS Workshop on Topological Data Analysis and Machine Learning

Resuming (hopefully) semi-regular posting after a few weeks spent finishing revisions to some manuscripts:

POSTECH in Korea is hosting another workshop on the intersection between topological data analysis (TDA) and machine learning at the end of September (Monday, September 26 to Thursday, September 29). From the conference website:

This workshop will bring together researchers and students working on TDA and machine learning and provide an opportunity where they present their recent research and share ideas. Further, this workshop will also provide tutorial sessions that will introduce various TDA computational tools and provide practical hands-on tutorials. This is a sequel to the workshop of the same name held in 2021 - ILJU POSTECH MINDS Workshop on Topological Data Analysis & Machine Learning, 2021.

I attended (virtually) last year's workshop and found it quite interesting. As a newcomer to the field, I found it gave a good picture of some of the cutting-edge questions being pursued in TDA. There is no registration fee, but registration is required for access to the workshop live stream.

Thursday, August 11, 2022

More on quantum error correction

Hot on the heels of the Google team's recent demonstration of quantum error correction, last week Quantinuum released a heavily-promoted preprint: Implementing Fault-tolerant Entangling Gates on the Five-qubit Code and the Color Code. This work studies the performance of quantum error correcting codes on trapped-ion quantum processors.

The authors compare the performance of logical gates implemented using two different error-correcting codes (the 5-qubit code and the colour code), without running error correction cycles. Logical CNOT gates were performed with higher fidelity than physical CNOT gates; however, "the inclusion of QEC cycles along with more careful measurements will be crucial components in a “fair” comparison between the performance of physical and logical qubits."

Error rates are still too high for the 5-qubit code to be useful; even with a 1000-fold reduction in the physical two-qubit gate errors, simulations indicate that error correction using the 5-qubit code will not give an improvement over non-error-corrected circuits! The authors speculate that this code might still be useful as a quantum memory.
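
For intuition on what break-even means here: the textbook expectation for a distance-3 code (the 5-qubit code has distance 3) is a logical error rate scaling quadratically with the physical error rate, p_logical ≈ A·p², so that encoding only helps once p drops below 1/A. Here is a minimal Python sketch of that crossover - the prefactor A is an assumed illustrative value, not a number from the paper:

    # Naive break-even estimate for a distance-3 code.
    # Assumption: p_logical ~ A * p**2, where p is the physical error rate and
    # A counts the two-fault combinations causing a logical error. A = 1e4 is
    # a made-up prefactor; real values depend on the circuit and noise model.
    A = 1.0e4

    for p in (1e-2, 1e-3, 1e-4, 1e-5):
        p_logical = A * p**2
        verdict = "encoding helps" if p_logical < p else "encoding hurts"
        print(f"p = {p:.0e}: p_logical ~ {p_logical:.0e} ({verdict})")

Under this toy scaling, break-even sits at p = 1/A; the striking point of the paper is that the simulated 5-qubit code falls well short of even this naive picture, suggesting its circuits do not realise the clean quadratic suppression assumed above.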

On the other hand, "the color code CNOT with an added FT QEC cycle should eventually outperform the standalone gate, but requires somewhat lower error rates than we currently achieve. In contrast, adding a non-FT QEC to the end of the gate operation causes the simulated logical gate to always perform worse than the physical operation in the error regimes we probed." 

One challenge with the ion trap architecture is that adding more qubits (ions) generally reduces gate fidelities due to effects such as cross-talk.

Another challenge identified in this work is that different error correction codes can have different performance depending on the quantum computing platform used and the relative strengths of different noise sources. "It is currently difficult to predict which codes and implementations of those codes may perform the best in general scenarios, and when considering anything but the simplest error models, one is usually forced to resort to numerical studies. Additionally, the exploration space is vast."
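
As a toy illustration of this noise-model dependence, here is a short Python Monte Carlo (my own sketch under a heavily simplified independent X/Z error model, not the paper's simulation) comparing two 3-qubit repetition codes under biased noise:

    # Toy Monte Carlo: which code "wins" depends on the noise bias.
    # A 3-qubit bit-flip repetition code corrects single X errors via majority
    # vote but fails for any odd number of Z errors; the phase-flip code is
    # the mirror image. Error rates px, pz are made-up illustrative values.
    import random

    def failure_rate(p_corrected, p_uncorrected, shots=100_000):
        failures = 0
        for _ in range(shots):
            corrected = sum(random.random() < p_corrected for _ in range(3))
            uncorrected = sum(random.random() < p_uncorrected for _ in range(3))
            # Majority vote fails for 2+ errors of the corrected type;
            # an odd number of errors of the other type flips the logical qubit.
            if corrected >= 2 or uncorrected % 2 == 1:
                failures += 1
        return failures / shots

    px, pz = 0.01, 0.001  # biased noise: X errors 10x more likely than Z
    print(f"bit-flip code:   {failure_rate(px, pz):.4f}")  # corrects X
    print(f"phase-flip code: {failure_rate(pz, px):.4f}")  # corrects Z

Under this X-biased noise the bit-flip code fails roughly ten times less often than the phase-flip code; swap the bias and the ranking reverses. Realistic codes and circuit-level noise make such comparisons far harder, hence the resort to large numerical studies.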

Useful error-corrected quantum circuits are still a long way off, and will require hard-won improvements to the performance of physical gates.