
[rant] Diligence in quantum computing (your nudes are safe)

5/1/2023

This is an academic rant about a recent provocative paper, unlikely to interest the typical audience of my unprofessional blog. It's uploaded here so it can be linked when inevitable future papers show negligence as profound as this one's. Shitpost-seekers, seek elsewhere!
You may have heard that quantum computer scientists are coming to break your encryption and leak your nudes. This bastardised narrative has floated around since Shor's algorithm was devised to use far-future fault-tolerant quantum computers for prime factorisation. While we might one day build quantum computers massive and precise enough to run Shor's and break RSA encryption, that day is not today. Or tomorrow. Frankly, few expect Peter Shor to be alive to see it. The experimental gap between today's quantum experiments and those necessary to run Shor's cannot be overstated.
This is not the impression imparted by the media. A quick Google search yields a sickening number of pop-science articles, corporate blog-posts and similarly insincere bullshit suggesting RSA is on its deathbed. What has created this muck? Some of the hype is knowingly disingenuous; the threat of industrial or intergovernmental espionage committed by a malevolent agent with a shiny quantum computer in their garage is a compelling sales pitch to throw at banks and funders when selling two-qubit space-heaters or a closed-source GPU rack. But some of the disproportionate hype is (err, more) earnest. Scientific papers reporting excruciatingly incremental progress are often sexed-up a little to earn their Nature publication, which is then digested in the best possible faith by unquestioning journalists into headlines like "phYSicIsTs cREaTe wORmHoLe". To the dismay of classical cryptographers, this ritual is especially evident when a researcher infinitesimally shrinks the gargantuan resource costs of Shor's algorithm, sending media outlets into a doomsday frenzy akin to the excitement that the Large Hadron Collider might destroy the Earth, or that COVID-19 vaccines might cause your blood to clot and your penis to fall off.
What should scientists do about it? I do not address the challenges of public communication; the dilemmas created by scrutinising funding bodies; the sensationalism imposed by high-impact journals; nor the pressure from academic institutions to publish in them. I have an orthogonal instruction to researchers: be diligent. Achieve the minimum rigour and comprehensiveness necessary for your work to be of any utility (hell, even interpretability) to the scientific community - that's the whole point, after all.
Pop-science outlets recently received an early Christmas gift: a controversial arXiv preprint claiming a competitor to Shor's algorithm, and the subsequent breaking of RSA encryption, could be achieved with stupefyingly meagre quantum resources:
[image of the preprint]
They claim a quantum computer with "372 physical qubits" could "challenge RSA-2048 using [their] algorithm". That's many more reliable qubits than we can prepare today, but still a massive improvement on the previous state-of-the-art estimate of needing twenty million. Unsurprisingly, the media went bananas, many indulging in the Red Scare that traditionally follows reports of scientific advances in the Indo-Pacific. But was this a scientific advance? Not according to Shor himself. It's as inconsequential (my words) as the torrents of preprints spewed onto the arXiv daily reporting tepid quantum speedups in predicting stock markets and synthesising slam poetry. I hardly care; after a PhD in NISQ algorithms, I'm numb to results of limited feasibility. My irritation is that the question of this work's significance need be asked at all - not by understandably clueless newspapers, but by other researchers.
This rant will now become slightly more technical, and more interesting to a quantum computing audience. I will speak on measures of a quantum algorithm's performance, and some fallacies frequently wielded by quantum researchers. I will demonstrate that, under the only metrics sensible for assessing RSA-breaking proposals, the main result of the aforementioned paper is: nothing.

Big-O measures intrinsic scaling, not comparative performance

Computational complexity measures expressed in Big-O communicate how some quantity of a particular algorithm - its runtime, say - scales as the input size increases. For instance, f = O(x^2) communicates that if x doubles, f quadruples (approximately, and when x is sufficiently large). This is a property intrinsic to a specific algorithm, and is not a metric for comparing the relative performance of different algorithms. For instance, if Algorithm 1 has runtime = O(x^2) while Algorithm 2 has runtime = O(x), then the latter scales better than the former but is not necessarily faster. Their ultimate runtimes may include prefactors and overheads such that Algorithm 1 is a thousand times faster in our domain of interest (e.g. x ~ 10). An over-reliance on runtime scaling as a performance measure is how we end up with galactic algorithms.
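To make the trap concrete, here's a toy Python sketch - every constant invented for illustration - of two runtime models: a "badly scaling" quadratic algorithm with a tiny prefactor, versus a "well scaling" linear algorithm saddled with a big overhead.

    # Toy runtime models (in seconds); every constant is invented.
    def runtime_alg1(x):
        return 1e-6 * x**2       # O(x^2) scaling, tiny prefactor

    def runtime_alg2(x):
        return 1e-2 * x + 5.0    # O(x) scaling, large prefactor and overhead

    for x in (10, 1000, 100000):
        print(x, runtime_alg1(x), runtime_alg2(x))

    # x = 10:     alg1 ~ 1e-4 s, alg2 ~ 5.1 s  (the "worse" scaler wins 50000-fold)
    # x = 100000: alg1 ~ 1e4 s,  alg2 ~ 1e3 s  (the linear scaling finally pays off)

At the domain of interest (x ~ 10), the quadratic algorithm wins by orders of magnitude; its better-scaling rival only overtakes it at inputs ten thousand times larger.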
In classical computer science, it is typically easy to reach input sizes where the scaling of an algorithm does dominate over prefactors and constants. Ergo, Big-O (of runtime and memory) is often a sufficient comparative measure of algorithms. But it is easily misused.

Absolute runtime is the only relevant measure for classically breaking RSA

To break RSA, you must factorise numbers quickly - before the key is changed. It is irrelevant how your factorisation algorithm scales across its full domain of inputs - after all, the input size x is bounded (x < 2048 bits). The only sensible comparative measure between two proposed RSA-breaking algorithms is how fast they run in absolute time (and perhaps which uses tractable memory, whatever) at practical sizes. Of course, if you can't simply execute and time your algorithm (because it's too slow, or requires computers of the future), producing concrete expected runtimes can be very hard, and another measure must be substituted. In classical cryptography, our computing platforms are so similar in their order-of-magnitude operation speeds, overheads and constraints that the scaling of runtimes across key sizes can make an insightful first comparison.
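As a minimal sketch (the key-rotation period is invented), the question a proposed RSA-breaker must answer is a simple threshold test at the fixed key size, not a scaling analysis:

    # Feasibility of an RSA-breaker is a threshold on absolute runtime
    # at the bounded key size. The rotation period is invented.
    KEY_BITS = 2048
    ROTATION_SECONDS = 365 * 24 * 3600   # suppose keys are rotated yearly

    def breaks_rsa(runtime_model):
        # runtime_model maps a key size in bits to a runtime in seconds
        return runtime_model(KEY_BITS) < ROTATION_SECONDS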

Be cautious when comparing classical and quantum algorithms

How can one insightfully compare a quantum algorithm to a classical one? The pitfalls of such an endeavour could fill a book. Quantum computers have very different elementary operations with extremely different constraints, absolute runtimes, overheads and performance nuances - and of course, obscenely different effects on the computational state. They have different memory capacities, different conceptions of what memory even is, and different magnitudes of money-burned-per-operation. This means comparing quantum and classical algorithms by the absolute number of operations they perform can be grossly misleading. It is even more senseless to compare the scaling of their prescribed number of operations, unless one makes a strong argument that the scaling outweighs factors at practical problem sizes.
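As a hedged illustration (the per-operation times below are rough orders of magnitude, not measurements of any particular device), merely converting equal operation counts into wall-clock time exposes the asymmetry:

    # Rough orders of magnitude only; real devices vary wildly.
    CLASSICAL_OP_SECONDS = 1e-9   # ~GHz classical logic
    QUANTUM_GATE_SECONDS = 1e-6   # ~MHz quantum gate, sans error correction

    def wall_clock(num_ops, seconds_per_op):
        return num_ops * seconds_per_op

    print(wall_clock(1e9, CLASSICAL_OP_SECONDS))   # ~1 second
    print(wall_clock(1e9, QUANTUM_GATE_SECONDS))   # ~17 minutes

A thousand-fold gap, before error correction, repetition for statistics, or any other overhead has even entered the picture.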

Even blindly trusting scaling as a measure, we often find that incorporating all the limitations of a quantum device, like the need for expensive error correction, into the scaling analysis will actually destroy its Big-O advantage over its classical competitor. We know quadratic speedup is never enough (a toy crossover calculation follows the list below). Some even believe any polynomially superior scaling will never achieve quantum advantage.
Beliefs aside, it is evident that:
  • comparing the absolute number of quantum vs classical operations can be uninsightful.
  • comparing the scaling of quantum vs classical algorithms is worse.
  • comparing the scaling of quantum vs classical fixed-size-RSA-breakers is ergo absolutely pointless.
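Here is that toy crossover, assuming a classical cost of N fast operations versus a quantum cost of sqrt(N) slow, error-corrected operations (every constant is invented; the careful literature on "quadratic speedups are not enough" does this properly):

    # Toy crossover for a quadratic speedup; every constant is invented.
    import math

    T_CLASSICAL = 1e-9   # seconds per classical operation
    T_QUANTUM   = 1e-6   # seconds per logical quantum operation...
    EC_OVERHEAD = 1e3    # ...multiplied by an error-correction overhead

    def classical_time(N):
        return N * T_CLASSICAL                          # O(N) fast operations

    def quantum_time(N):
        return math.sqrt(N) * T_QUANTUM * EC_OVERHEAD   # O(sqrt(N)) slow ones

    # Break-even where N * 1e-9 = sqrt(N) * 1e-3, i.e. N = 1e12:
    print(classical_time(1e12), quantum_time(1e12))     # ~17 minutes each

Any problem the classical machine finishes in under ~17 minutes never benefits from the quantum scaling, and less flattering constants push the break-even out to months or years.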

Absolute runtime is the main measure for quantumly breaking RSA - qubits are just a deal-breaker

The number of needed qubits can of course disqualify a proposed quantum RSA-breaker. Need twenty million fully-connected qubits to even run your algorithm at a practical scale? That's an algorithm for a distant future. But an algorithm merely being compatible with an experimentally tractable (inshallah) number of qubits doesn't mean much. If it would take longer than the age of the universe to deploy those few qubits to break RSA, then it is not practical, not a contender for RSA-breaking, and not an exciting arXiv upload. Shrinking the qubit costs of a quantum algorithm is a noble exercise but does not at all ensure its ultimate feasibility.
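In pseudo-Python (both thresholds invented), qubit count enters only as a veto; runtime still decides:

    # Qubit count can only disqualify a proposal; runtime qualifies it.
    AVAILABLE_QUBITS = 1000              # invented near-future device
    DEADLINE_SECONDS = 365 * 24 * 3600   # invented key-rotation deadline

    def is_viable(qubits_needed, estimated_runtime_seconds):
        if qubits_needed > AVAILABLE_QUBITS:
            return False   # deal-breaker: the algorithm cannot even run
        return estimated_runtime_seconds < DEADLINE_SECONDS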

Predicting the absolute runtime of a quantum algorithm is hard

If you think predicting the absolute runtime of a classical algorithm is hard, don't go anywhere near a quantum one. Given a fixed quantum circuit for a specific quantum computer, and an observable to measure upon the output state, the total runtime is influenced by:
  • the number and type of operations in the circuit.
  • the speed of those operations (read: highly heterogeneous) on the computer.
  • which operations can be performed in parallel.
  • device overheads like cooling the computer and preparing the initial state.
  • the device's noise characteristics (does the circuit prescribe our noisiest operations?)
  • meta-algorithms necessary to mitigate or correct errors.
  • the number of repetitions of the circuit necessary, as informed by noise, the observable variance, and the desired precision.
and more! If you're unlucky, it'll also depend on whether the vending machine outside the laboratory is operating.
Attempting to predict the runtime of a quantum algorithm on generic quantum hardware can be nightmarish!
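A toy cost model stitching those contributions together (every number below is invented; real devices are far messier) might look like:

    # Toy wall-clock model for repeatedly running one circuit; numbers invented.
    def total_runtime(num_gates, seconds_per_gate, parallel_fraction,
                      prep_overhead_seconds, num_repetitions):
        # serial gate time, shrunk by whatever parallelism the device permits
        circuit_seconds = num_gates * seconds_per_gate * (1 - parallel_fraction)
        per_shot_seconds = prep_overhead_seconds + circuit_seconds
        return num_repetitions * per_shot_seconds

    # e.g. 10^4 gates at 1 microsecond each, half parallelisable,
    # a 1 ms reset per shot, and 10^6 repetitions for statistics:
    print(total_runtime(1e4, 1e-6, 0.5, 1e-3, 1e6))   # ~6000 s, or ~100 minutes

And this still ignores error correction, calibration drift, and the vending machine.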

Predicting the absolute runtime of a variational quantum algorithm is harder

Variational algorithms involve a parameterised quantum circuit which must be repeatedly re-run with different parameter values. If you can forgive a shameless self-plug, see Ch1.6 of my thesis. In essence, variational algorithms shrink the number of necessary qubits to solve a problem by requiring many more circuit evaluations; trading memory for runtime. In addition to the above, their runtime is also determined by:
  • the number of observables informing the next parameter set.
  • the number of iterations to converge into the solution state.
  • the structure of the parameterised observable landscape.
  • the sensitivity of the algorithm to deviations from ideal parameter evolution.
These are non-trivially influenced by properties of the parameterised circuit and the classical solver updating the parameters.
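A back-of-envelope accounting of the circuit evaluations these factors imply (using the standard variance-over-precision-squared sampling cost; all concrete numbers invented) multiplies them together:

    # Back-of-envelope total shot count for a variational algorithm.
    def total_shots(num_iterations, num_observables, variance, precision):
        shots_per_estimate = variance / precision**2   # standard sampling cost
        return num_iterations * num_observables * shots_per_estimate

    # e.g. 10^4 optimiser iterations, 100 observables per iteration,
    # unit variance, each estimated to precision 10^-3:
    print(total_shots(1e4, 100, 1.0, 1e-3))   # 10^12 shots

At an optimistic 10^4 shots per second, that is roughly three years of wall-clock time - and every factor above is uncertain by orders of magnitude.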

Unless one executes and times their quantum algorithm (usually impossible), the expected runtimes of variational algorithms come with staggering uncertainties. Only profoundly meticulous studies, like the work of my colleague Zhenyu Cai, will attempt to predict an absolute runtime on candidate hardware. If we want to remain hardware agnostic, the most practical measure under which to qualify or compare variational algorithms boils down to:
  • the expected total number of prescribed measurements.
  • the expected total number of operations.
Measurements are typically so costly and error-inducing relative to operations that we neglect consideration of the operations altogether.

Louder for the people in the back: a variational algorithm's runtime is best predicted by the number of shots! This is how they ought to be compared: absolutely if possible, and otherwise by scaling.

It is no secret that this is also how many quantum variational algorithms are exposed as inefficient, inferior to classical schemes, or even intractable. They require many, many shots, even when not derailed by local minima and barren plateaus. On many occasions, a variational algorithm has been overhyped and its performance exaggerated because a faithful accounting of shots was not performed. It's practically a meme.

How did Yan et al.'s paper measure the performance of their so-called RSA-breaker?

The paper presents a quantum variational algorithm using QAOA (ruh roh) to speed up the slow part of Schnorr's classical algorithm for factorising integers, in order to break RSA-2048 (ruh roh). If you've survived to here, you by now know the authors must be super-duper extra-wextra careful and precise about their resource accounting. They should strive for absolute time; failing that, the number of measurements; and failing that, the scaling of the number of measurements or operations. So how exactly did they predict the runtime? Let's search together with our trusty ctrl-F tool.
  • 27 matches for "time"! Alas, no runtimes are estimated - they discuss "time-consuming" subroutines, or that their 10-qubit test evaluated the "QAOA circuit 30000 times". If only we knew how the necessary number of those evaluations scaled!
  • 13 matches for "scale"! Alas, they speak of the number of qubits, which we argued is a meaningless indication of the algorithm's feasibility.
  • 79 matches for "depth", bespeaking a strong attention to keeping the circuit small and ergo the severity of errors low. Alas, the total number of prescribed operations (a step toward runtime) is a product of the circuit depth and the number of repetitions (obviously). What's the latter?
  • 1 match for "repetition" - oh? They report 30k necessary for the 10-qubit test, but nothing of the 372-qubit case to break RSA - ah. Well it's the total number of measurements we really want.
  • 1 match for "measurement" - to explain a symbol in a figure.
  • 0 matches for "shot" - oh dear.
There's no suggestion whatsoever that their algorithm can break RSA - nor even that they can compete with Shor's!

The abstract triumphantly celebrates:
"We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance"

The conclusion modestly cautions:
"It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA".

So what was all the fuss about?

The utility of scientific publication

All in all, I do not think this work is a diabolical ruse to scare the CCP into funding post-quantum cryptography. The paradigm of taking a hard classical problem, throwing away all structure, encoding it into a cost function evaluable by a quantum computer, unleashing QAOA upon it, and being sloppy with the resource accounting, is certainly an established ritual in quantum computing literature. The paper presents an interesting idea for repurposing variational minimisation for integer factorisation. But to do so, it has invoked a subroutine with a wildly unpredictable performance, ergo producing an RSA-breaker of wildly unpredictable feasibility. And yet, that alone is okay. I too am guilty of producing uncharacterisable variational algorithms. Science does not always take giant, rigorous leaps forward. It more often takes many incremental steps, or steps to the side, or small steps backwards - there is time to waste, and plenty of quantum computing grant money to burn.
My qualm is that such a sensational tone was adopted in a manuscript with so glaring an omission in a notoriously contentious context. I would humbly propose that a manuscript...
  • with only ~5 pages outside the appendix
  • with a section called "quantum resource estimation"
  • about the most prolific and controversial application of quantum computers
  • claiming to achieve something monumental with an astonishingly low qubit count
  • threatening the fabric of classical cryptography
  • which was certain to cause a media whirlwind and hyperbolic doomsday proclamations
  • prepared years after the hype-wave of variational algorithms had scaffolded measures of their performance...
should make an honest attempt to predict the runtime, or the number of shots!

If a diligent shot accounting reveals QAOA to be anomalously performant for this cost function, and the authors' algorithm a legitimate threat to RSA-2048, I will eat my hat - or feed it to Aaronson.