This is an academic rant about a recent provocative paper, unlikely to interest the typical audience of my unprofessional blog. It's uploaded here so it can be linked when inevitable future papers show as profound a negligence in their presentation. Shitpost-seekers, seek elsewhere!

You may have heard that quantum computer scientists are coming to break your encryption and leak your nudes. This bastardised narrative has floated around since Shor's algorithm was devised to use far-future fault-tolerant quantum computers for prime factorisation. While we might one day build quantum computers massive and precise enough to run Shor's and break RSA encryption, that day is not today. Or tomorrow. Frankly, few expect Peter Shor to be alive to see it. The experimental gap between today's quantum experiments and those necessary to run Shor's cannot be overstated.

This is not the impression imparted by the media. A quick Google search yields a sickening number of pop-science articles, corporate blog posts and similarly insincere bullshit suggesting RSA is on its deathbed. What has created this muck? Some of the hype is knowingly disingenuous; the threat of industrial or intergovernmental espionage committed by a malevolent agent with a shiny quantum computer in their garage is a compelling sales pitch to throw at banks and funders when selling two-qubit space-heaters or a closed-source GPU rack. But some of the disproportionate hype is (err, more) earnest. Scientific papers reporting excruciatingly incremental progress are often sexed up a little to earn their Nature publication, which is then digested in the best possible faith by unquestioning journalists into headlines like "phYSicIsTs cREaTe wORmHoLe".
To the dismay of classical cryptographers, this ritual is especially evident when a researcher infinitesimally shrinks the gargantuan resource costs of Shor's algorithm, sending media outlets into a doomsday frenzy akin to the excitement that the Large Hadron Collider might destroy the Earth, or that COVID-19 vaccines might cause your blood to clot and your penis to fall off.

What should scientists do about it? I do not address the challenges of public communication; the dilemmas created by scrutinising funding bodies; the sensationalism imposed by high-impact journals; nor the persuasion by academic institutions to publish in them. I have an orthogonal instruction to researchers: be diligent. Achieve the minimum rigour and comprehensiveness necessary for your work to be of any utility (hell, even interpretability) to the scientific community - that's the whole point, after all.

Pop-science outlets recently received an early Christmas gift: a controversial arXiv preprint claiming that a competitor to Shor's algorithm, and the subsequent breaking of RSA encryption, could be achieved with stupefyingly meagre quantum resources. They claim a quantum computer with "372 physical qubits" could "challenge RSA-2048 using [their] algorithm". That's many more reliable qubits than we can prepare today, but still a massive improvement on previous state-of-the-art estimates of needing twenty million. Unsurprisingly, the media went bananas, many indulging in the Red Scare traditional of reports on scientific advances in the Indo-Pacific. But was this a scientific advance? Not according to Shor himself. It's as inconsequential (my words) as the torrents of preprints spewed onto the arXiv daily reporting tepid quantum speedups in predicting stock markets and synthesising slam poetry. I hardly care; after a PhD in NISQ algorithms, I'm numb to results of limited feasibility.
My irritation is that the question of this work's significance need be asked at all - not by understandably clueless newspapers, but by other researchers.

This rant will now become slightly more technical, and more interesting to a quantum computing audience. I will speak on measures of a quantum algorithm's performance, and some fallacies frequently wielded by quantum researchers. I will demonstrate that, under the only metrics sensible for assessing RSA-breaking proposals, the main result of the aforementioned paper is: nothing.

## Big-O measures intrinsic scaling, not comparative performance

Computational complexity measures expressed in Big-O communicate the scaling of some measure of a particular algorithm as the input size increases. For instance, f = O(x^2) communicates that if x doubles, f quadruples (approximately, and when x is sufficiently large). This is a property intrinsic to a specific algorithm, and is not a metric for comparing the relative performance of different algorithms. For instance, if Algorithm 1 has runtime O(x^2) while Algorithm 2 has runtime O(x), then the latter scales better than the former but is not necessarily faster. Their ultimate runtimes may include prefactors and overheads such that Algorithm 1 is a thousand times faster in our domain of interest (e.g. x ~ 10). An over-reliance on runtime scaling as a performance measure is how we end up with galactic algorithms. In classical computer science, it is typically easy to reach input sizes where the scaling of an algorithm does dominate over prefactors and constants. Ergo, Big-O (of runtime and memory) is often a sufficient comparative measure of classical algorithms. But it is easily misused.

## Absolute runtime is the only relevant measure of classically breaking RSA

To break RSA, you must factorise numbers fast enough - that is, before the key is changed. It is irrelevant how your factorisation algorithm scales across its full domain of inputs - after all, the input size x is bounded (x ≤ 2048 bits).
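To make the prefactor point concrete, here's a toy sketch. The constants are entirely made up for illustration (no real algorithms are being benchmarked): an O(n^2) routine with a tiny prefactor versus an O(n log n) routine buried under overheads.

```python
import math

# Toy illustration (hypothetical prefactors, not measurements): a worse-scaling
# algorithm with a tiny constant can beat a better-scaling algorithm with a
# large constant at the problem sizes we actually care about.

def runtime_quadratic(n, prefactor=1e-9):
    # e.g. a tight, cache-friendly implementation
    return prefactor * n**2

def runtime_linearithmic(n, prefactor=1e-3):
    # e.g. an asymptotically superior algorithm drowned in overheads
    return prefactor * n * math.log2(n)

for n in (10, 1_000_000, 100_000_000):
    winner = "quadratic" if runtime_quadratic(n) < runtime_linearithmic(n) else "linearithmic"
    print(f"n = {n:>11,}: {winner} is faster")
```

With these invented constants, the quadratic algorithm wins until n grows into the tens of millions - exactly the regime-dependence that a raw Big-O comparison hides.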
The only sensible comparative measure between two proposed RSA-breaking algorithms is how fast they run in absolute time (and perhaps which uses tractable memory, whatever) at practical sizes. Of course, if you can't simply execute and time your algorithm (because it's too slow, or requires computers of the future), producing concrete expected runtimes can be very hard, and must be substituted with another measure. In classical cryptography, our computing platforms are so similar in their order-of-magnitude operation speeds, overheads and constraints that the scaling of runtimes across key sizes can make an insightful first comparison.

## Be cautious when comparing classical and quantum algorithms

How can one insightfully compare a quantum algorithm to a classical one? The pitfalls of such an endeavour could fill a book. Quantum computers have very different elementary operations with extremely different constraints, absolute runtimes, overheads and performance nuances - and of course, obscenely different effects on the computational state. They have different memory capacities, different conceptions of what memory even is, and different magnitudes of money-burned-per-operation. This means comparing quantum and classical algorithms by the absolute number of operations they perform can be grossly misleading. It is even more senseless to compare the scaling of their prescribed numbers of operations, unless one makes a strong argument that the scaling outweighs all other factors at practical problem sizes.

Even blindly trusting scaling as a measure, we often find that incorporating all the limitations of a quantum device, like the need for expensive error correction, into the scaling analysis will actually destroy its Big-O advantage over its classical competitor. We know a quadratic speedup is never enough.
Some even believe any polynomially superior scaling will never achieve quantum advantage.

Beliefs aside, it is evident that:

- comparing the absolute number of quantum vs classical operations can be uninsightful.
- comparing the *scaling* of quantum vs classical algorithms is worse.
- comparing the scaling of quantum vs classical fixed-size RSA-breakers is ergo absolutely pointless.
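To see why a quadratic speedup is so fragile, consider a back-of-the-envelope crossover estimate. Every figure below - the classical operation rate, the error-corrected logical gate rate, the gate cost per quantum iteration - is an assumption chosen purely for illustration, not a measurement of any real device:

```python
import math

# Assumed (illustrative!) figures: a classical machine doing 1e9 simple ops
# per second, versus an error-corrected quantum computer executing 1e3
# logical gates per second, where each of the sqrt(N) Grover-like
# iterations costs ~1e5 logical gates.
CLASSICAL_OPS_PER_SEC = 1e9
LOGICAL_GATES_PER_SEC = 1e3   # assumed post-error-correction gate rate
GATES_PER_ITERATION = 1e5     # assumed circuit cost of one iteration

def classical_seconds(n):
    # brute-force: n operations
    return n / CLASSICAL_OPS_PER_SEC

def quantum_seconds(n):
    # quadratic speedup: sqrt(n) iterations, each a deep circuit
    return math.sqrt(n) * GATES_PER_ITERATION / LOGICAL_GATES_PER_SEC

# Solving classical_seconds(n) == quantum_seconds(n):
#   n / c = sqrt(n) * d / q   =>   sqrt(n) = c * d / q
crossover_n = (CLASSICAL_OPS_PER_SEC * GATES_PER_ITERATION / LOGICAL_GATES_PER_SEC) ** 2
years_at_crossover = classical_seconds(crossover_n) / (60 * 60 * 24 * 365)
print(f"crossover at N ≈ {crossover_n:.0e}, taking ≈ {years_at_crossover:.0e} years")
```

Under these toy assumptions, the quantum machine only overtakes the classical one at problem sizes where either machine needs hundreds of thousands of years - the flavour of result behind the "quadratic speedup is never enough" claim, though serious analyses are far more careful than this sketch.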
## Absolute runtime is the main measure of quantumly breaking RSA - qubits are just a deal-breaker

The number of needed qubits can of course disqualify a proposed quantum RSA-breaker. Need twenty million fully-connected qubits to even run your algorithm at a practical scale? That's an algorithm for a distant future. But an algorithm being compatible with an experimentally tractable (inshallah) number of qubits clearly doesn't mean much. If it would take longer than the age of the universe to deploy those few qubits to break RSA, then it is not practical, not a contender for RSA-breaking, and not an exciting arXiv upload. Shrinking the qubit costs of a quantum algorithm is a noble exercise, but does not at all ensure its ultimate feasibility.

## Predicting the absolute runtime of a quantum algorithm is hard

If you think predicting the absolute runtime of a classical algorithm is hard, don't go anywhere near a quantum one. Given a fixed quantum circuit for a specific quantum computer, and an observable to measure upon the output state, the total runtime is influenced by:

- the number and type of operations in the circuit.
- the speed of those operations (read: highly heterogeneous) on the computer.
- which operations can be performed in parallel.
- device overheads like cooling the computer and preparing the initial state.
- the device's noise characteristics (does the circuit prescribe our noisiest operations?).
- meta-algorithms necessary to mitigate or correct errors.
- the number of repetitions of the circuit necessary, as informed by noise, the observable variance, and the desired precision.
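As a crude illustration of how these factors compound, here is a sketch of a wall-clock estimate for one such experiment. Every parameter - the gate and reset durations, the mitigation overhead, the shot-count heuristic - is a hypothetical placeholder, not a property of any real device:

```python
import math

# Crude sketch (all parameters hypothetical) of how circuit depth, per-shot
# overheads, and the statistics of the measured observable compound into a
# total wall-clock runtime.
def estimated_runtime_seconds(
    depth,                   # circuit depth after accounting for parallelism
    gate_seconds=1e-6,       # assumed duration of one layer of gates
    reset_seconds=1e-3,      # assumed per-shot overhead (state prep, readout)
    variance=1.0,            # variance of the measured observable
    target_error=0.01,       # desired standard error on its expectation value
    mitigation_overhead=10,  # assumed sampling blow-up from error mitigation
):
    # Shots needed so the standard error ~ sqrt(variance / shots) meets the
    # target, inflated by the error-mitigation sampling overhead.
    shots = math.ceil(variance / target_error**2) * mitigation_overhead
    return shots * (depth * gate_seconds + reset_seconds)

print(estimated_runtime_seconds(depth=1000))
```

Even this toy model ignores heterogeneity in gate speeds, noise-dependent repetition counts, and error-correction meta-algorithms - which is rather the point of the list above.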
Attempting to predict the runtime of a quantum algorithm on generic quantum hardware can be nightmarish!

## Predicting the absolute runtime of a