Thoughts on PQC
Robert J. Hansen
rjh at sixdemonbag.org
Mon Apr 13 18:47:47 CEST 2026
(Ethan Siegel, a Ph.D in astrophysics and science communicator who does
a lot of explanations of quantum computing, is included here on the bcc
list so as not to reveal his email address. If he has any thoughts to
contribute to the discussion, I'll relay them on to the list.)
Some disconnected thoughts on post-quantum cryptography. For some reason
many people on this list seem to think I have an opinion worth listening
to on the subject: I really don't. Don't confuse someone who is
significantly less lost than you are with someone who knows how to get
home. I am, at best, the helpful stranger at the bus terminal warning
you you're getting on the wrong bus.
1. MOST OF THE DISCUSSION IS IN DEEPLY BAD FAITH.
Professionally speaking, I have a reputation as being a little bit of a
cold bastard. I genuinely try to be supportive of my co-workers and
encourage them to try things and make mistakes, but when outside vendors
come in and use all kinds of jargon and newfangled words to describe
things, I have been known to be, well...
"I have a Master's in computer science, decades of work
experience, I've spoken at DEF CON and Black Hat, I
invented language-theoretic security. I am not an idiot.
I have absolutely no idea what you're talking about here.
I don't think anyone else in the room does, either.
Please explain that again."
I have to break that out for the good of my engineering team about, oh,
one time in twenty vendor show-and-tells. When it comes to anything
involving the word 'quantum', it's closer to half the time.
Last year I was asked to weigh in on the claims of a quantum
cryptography startup that had a stock price of $970/share. The CEO went
on the record saying he expected RSA-2048 to fall to quantum
cryptanalysis within two years. When I saw the YouTube clip I did some
back-of-the-envelope math and pointed out to everyone within earshot
that, under optimistic assumptions about ensemble size and error
correction, we'd need to see doublings of quantum computational
capability every 38 days from here on out for the next two years. I then
wondered aloud what it was he knew that made him so confident that
38-day doublings would happen for two years. Extraordinary claims
require extraordinary evidence, and all that.
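That kind of back-of-the-envelope math is easy to reproduce. The numbers
below are my own illustrative assumptions, not the original figures:
treat "capability" as a single aggregate quantity and suppose today's
machines are roughly six orders of magnitude short of what breaking
RSA-2048 would need.

```python
import math

# Illustrative assumptions (not measured figures): take "capability"
# as one aggregate number and suppose breaking RSA-2048 needs roughly
# six orders of magnitude more of it than exists today.
capability_gap = 600_000        # assumed needed/current capability ratio
deadline_days = 2 * 365         # the CEO's two-year claim

# How many doublings close the gap, and how often one would have to
# land to hit the deadline.
doublings_needed = math.log2(capability_gap)
days_per_doubling = deadline_days / doublings_needed

print(f"{doublings_needed:.1f} doublings, one every "
      f"{days_per_doubling:.0f} days")
```

With these assumed numbers you land on roughly one doubling every 38
days, which is the sustained pace such a claim quietly requires.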
Their stock price is now $12.66/share. The market got wise to them.
2. MOST OF THE DISCUSSION IGNORES HOW SCIENCE ACTUALLY WORKS.
During the Cold War there was a rush to develop high-yield nuclear
weapons ("H-bombs": contrary to popular opinion, the "H-bomb"
designation was never meant to apply to fusion weapons, but any
high-yield nuclear device).
There were several different competing ideas for how to get there. The
Russians pursued an architecture called sloika; we pursued a couple of
different routes before settling on staged detonation. It was a fairly
straightforward piece of engineering, complicated by the fact we needed
some parameters we just didn't have and didn't yet have the computing
power (or the quantum theory!) to compute.
So, the United States government went off to the South Pacific and built
the world's largest, most expensive science experiment up to that date.
It was called IVY MIKE. It was not, as some people think, the first
H-bomb. It wasn't a bomb because it was about the same size as a fairly
large industrial plant. The Soviets mocked it as a "thermonuclear
installation".
IVY MIKE went boom. We collected the necessary data. We very quickly
used the experimental results to start building staged detonation bombs
small enough to fit onto ICBMs.
A lot of people like to tell you that's what's happening here: yes, the
existing quantum computers are IVY MIKE-like monstrosities completely
unsuited for real work, but the real devices are just around the corner...
... and it's all complete hokum. Each new quantum computer is its own
IVY MIKE installation. We are not at the stage where we know how to make
quantum computers, we just need a couple of data points to finalize our
design. We are at the stage where we're struggling just to do things
that aren't trivial.
Scott Aaronson, a quantum computational nerd of immense nerditude -- mad
respect to him -- says that the proper analogy is someone in 1943
saying, "wow, those guys are close to a nuclear chain reaction." Okay,
great, big deal, what does that get you? -- well, if Scott is to be
believed, it gets you a Trinity test two years later in the summer of 1945.
No. That's not how it works.
Trinity depended on massive advances in plutonium breeding and
enrichment. We had to invent new electrical switches (krytrons) to very
precisely send electrical signals to entirely new explosive detonators
(exploding bridgewires) embedded in entirely new explosives (castables)
that formed complex geometries never before done in explosives
(explosive lenses) that required entirely new branches of mathematics to
be invented (hydrodynamics). The engineering infrastructure to support
the Bomb was *huge*.
For right now, we're building that infrastructure once per quantum
experiment. It's ... expensive, to say the least. It does not scale.
Some ideas may carry over from one installation to the next, but *how
those ideas are executed* is essentially reinvented de novo.
Everyone remembers how awesome it was when astronomers released the
first visible-light images of a black hole, right? As difficult as it
was to gain the raw data -- a heroic work of astronomy if ever there was
one -- just as difficult, if not more so, was assembling the teams of
software engineers needed to tease information out of these petabytes of
collected data.
That's where we are with quantum computing right now. For every new
installation it's a massive investment of time and money, with very
little carrying over. We build one, we say we learned something, we move
on, reinventing and reimplementing all over again.
Sooner or later the economics are going to catch up. This is not
sustainable unless an actual economic case can be made, unless real
return-on-investment can be demonstrated.
3. THERE IS THOROUGHLY TOO MUCH SPECIAL PLEADING GOING ON.
Asking an undergraduate computer science student to enumerate the
characteristics of an algorithm is a quick way to discover what textbook
they learned algorithms from. Maybe they used CLRS or maybe Knuth's
TAOCP or ... whatever. It's like identifying Christians by asking them
to enumerate the Ten Commandments; by listening carefully to how they're
enumerated you can guess at the denomination.
But, like the Ten Commandments, the good textbooks all agree on the
important bits. One of the defining rules of an algorithm is IT MUST BE
EFFECTIVE -- put in screaming caps because this must never be forgotten.
If you forget that requirement, well, hey, I have an algorithm that
breaks RSA-4096 keys in seconds. It always gives you the wrong answer,
but why should that be an obstacle?
That's why we require effectiveness in algorithms. If a process is not
effective at its task, it is not an algorithm. With me?
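The joke about an instant-but-useless RSA-breaker can be written out
literally. This is my own illustrative sketch, not anyone's real code:

```python
import secrets

def crack_rsa4096(modulus: int) -> tuple[int, int]:
    """'Factor' a 4096-bit RSA modulus in constant time.

    Definite and fast: it always terminates immediately. Not
    effective: the returned "factors" are almost surely wrong, so
    despite the speed this is not an algorithm for factoring.
    """
    p = secrets.randbits(2048) | 1   # a random odd "factor"
    return p, modulus // p           # wrong with overwhelming odds
```

Blazing fast, and useless: the product of the two returned values will
not equal the modulus except with astronomically small probability.
Drop the effectiveness requirement and this "breaks" RSA-4096 in
microseconds.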
Last year (or maybe in 2024) Google made this massive hue and cry over
how they had finally proved quantum supremacy -- or maybe it was quantum
advantage -- their press releases seemed a bit ambivalent before
settling on quantum supremacy. (Probably because it sounds cooler.) But
when I dove into exactly what algorithm they'd demonstrated quantum
supremacy for, I was underwhelmed.
They proved they could wire up random quantum circuits.
That's it. That's analogous to me saying I can get random values in
memory by reading them out right after power-on. Okay, great, but *why*?
Why would anyone be interested in that? How is this in any way a
breakthrough?
When I realized they were saying "we can't wire up a circuit of our own
specification, but we can wire up a completely random one faster than a
classical computer can!", wow. I remember the moment vividly, because I
threw the paper across my cubicle and bitterly quoted the television
show _Firefly_:
"Well, Google, my days of not taking you seriously are certainly coming
to a middle."
If you dig deep into a lot of these papers showing quantum supremacy,
you'll quickly discover you're not entirely sure whether the thing
they're doing is even an algorithm. And if it's not an algorithm, wow,
doesn't that just put their wild claims in a new light?
4. PRESS RELEASES AREN'T PEER REVIEW, DAMN IT.
This one should be self-explanatory.
Nobody at the Trinity test site had to go find reporters and persuade
them they'd done something amazing. Yet, these people pushing quantum
computing have astonishingly well-funded press offices and they're very
effective at convincing journalists to pay attention.
I am sick and tired of people saying "... but the paper's on arXiv!".
That's the science way of saying you self-published your fanfic on
Archive Of Our Own. arXiv is AO3 for science. I'm very happy there's a
preprint of a paper on arXiv: now let me know as soon as David Deutsch
or Scott Aaronson or Lee Smolin notices it and comments.
5. WHAT DOES THIS FEEL LIKE?
It feels like string theory all over again.
I realized string theory was an unscientific bunch of hokum back in
2005, when I realized string theory could not tell us which possible
universe we were in. It predicted a landscape of some 10^500 possible
universes; ours was in there somewhere, they were just sure of it.
I thought about it some and decided I put the likelihood of the
Judeo-Christian concept of God being accurate at about 10^-20. Believing
God said "deus vult!" is unscientific, but waxing ecstatic over a theory
whose odds of picking out our universe are one in 10^500 was somehow the
cutting edge of science?
It was then I knew string theory was an interesting idea wrapped up in a
spectacular PR campaign in pursuit of all the grant money everywhere in
the world. Wasn't for me, and I looked skeptically on it afterwards.
I get the same feeling from quantum computation. It's not hokum: it's
based on some really interesting physical possibilities wrapped up in
engineering problems we have no idea how to solve yet. It's like if
someone gave Napoleon the plutonium pit from the Trinity bomb: cool
beans, dude, but he's got a lot of catching up to do before anything
useful can be done with it. He would find more utility from its fifteen
watts of waste heat than from anything else.
6. THE BOTTOM LINE.
Quantum computing is not a fraud on the public, but there are a lot of
businesses using quantum computing claims to perpetrate frauds on the
public. The infrastructure is simply *not there* and *we have no idea
how to make it there*.
My own personal guess -- and it's a wild guess, please don't mistake
this for a scientific estimate -- is there's a 1% chance of a really
major advance in real-world quantum computing by 2035.
That's still an uncomfortable number. It's worth some long, calm, sober
deliberation.
But don't forget there's a 99% chance this current hype train isn't
going to pan out. That, too, should enter your deliberations.
More information about the Gnupg-users mailing list