Post-Quantum Cryptography: Why You Should Start Before Quantum Computers Arrive
Post-quantum cryptography is no longer future theory—it’s a present-day migration challenge. Learn why organizations must start now, how PQC works, and what standards like ML-KEM and ML-DSA mean in practice.
Assumptions used here: mixed technical and non-technical readers; public-source status checked as of 29 April 2026; conversational, blog-style voice.
Executive summary
Post-Quantum Cryptography, or PQC, is the big cryptographic replacement programme for the parts of modern security that a powerful quantum computer would hurt the most: public-key encryption, key exchange, and digital signatures. The core reason to care now is not that a cryptographically relevant quantum computer definitely exists today; it is that encrypted data can be collected now and cracked later, and NIST says full migration in real systems can easily take 10–20 years. That combination makes delay expensive, even if “Q-day” itself stays fuzzy. NIST finalised the first three U.S. PQC standards in August 2024 and explicitly urged organisations to start integrating them immediately rather than waiting for perfect certainty.
The short version is: start now, not after the perfect standard arrives. For most organisations, the first practical wave is ML-KEM for key establishment and ML-DSA for signatures, with SLH-DSA kept in reserve where diversity matters. HQC was selected in March 2025 as a backup KEM based on different maths, and FN-DSA, the FALCON-derived signature standard, is still in development. The sensible migration path is inventory first, hybrid pilots second, phased cutover third. That is less glamorous than quantum headlines, but it is the part that actually changes risk.
Why PQC matters and how the threat actually works
The threat model is pretty straightforward, even if the maths underneath it is not. Shor’s algorithm showed that a quantum computer could solve integer factorisation and discrete logarithms in polynomial time, which is devastating for RSA, Diffie–Hellman, and elliptic-curve cryptography. That hits the public-key layer used in TLS handshakes, certificates, VPNs, software signing, document signing, and plenty of identity systems. NIST’s public explainer adds the uncomfortable operational twist: even before such machines exist, adversaries can harvest encrypted traffic today and keep it for future decryption. If a secret needs to stay secret for many years, the risk has already started.
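A useful way to turn that into a planning question is the rule of thumb usually credited to Michele Mosca; the inequality below is his framing, not something from NIST's explainer:

$$
x + y > z
$$

Here $x$ is how many years your data must stay confidential, $y$ is how many years migration will take, and $z$ is how many years until a cryptographically relevant quantum computer exists. If the left side exceeds the right, the exposure has effectively already begun; with NIST's 10–20 year migration estimate standing in for $y$, even a modest $x$ makes the sum uncomfortable.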
Grover’s algorithm is different. It gives a quadratic, not exponential, speed-up for brute force search. So quantum computers do not “break everything” evenly. NIST’s FAQ is actually quite clear on this point: symmetric cryptography and hashes are not in the same trouble as RSA or ECC, and AES-192/AES-256 are expected to remain comfortable for a very long time; even AES-128 is not suddenly useless because Grover is hard to parallelise and expensive in realistic hardware terms. So, no, the first job is usually not “replace AES”. The weak point is usually the public-key wrapper around the symmetric crypto you already trust.
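For readers who like the asymmetry stated quantitatively: Shor gives a polynomial-time attack on factoring and discrete logarithms, while Grover only reduces an exhaustive search over an $n$-bit key from about $2^n$ classical guesses to on the order of $2^{n/2}$ quantum iterations:

$$
O\!\left(2^{n}\right) \;\xrightarrow{\text{Grover}}\; O\!\left(2^{n/2}\right)
$$

So AES-128 drops from $2^{128}$ to roughly $2^{64}$ largely serial quantum steps, and AES-256 to $2^{128}$, which is why doubling the symmetric key size is generally treated as an adequate answer on that side of the house.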
That is why PQC standardisation has focused on KEMs and signatures. A KEM, in plain English, is a way for two parties to agree a shared secret over a public channel; they then use ordinary symmetric algorithms for the heavy lifting. So when people say “Kyber” or now “ML-KEM”, what they often mean in practice is “the new quantum-resistant way to do the handshake before AES takes over”. It sounds a bit abstract, sorry, but operationally it is very concrete.
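To make the KEM flow concrete, here is a minimal sketch using the liboqs-python bindings (`pip install liboqs-python`). The mechanism name `ML-KEM-768` assumes a recent liboqs build; older builds expose it as `Kyber768` instead. This is illustrative, not a production handshake.

```python
# Minimal ML-KEM key-establishment sketch using liboqs-python.
# Assumes a liboqs build with ML-KEM enabled.
import oqs

with oqs.KeyEncapsulation("ML-KEM-768") as server, \
     oqs.KeyEncapsulation("ML-KEM-768") as client:
    # Server generates a keypair and publishes the public key.
    server_public_key = server.generate_keypair()

    # Client encapsulates: derives a shared secret plus a ciphertext
    # that only the server's secret key can open.
    ciphertext, client_secret = client.encap_secret(server_public_key)

    # Server decapsulates the ciphertext and recovers the same secret.
    server_secret = server.decap_secret(ciphertext)

    assert client_secret == server_secret
    # From here, ordinary symmetric crypto (e.g. AES-GCM) does the
    # heavy lifting, keyed from this shared secret via a KDF.
```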
The algorithm families in plain English
The field has a few main families, each with its own personality. Lattice-based systems are the current winners because they hit a workable middle point on speed, security evidence, and wire sizes; ML-KEM and ML-DSA both live here. Code-based systems lean on the hardness of decoding noisy codewords and have a long history dating back to McEliece in 1978; they can be very conservative, but often with ugly public-key sizes. Hash-based signatures are built from hash functions plus Merkle-tree machinery; they are wonderfully boring in the best way, but signatures can get huge. Multivariate schemes hide trapdoors inside systems of quadratic equations; they often chase short signatures and fast operations, but the family has had a rough time, especially after Rainbow was broken. Isogeny-based schemes use hard problems over supersingular elliptic curves; they are mathematically elegant and can be very compact, but SIKE’s collapse in 2022 badly dented confidence, even though signature research such as SQIsign is still alive.
| Family | Security basis | Typical key or ciphertext profile | Performance feel | Maturity in April 2026 | Typical use-cases |
|---|---|---|---|---|---|
| Lattice-based | LWE, module-LWE, SIS, NTRU-type lattice problems | ML-KEM public keys are 800–1568 bytes and ciphertexts 768–1568 bytes; ML-DSA public keys are 1312–2592 bytes and signatures 2420–4627 bytes | Usually strong software performance and balanced sizes | High: first-wave deployment path, with ML-KEM and ML-DSA standardised; FN-DSA still pending | General-purpose key exchange and signatures |
| Code-based | Syndrome decoding and related coding-theory problems | Ranges are wide: Classic McEliece public keys are about 261 KB to 1.36 MB with tiny ciphertexts; HQC public keys are about 2.2–7.2 KB with ciphertexts about 4.4–14.4 KB | Maths is mature; network overhead can be the headache | Medium-high: long security history, but only now entering standards as backup KEM territory | Backup KEMs, high-assurance use, cases where large public keys can be cached |
| Hash-based | Hash preimage and collision resistance plus tree constructions | Public keys are tiny, around 32–64 bytes, but SLH-DSA signatures range from 7,856 to 49,856 bytes | Conservative, often slower and bulkier on the wire | High for assumptions; medium for everyday deployment because of size pain | Conservative signatures, firmware, roots of trust, “backup” signature posture |
| Multivariate | Trapdoored multivariate quadratic maps | Highly variable; classic UOV-style schemes often have large public keys, while newer variants like MAYO try to shrink them sharply; signatures are usually more compact than hash-based ones | Often attractive on paper, but cryptanalysis churn is real | Low-medium: still in NIST’s additional-signatures track, but Rainbow’s 2022 break hurt confidence | Candidate signatures where compactness or speed matters, not mainstream deployment yet |
| Isogeny-based | Finding hard isogenies between supersingular elliptic curves | Main attraction is compact artefacts; current viable work is mostly on signatures, not KEMs | Typically slower and implementation-heavy | Low: SIKE was broken, though SQIsign remains a round-two signature candidate | Advanced research, compact signature candidates, niche future possibilities |
What this table really says, underneath the bytes and buzzwords, is that lattices won the first round because they are the least inconvenient. Hash-based schemes are the conservative spare tyre. Code-based schemes provide diversity insurance. Multivariate and isogeny-based schemes are still important academically and may yet matter commercially, but betting your near-term migration on them would be, well, adventurous.
Where NIST stands right now
NIST’s public PQC process started in 2016, received 82 submissions in 2017, accepted 69 as complete first-round candidates, narrowed those through multiple rounds, and published the first three final Federal Information Processing Standards on 13 August 2024: FIPS 203 for ML-KEM, FIPS 204 for ML-DSA, and FIPS 205 for SLH-DSA. NIST characterises FIPS 203 as the primary standard for general encryption, FIPS 204 as the primary standard for signatures, and FIPS 205 as a backup signature option based on different maths.
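For a feel of what "primary standard for signatures" means at the API level, here is a minimal ML-DSA sign-and-verify sketch, again via liboqs-python; as with the KEM example above, the mechanism name `ML-DSA-65` assumes a recent liboqs build (older ones call it `Dilithium3`), and the message is a made-up placeholder.

```python
# Minimal ML-DSA sign/verify sketch using liboqs-python.
# Illustrative only; the mechanism name depends on the liboqs build.
import oqs

message = b"firmware-image-v1.2.3"  # placeholder payload

with oqs.Signature("ML-DSA-65") as signer, \
     oqs.Signature("ML-DSA-65") as verifier:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)   # roughly 3.3 KB at this level
    assert verifier.verify(message, signature, public_key)
```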
As of public information available on 29 April 2026, HQC is the key extra move in the KEM track. NIST selected it on 11 March 2025 as a backup to ML-KEM and said at the time that a draft standard was planned in about a year, with finalisation expected in 2027. FN-DSA, the FALCON-derived signature standard, is still listed on NIST’s PQC page as “in development”; a NIST status update from September 2025 said the initial public draft was basically written and awaiting approval. In parallel, NIST’s separate “additional digital signature schemes” process moved 14 candidates to a second round in October 2024, including non-lattice options such as MAYO, UOV, and SQIsign.
One underappreciated detail: “final standard” does not mean “never touched again”. Both FIPS 203 and FIPS 204 now carry errata notes for future updates, and NIST is still filling in the surrounding guidance stack with documents such as SP 800-227 for KEMs and the NCCoE migration work in SP 1800-38. So the standards are real, usable, and ready — but the operational playbook is still maturing in public, which is normal and a little messy.
What makes deployment hard in the real world
The pure maths is only half the story. The annoying half is sizes, speed, and protocol plumbing. ML-KEM looks quite reasonable on the wire, which is one reason it became the default KEM choice. But signatures can get noticeably larger, SLH-DSA signatures can become very large, and code-based choices can swing from “tiny ciphertext, massive public key” in Classic McEliece to “modest-ish public key, chunky ciphertext” in HQC. Once you drop these into TLS, X.509, SSH, CMS, firmware signing, or HSM workflows, the overhead stops being academic and starts showing up in packet traces and procurement spreadsheets.
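A quick back-of-the-envelope comparison shows why this stops being academic. The numbers below are the ML-KEM-768 sizes from FIPS 203 (public key 1,184 bytes, ciphertext 1,088 bytes) against a classical X25519 key share of 32 bytes each way; every other handshake field is ignored, so treat this as a lower bound on the growth.

```python
# Back-of-the-envelope key-share overhead, classical vs hybrid TLS.
# Sizes: X25519 key share = 32 bytes; ML-KEM-768 public key = 1184
# bytes, ciphertext = 1088 bytes (FIPS 203). Other handshake fields
# are ignored for simplicity.
X25519 = 32
MLKEM768_PK = 1184
MLKEM768_CT = 1088

classical = X25519 + X25519                                # both key shares
hybrid = (X25519 + MLKEM768_PK) + (X25519 + MLKEM768_CT)   # both directions

print(f"classical key-share bytes: {classical}")            # 64
print(f"hybrid key-share bytes:    {hybrid}")               # 2336
print(f"growth factor:             {hybrid / classical:.1f}x")  # 36.5x
```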
Interoperability is the next headache. Google’s Chrome started offering hybrid X25519+Kyber handshakes in 2023 precisely to flush out ecosystem problems, and the problems were real: post-quantum ClientHello messages are larger, can spill across packets, and some buggy servers or middleboxes fail when that happens. Cloudflare points directly to this class of issue, and public testing work has reported measurable incompatibility at Internet scale. NIST’s own FAQ is pragmatic here: hybrid modes are allowed and useful, but they bring implementation cost, performance reduction, engineering complexity, and the need for proper security review. This is not a flip-a-switch migration, unfortunately.
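The hybrid construction itself is conceptually simple: run both key exchanges, then feed both shared secrets through one KDF so the session key stays safe if either component holds. Below is a minimal stdlib-only sketch in the spirit of deployed X25519+Kyber hybrids, not their exact construction; the context label is my own placeholder, not a standardised value.

```python
# Minimal hybrid secret combiner: HKDF (RFC 5869) extract-and-expand
# over the concatenation of a classical and a post-quantum secret.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def combine(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # If EITHER input secret survives attack, the output stays strong.
    prk = hkdf_extract(salt=b"\x00" * 32, ikm=ss_classical + ss_pq)
    return hkdf_expand(prk, info=b"example-hybrid-combiner-v1")
```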
Then there is implementation quality. Decapsulation failures can matter for security in KEMs. Constant-time coding still matters. Stack usage still matters. And FN-DSA’s slower path through standardisation is a nice case study in why deployment teams should not treat “promising algorithm” and “easy to implement safely” as the same sentence. NIST has openly discussed Falcon/FN-DSA’s floating-point challenges, while Open Quantum Safe’s implementation notes continue to show that some PQC implementations are clean and constant-time-looking, while others carry caveats around memory leaks, stack use, or timing behaviour. The maths may be post-quantum; the bugs are stubbornly classical.
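One small but very real instance of "the bugs are classical": comparing secret values. Python's `==` on bytes can return as soon as the first byte differs, leaking timing information, which is why MAC tags and decapsulated secrets should be compared with a constant-time primitive. A minimal example:

```python
import hmac

def check_tag(expected: bytes, received: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the
    # inputs first differ, unlike a plain `expected == received`.
    return hmac.compare_digest(expected, received)
```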
Still, it is not all bad news. Real deployments are already happening. Google pushed hybrid PQ key agreement into Chrome traffic; Apple rebuilt iMessage around its hybrid PQ3 design and ongoing rekeying; and Cloudflare said that by the last week of October 2025, a majority of human-initiated traffic on its network was using post-quantum encryption. So the migration is no longer just papers, slides, and conference coffee. It is in browsers, messaging, and edge networks already — which is encouraging, and also means laggards will start to hit weird compatibility issues sooner rather than later.
How organisations should migrate and what history teaches us
The cleanest migration model is boring in the best way: discover where cryptography lives, prioritise the systems whose secrets have long shelf lives or whose trust chains are hardest to replace, run hybrid pilots in a non-production environment, then phase in new standards while preserving rollback and interoperability. NIST’s migration work explicitly emphasises cryptographic discovery, interoperability and performance testing, and risk-based prioritisation across hardware, firmware, software, protocols, and services. In other words: do not start with a heroic full rewrite. Start by finding the crypto you already have.
For engineers, the practical next step is to map every use of RSA, ECDH, ECDSA, EdDSA, certificate issuance, code signing, and VPN or TLS termination; separate “bulk encryption” from “public-key bootstrap”; then pilot hybrid key establishment so you can measure handshake size, latency, certificate-chain effects, library support, HSM support, and failure modes before production users do it for you. For managers, the next step is governance more than glamour: assign ownership, ask vendors for PQC roadmaps and interoperability evidence, prioritise data that must stay confidential for many years, and fund crypto-agility work so the next algorithm swap is less painful than this one.
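As a starting point for that mapping step, here is a sketch that classifies one certificate's public key using the widely used `cryptography` package; a real discovery pass would crawl certificate stores, TLS endpoints, and code-signing configs, but the per-certificate logic looks roughly like this (the filename is a placeholder).

```python
# Classify a certificate's public-key algorithm as a first inventory
# step. Requires `pip install cryptography`; illustration only, not a
# full discovery tool.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec, ed25519

def classify_certificate(pem_data: bytes) -> str:
    cert = x509.load_pem_x509_certificate(pem_data)
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size} (quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECDSA/{key.curve.name} (quantum-vulnerable)"
    if isinstance(key, ed25519.Ed25519PublicKey):
        return "Ed25519 (quantum-vulnerable)"
    return f"other: {type(key).__name__} (review manually)"

with open("server.pem", "rb") as f:  # placeholder path
    print(classify_certificate(f.read()))
```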
If this all feels familiar, that is because cryptography transitions usually look slow until they suddenly don’t. DES looked entrenched until the EFF’s Deep Crack showed in 1998 that brute-force cracking was practical enough to end the argument, and NIST’s open AES competition then replaced the aging standard with Rijndael. Years later, the SHAttered result from Google and CWI turned SHA-1 deprecation from a nagging to-do into a much sharper operational demand. PQC has the same vibe, honestly: a long warning period, then one day a lot of people act surprised that the warning period existed.
My slightly opinionated conclusion is this: organisations should stop treating PQC as speculative futurism and start treating it as infrastructure maintenance with a complicated supply chain. NIST’s own message is not “wait for every backup standard”; it is “use the finished ones now”. That seems right. You probably do not need to migrate every last thing this quarter. But you do need a plan, a pilot, and an inventory. Otherwise the long tail of hidden crypto will come back to bite later, and later is usually when budgets, patience, and vendor attention are at their worst.
Further reading, if you want the good stuff rather than hot takes: NIST’s public explainer on PQC and the FIPS 203/204/205 standards; NIST IR 8545 on HQC; NIST IR 8528 on the additional-signature track; Peter Shor’s and Lov Grover’s foundational papers; Cloudflare’s 2025 deployment write-up; Apple’s PQ3 design note; and Google’s Chrome hybrid-TLS post.