
Post-quantum Cryptography: Maybe We Should Just Use Symmetric Keys


Security-conscious individuals and organizations have become increasingly nervous about the possible development of a realistic “gate-model” quantum computer. Such a computer could run the well-known Shor’s algorithm, an attack that quickly solves what is known as “the hidden subgroup problem.” All of our current methods of performing public-key operations (namely, digitally signing data or coming to cryptographic key agreement) rely on the belief that a particular version of the hidden subgroup problem is hard. Specifically, finding discrete logarithms (ECDH) and factoring large numbers (RSA) are both instances of finding a hidden subgroup.
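To make the threat concrete: in Shor’s algorithm, the quantum computer’s only job is to find the period of a modular exponentiation; the factors then fall out with ordinary arithmetic. The sketch below shows that classical post-processing on a toy modulus (N = 15 and a = 7 are illustrative values chosen by hand; at real-world sizes, the period-finding step is exactly the part that would require a quantum computer).

```python
from math import gcd

# Toy illustration of the classical side of Shor's algorithm.
# N and a are tiny, hand-picked values; a quantum computer is only needed
# to find the period r when N is a real, large RSA modulus.
N, a = 15, 7

# Period of f(x) = a^x mod N (brute force, feasible only because N is tiny).
r = next(x for x in range(1, N) if pow(a, x, N) == 1)  # r = 4

# If r is even and a^(r/2) != -1 (mod N), gcds reveal nontrivial factors.
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
p, q = gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
print(p, q)  # prints: 3 5
```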

Post-quantum cryptography is an effort to find cryptographic algorithms whose underlying “hard problem” is not subject to this particular attack. To that end, NIST has published FIPS 203 and 204, establishing the ML-KEM and ML-DSA standards. These make use of “module lattices,” where the underlying problem (called “module learning with errors,” or MLWE) does not involve a hidden subgroup, and so cannot obviously be broken if someone were to build a quantum computer capable of running Shor’s algorithm.
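For readers who have not seen “learning with errors” before, here is a minimal toy sketch of a plain LWE instance, just to show the shape of the problem. This is not ML-KEM itself (which works over module lattices, with polynomial ring elements in place of integers), and the parameters are illustrative and far too small to be secure.

```python
import numpy as np

# A toy plain-LWE sample: b = A*s + e (mod q), with small secret s and error e.
# Parameters are illustrative only; real schemes use much larger, structured ones.
q, n, m = 97, 8, 16
rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(m, n))   # public, uniformly random matrix
s = rng.integers(-1, 2, size=n)       # small secret vector
e = rng.integers(-1, 2, size=m)       # small error vector
b = (A @ s + e) % q                   # the published "noisy" linear combinations

# The hardness assumption: given (A, b), recovering s -- or even distinguishing
# (A, b) from (A, uniform) -- is believed to be infeasible at real sizes.
print(A.shape, b.shape)               # (16, 8) (16,)
```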

I am dubious.

There are a lot of reasons to be a bit concerned about the choices of post-quantum algorithms. While some of these require a deeper understanding of cryptanalysis and the details of lattices than I can assume for the readers of this blog, one reason is quite simple. Humans have tried for millennia to find efficient ways to factor large numbers, and the discrete log problem was studied by Gauss over 200 years ago, but MLWE dates back only to 2005. Are we confident that we should be protecting our most important matters of national security by assuming the hardness of something that has been studied for so little time? In fact, since NIST started considering these schemes, there have been numerous publications presenting ideas that could conceivably weaken them against a quantum-capable adversary. Other similar schemes, such as NTRU, have been realistically attacked.

Another important reason to be reluctant to adopt the PQC schemes is the difficulty of secure implementation. An incredible amount of work, both at Two Six and elsewhere, has gone into formally verifying the correctness of classical cryptographic schemes and protocols. In Two Six’s High Assurance Solutions group, we have tools capable of quickly producing mathematical proofs that implementations of classical algorithms (including symmetric-key ciphers and hash functions) and protocols are correct. The underlying primitives for the approved post-quantum schemes are much more difficult to verify, and indeed there are many unknowns with respect to the susceptibility of implementations to side-channel attacks. In fact, there is circumstantial evidence of undiagnosed bugs in implementations: for example, NIST had to provide a late “hot fix” to its test vector suite.

For me, this adds up to saying that before we dive headfirst into the world of new asymmetric-key algorithms, maybe we should see just how far we can take the parts of cryptography that we aren’t worried about. Symmetric-key algorithms are not particularly subject to any good quantum attacks. The best known quantum attack on symmetric-key schemes, Grover’s algorithm, reduces the work of a key search to the square root of its classical cost. This means that instead of AES-256 giving us 256 bits of security, it gives us 128 bits of security. Well, if every human on planet Earth had 10 computers, each capable of attempting one key every nanosecond, it would still take roughly 70 billion years to break 128 bits of security. I think we’re gonna be fine. But can we base our communications on symmetric-key algorithms alone, or do we need things like asymmetric key agreement and signatures?
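That estimate is easy to sanity-check. The numbers below are my own assumptions (8 billion people, 10 machines each, one key guess per nanosecond per machine, and an expected search of half the keyspace):

```python
# Back-of-the-envelope check of the brute-force claim for 128-bit security.
machines = 8e9 * 10                   # assumed: 10 computers per person
guesses_per_second = machines * 1e9   # assumed: one key per nanosecond each
expected_guesses = 2**127             # on average, half of a 2^128 keyspace
seconds = expected_guesses / guesses_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e} years")           # about 6.7e10, i.e. roughly 70 billion years
```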

In the case of signatures, NIST’s new FIPS 205, which establishes SLH-DSA, a hash-based algorithm, is really good. Its security is based on hash functions, which have the same kind of strong, well-studied properties as AES and lack the unknowns associated with lattice-based schemes. To me, this seems like a good option in a post-quantum world. For key agreement, the situation is less clear. Alternative quantum-resistant algorithms that have fewer security worries than ML-KEM, such as Classic McEliece, have efficiency issues that make them impractical at this time. Even without these primitives, there are still ideas that can work in many deployment scenarios. If we allow for out-of-band communication of pre-positioned keys, there are ways of making the security of our communications depend only on the strong guarantees of symmetric-key algorithms.
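To give a flavor of why hash-based signatures inspire confidence, here is a minimal Lamport one-time signature, the classic construction showing that a signature scheme can be built from nothing but a hash function. This is not SLH-DSA itself (which is far more elaborate and supports many signatures per key); the code and names are my own illustration.

```python
import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()
BITS = 256  # we sign the SHA-256 digest of the message, bit by bit

def keygen():
    # Secret key: two random 32-byte values per digest bit; public key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(BITS)]
    pk = [(H(x0), H(x1)) for x0, x1 in sk]
    return sk, pk

def bits_of(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(BITS)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit; the key must never be reused.
    return [sk[i][b] for i, b in enumerate(bits_of(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits_of(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig), verify(pk, b"goodbye", sig))  # True False
```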

Recently, researchers at Two Six, working with colleagues at Arqit, performed a full formal verification of the security of one such symmetric key agreement protocol, described in RFC 8784. This protocol is a quantum-resistant extension of the Internet Key Exchange version 2 (IKEv2) protocol. Our researchers did prove the existence of some weaknesses in the proposal, though none of them are critical for the majority of use cases. The most interesting weakness found was a key-integrity attack, whereby an eavesdropper can force the participants to use a different key than the one the protocol calls for, but still not one that the attacker knows and could therefore use to decrypt. In many scenarios, this may be an acceptable risk. The important thing is that, because RFC 8784’s security is based only on very strong primitives, we can understand these weaknesses and determine whether they are relevant to our deployment scenario, something we cannot do if we rely on the uncertain security of ML-KEM.
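For intuition, the pattern RFC 8784 builds on looks roughly like the sketch below: a pre-positioned symmetric key is mixed into the session key derivation alongside the classical (EC)DH secret, so the derived keys remain safe even if the (EC)DH exchange is later broken by a quantum computer. This is a simplified illustration of the idea, not the RFC’s actual construction, and the function and variable names are mine.

```python
import hashlib, hmac

def mix(ppk: bytes, classical_secret: bytes, label: bytes) -> bytes:
    # HMAC-SHA-256 as a PRF: the output depends on both inputs, so an attacker
    # must know the pre-positioned key as well as the (EC)DH secret.
    return hmac.new(ppk, label + classical_secret, hashlib.sha256).digest()

ppk = bytes(32)        # placeholder for the out-of-band, pre-positioned key
dh_secret = bytes(32)  # placeholder for the (EC)DH shared secret
session_key = mix(ppk, dh_secret, b"session-key")
print(session_key.hex())
```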

There is still much work to do in the area of post-quantum cryptography. Maybe we will find a strong proof of the hardness of the underlying lattice problems, and then ML-KEM will feel safer. Maybe we can develop new, better symmetric-key-based protocols that obviate the need for asymmetric key agreement entirely. Maybe we can make the alternative post-quantum key agreement schemes efficient enough to be practical. Maybe we will discover that the laws of physics prohibit us from building a gate-model quantum computer large enough to run Shor’s algorithm against things like our elliptic-curve-based systems. Regardless, I will push us to do as much as we can in a way that is known to have provable, strong guarantees. The security of our communications depends on it.