It was at the end of 2017 when I noticed that people outside the crypto world or physics were starting to show interest in quantum computing. It looks like the next big hype, with people making all kinds of fabulous predictions and large IT companies fighting a battle to lead the progress in both the scientific and the marketing arenas.
Quantum computing is a very promising breakthrough in computing science that was initially suggested by Richard Feynman in his 1959 lecture “There’s plenty of room at the bottom”. He noted that “Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics. So, as we go down and fiddle around with the atoms down there, we are working with different laws, and we can expect to do different things.”
Feynman came back to quantum computing in his 1981 paper “Simulating physics with computers” where he considered quantum computers as universal quantum simulators.
Many other people have contributed their ideas and work to quantum computing. For a detailed list, you can see a nice timeline on Wikipedia.
But it was in 1994 that the world learned of a quantum computing application that would attract unprecedented interest in the technology. Peter Shor found an algorithm that would allow a quantum computer to solve the discrete logarithm and factoring problems in polynomial time.
In other words, if a practical quantum computer existed, our current public key cryptography would be easily cracked.
This article will explain some basics of public key cryptography and quantum computing so you can understand why Shor’s work is relevant. It will also show you why Shor’s algorithm will not kill public key cryptography, the crypto star.
- Public key cryptography
- Mathematical foundations
- Quantum computing
- Postquantum cryptography
- Annex I: Some math details of public key cryptography
- Annex II: Some further details on quantum computing
Intro to public key cryptography
Take Alice and Bob. They used to send private messages to each other, but suddenly Eve appeared eavesdropping on any conversation around. Now, Bob wants to send a new message to Alice. How would he be able to securely transmit his message?
With traditional, symmetric cryptography, Alice and Bob need to agree on a key that allows them to encrypt and decrypt the message. However, Bob will have a hard time finding a way to send Alice the key he used to encrypt his message without Eve learning the key as well. And if Eve knows the key… she’ll be able to read the letter!
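To see why both sides need the exact same key, here is a minimal sketch of symmetric encryption using plain XOR. This is a toy, not a real cipher; the key and message are made up for illustration:

```python
# Toy symmetric encryption with XOR — a sketch, not a real cipher.
# The point: the SAME key encrypts and decrypts, so Alice and Bob
# must somehow share it without Eve learning it.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeated key; applying it twice restores the input.
    repeated = key * (len(data) // len(key) + 1)
    return bytes(b ^ k for b, k in zip(data, repeated))

key = b"sharedsecret"                    # both parties need this exact value
message = b"Meet me at noon"
ciphertext = xor_crypt(message, key)     # Bob encrypts
plaintext = xor_crypt(ciphertext, key)   # Alice decrypts with the same key
```

If Eve intercepts the key while Bob sends it to Alice, she can run the same decryption herself, which is exactly the problem public key cryptography was invented to solve.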
Enter public key cryptography. It was initially designed to solve the need to agree on a secret key. This was achieved by Whitfield Diffie and Martin Hellman in 1976. Later, in 1978, Rivest, Shamir and Adleman (RSA) found a way to both encrypt and sign messages.
At a very high level, you can think of public key cryptography this way: While a symmetric cryptosystem has one key that allows for encryption and decryption, a public key cryptosystem is based on a pair of mathematically related keys:
What one key encrypts can only be decrypted by the other, and vice versa.
Take this example:
- Alice creates a pair of keys: A1 and A2
- Alice keeps A1 secret. We’ll call this her private key (Pv).
- Alice makes A2 public by sending it to Bob or posting it somewhere. There is no problem in distributing it openly since it doesn’t disclose anything about A1. We’ll call this her public key (Pb).
- Bob wants to send a private message to Alice.
- Bob encrypts the message using Alice’s public key, Pb.
- Alice receives the encrypted message and decrypts it with her private key, Pv (remember: What one key encrypts can only be decrypted by the other, and vice versa).
- Eve can’t read the cleartext message since she doesn’t have access to Alice’s private key, Pv.
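The steps above can be sketched with the classic textbook RSA numbers (p = 61, q = 53). These values are far too small to be secure; they only illustrate the mechanics:

```python
# Toy RSA encryption with tiny textbook numbers — insecure, illustration only.
p, q = 61, 53                  # Alice's secret primes
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent: Alice's public key is (n, e)
d = pow(e, -1, phi)            # 2753, Alice's private exponent (Python 3.8+)

m = 65                               # Bob's message, encoded as a number < n
ciphertext = pow(m, e, n)            # Bob encrypts with Alice's PUBLIC key
recovered = pow(ciphertext, d, n)    # Alice decrypts with her PRIVATE key
assert recovered == m
```

Eve sees `n`, `e` and `ciphertext`, but recovering `d` from them requires factoring `n` — trivial for 3233, infeasible for a real 2048-bit modulus.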
Public key cryptography also enables an incredibly useful application: Digital signatures. It works like this:
- Alice wants to sign a message so that anyone can verify it was prepared by her and has not been tampered with.
- Alice encrypts the message using her private key, Pv. We’ll call this the Signature.
- Alice sends to Bob the cleartext message and the Signature.
- Bob decrypts the signature using Alice’s public key, Pb.
- If the cleartext message and the decrypted Signature are identical, Bob can be sure that it could only have been prepared by Alice using her private key, Pv.
Real-life protocols use additional techniques to improve efficiency and security, but in essence they work like this example.
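The signature steps can be sketched with the same toy RSA key pair as before (again, the numbers are textbook-small and insecure):

```python
# Toy RSA signature with the textbook key pair — insecure, illustration only.
n, e, d = 3233, 17, 2753       # Alice's public (n, e) and private (d) parts

message = 123                          # message encoded as a number < n
signature = pow(message, d, n)         # Alice "encrypts" with her PRIVATE key
check = pow(signature, e, n)           # Bob "decrypts" with Alice's PUBLIC key
assert check == message                # match => signed by Alice, untampered

# A tampered message no longer matches the decrypted signature:
assert pow(signature, e, n) != 124
```

Note this is the same mathematical trick as encryption with the roles of the keys swapped: only Alice can produce the signature, but anyone holding her public key can verify it.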
Mathematical foundations
So far, practical implementations of public key cryptography are based on a couple of known hard problems: Discrete logarithms and factorization.
These problems get exponentially harder as the size of the numbers involved increases. For instance, while factoring 35 is trivial (5×7), factoring a 1024-bit number as used in RSA (called an RSA modulus) has not been achieved yet. This is why the underlying operations are sometimes called “one-way functions”: computing them forward is easy, reversing them is infeasible.
A logarithm is the operation that gives the exponent that must be applied to a certain number, called the base, to obtain some other number. For instance, since 10^2 = 100, the logarithm of 100 in base 10 is 2.
In this example, 10 is called the base, 100 is called the anti-logarithm and 2 is called the logarithm.
The word “discrete” before logarithm means that the operation is defined in a number field with a finite size, instead of the field of real numbers, which is infinite. There are different number fields to define the discrete logarithm operation. The ones most commonly used are “Integers modulo a prime” and “Elliptic curves over integers modulo a prime”.
Calculating a discrete logarithm in certain number fields becomes exponentially harder as the anti-logarithm grows.
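To make the asymmetry concrete, here is a brute-force discrete logarithm over the integers modulo a small prime. The numbers are tiny and made up; with a real-world prime the exhaustive search below would be hopeless:

```python
# Brute-force discrete logarithm in the integers modulo a prime.
# Feasible only for tiny numbers: the search space doubles with each extra bit.
def discrete_log(base: int, target: int, prime: int) -> int:
    value = 1
    for exponent in range(prime):
        if value == target:
            return exponent
        value = (value * base) % prime
    raise ValueError("no logarithm found")

p, g = 23, 5                        # small prime field and a generator
x = 13                              # the secret exponent
h = pow(g, x, p)                    # easy direction: modular exponentiation
assert discrete_log(g, h, p) == x   # hard direction: exhaustive search
```

The forward direction (`pow`) stays fast even for thousand-bit numbers; only the reverse search blows up.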
Refer to Annex I, Diffie-Hellman for a detailed explanation of the Diffie-Hellman key exchange protocol, which is based on discrete logs.
Factoring means obtaining the prime factors that make up a number. Every number can be obtained by multiplying one or several prime numbers. Hence, factoring 15 means discovering that it is built by multiplying 3×5.
As we said before, factoring becomes exponentially harder as the number to be factored grows.
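The same easy/hard asymmetry shows up in factoring. A minimal trial-division factorizer works instantly on small numbers, but its running time explodes with the size of the input:

```python
# Factoring by trial division — the naive "hard direction" of multiplication.
def factor(n: int) -> list[int]:
    factors, divisor = [], 2
    while divisor * divisor <= n:
        while n % divisor == 0:      # pull out each prime as often as it divides
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)            # whatever remains is prime
    return factors

assert factor(15) == [3, 5]
assert factor(3233) == [53, 61]      # multiplying back (53*61) is the easy direction
```

Real factoring records use far cleverer algorithms than this, but even the best known classical methods still scale badly enough to keep RSA-sized moduli out of reach.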
What does “exponentially harder” mean?
Figure 1 graphs the time required in a sample test to factor a number built by multiplying two random prime numbers. You can easily see that as the number to be factored increases, the time required to factor it grows at an accelerating pace. The two series show the effort using two different factoring algorithms.
We use logarithmic scales (or simply log scales) to represent exponential graphs in a way that allows us to perceive details all along the data set, like in Figure 2, which shows the same data in a (maybe) less visually striking format but allows us to make some estimations.
Note how the vertical axis differs from a normal graph. In a normal graph we would have a proportional growth of values like in Figure 1 (0, 500, 1000, 1500, …), where every 500 seconds are represented with the same vertical space. In Figure 2, on the contrary, the values increment according to the increment of an exponent (10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, …), and so values in the milliseconds range are represented using the same space as values in the thousands of seconds or millions of seconds range.
We can roughly guess how hard it is to factor a large number by using a log scale and some data:
- The largest RSA modulus publicly factored (in 2009) at the date of this writing (2020) was 768 bits long and required an effort equivalent to 1500 years on a single 2.2 GHz AMD Opteron core.
- The authors of this achievement state that factoring a 1024-bit RSA modulus would require 1000 times that effort.
- The published FLOPS figures of the AMD Opteron 2427 2.2 GHz core and of different supercomputers.
- The estimated effort to factor a 2048-bit RSA modulus is 2^32 times that of a 1024-bit modulus.
If we bundle all these into a log scale, we get Figure 3.
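The back-of-envelope arithmetic behind Figure 3 can be reproduced with a few lines, using only the figures quoted in the list above (all values are rough orders of magnitude in single-core-years):

```python
# Rough scaling of factoring effort, from the figures quoted in the text.
effort_768 = 1500                      # core-years for RSA-768 (2009 record)
effort_1024 = effort_768 * 1000        # authors' 1000x estimate: ~1.5e6 core-years
effort_2048 = effort_1024 * 2**32      # 2^32 harder again: ~6.4e15 core-years

print(f"RSA-1024: ~{effort_1024:.1e} core-years")
print(f"RSA-2048: ~{effort_2048:.1e} core-years")
```

Spreading that last figure over a huge cluster is what turns "core-years" into the "10,000 years on the top-500 supercomputers" estimate discussed next.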
The combined power of the 500 most powerful supercomputers in the world would need on the order of 10,000 years to factor a 2048-bit RSA modulus. So, you would have needed to launch your factoring program at the end of the Ice Age to get the result today.
The current default key size used by OpenSSH is 3072 bits, which would require roughly the age of the universe to be factored by the same cluster of supercomputers. Imagine yourself with that cluster beside the Big Bang.
A 4096-bit RSA modulus would need roughly one million times the age of the universe to be factored by the same cluster.
This is how hard it currently is to break public key cryptography when properly used. Alternative ways to attack public key cryptography exist, based mainly on poor implementations, but they are out of the scope of this article.
Quantum computing
The core feature of quantum computers is that they are based on quantum bits, or qubits. In classical computing we operate with bits, which at any time hold a single value, either 0 or 1. Qubits, on the contrary, are special in that they exploit certain quantum phenomena like superposition and entanglement.
Maybe you have heard about Schrödinger’s cat, an imaginary “quantum-cat” that is dead and alive simultaneously until someone observes it. Quantum particles in superposition behave as if they are in different states simultaneously and only get a final, single state when observed or measured. They are said to “collapse” to a specific state.
Qubits are just like Schrödinger’s cat: While in superposition they behave as if they are in state 0 AND 1 SIMULTANEOUSLY. Only when we measure it does the qubit collapse to a single state. Before that, we can calculate the probability of each result, but we can’t be sure what it will be.
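You can simulate this behaviour on a classical machine for one qubit. The sketch below (using NumPy, with a fixed random seed picked arbitrarily) represents a qubit as two amplitudes, puts it into an equal superposition with a Hadamard gate, and "measures" it by sampling from the squared amplitudes:

```python
import numpy as np

# One qubit = two complex amplitudes; measurement probabilities are the
# squared magnitudes of the amplitudes (the Born rule).
zero = np.array([1.0, 0.0])                     # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

state = H @ zero                 # equal superposition of |0> and |1>
probs = np.abs(state) ** 2       # [0.5, 0.5] — a fair coin, until measured

# "Measuring" collapses the qubit: sample one definite outcome.
rng = np.random.default_rng(0)
outcome = rng.choice([0, 1], p=probs)
```

Before the measurement the state genuinely carries both amplitudes; after it, only a single classical bit remains.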
Thanks to entanglement, two entangled quantum particles always show a correlation between their states. Take, for instance, two entangled photons. While in superposition we can’t know what spin (↑ or ↓) they will show when measured. But once we measure one of the photons, we know that the other will have collapsed to the opposite spin. If photon A is measured with spin ↑, we know that photon B will have collapsed to spin ↓.
Going back to Schrödinger’s cat, one of the interpretations of the weirdness of quantum physics is the “many-worlds” model. This interpretation considers that, when the cat in superposition is observed, the world “splits” with each possible state of the cat. Hence the previous world generates two branches, one world where the observer sees a living cat and another world where the observer sees a dead cat.
There’s a nice short film that explores this idea of entanglement to multiple universes: One Minute Time Machine. I won’t spoil it. Just take a look at it and laugh.
Why are quantum computers different?
One way to think about quantum computers is that, leveraging superposition and entanglement, they can manipulate several states at once. Consider a classical computer operating with bits. It can only be in a single state at any point in time and its n bits will have a single value. On the contrary, a quantum computer with n qubits will be in 2^n states simultaneously and will be able to operate on all those states AT ONCE.
This is sometimes compared to parallel computing. This is not very accurate and there are some arguments against this comparison. Anyway, the basic idea is that a quantum computer can provide an exponential speedup when attacking some complex problems.
The goal of a quantum algorithm is to carry out operations on quantum states in such a way that measuring the qubits will likely provide us with an answer to our problem. And this “likeliness” is managed by playing around with superposition, entanglement, interference and probabilities.
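Interference, the ingredient that makes this steering possible, can be seen in the smallest possible example: applying the Hadamard gate twice. The first application creates a superposition; in the second, the two computational paths leading to |1⟩ cancel out (destructive interference) while the paths to |0⟩ reinforce. A NumPy sketch:

```python
import numpy as np

# Interference at work: H applied twice brings the qubit back to |0> with
# certainty, because the amplitudes of the two paths to |1> cancel exactly.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])

superposed = H @ zero        # amplitudes [0.707, 0.707] — 50/50 if measured now
back = H @ superposed        # amplitudes [1, 0] — certain |0> again
probs = np.abs(back) ** 2    # [1.0, 0.0]
```

Quantum algorithms are engineered so that, like here, the wrong answers interfere destructively and the right answer is the one most likely to survive the measurement.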
Refer to Annex II for a few further details on the basics of quantum computing.
Peter Shor found a way to calculate discrete logarithms and factor integers whose cost does not grow exponentially with the size of the input. The algorithm he designed is able to solve those problems in what is called “polynomial time”. That is, if the size in bits of the anti-logarithm or the RSA key is n, the complexity of calculating the discrete logarithm or factoring the key would be in the range of n^3 using Shor’s algorithm, instead of an exponential function of n using classical algorithms, which grows much, much faster.
This means, for instance, that breaking a 4096-bit RSA key might take minutes, hours or days instead of several times the age of the universe.
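The gap between polynomial and exponential growth is easy to feel with a few abstract numbers (pure illustration: constants and real-world units are ignored, and classical factoring algorithms are actually sub-exponential rather than a clean 2^n):

```python
# Polynomial (n**3) versus exponential (2**n) growth in abstract "steps".
for n in (10, 32, 64, 128):
    print(f"n={n:4d}  n^3={n**3:>12,d}  2^n={2**n:.2e}")
```

By n = 128, n³ is about two million while 2ⁿ has 39 digits; no conceivable speedup of classical hardware bridges a gap that widens like that.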
Refer to Annex I, Shor’s algorithm for a more detailed explanation on how it works.
Postquantum cryptography (or why quantum will not kill the crypto star) ⭐️
Although Shor’s algorithm can break cryptosystems based on the discrete log and factoring problems, its implementation requires a quantum computer with a large number of qubits. There are no accurate predictions on when a quantum computer large enough to break current public key cryptosystems will exist, but the closest estimates place that event in the 2030s.
Meanwhile, researchers and standardization bodies are working on so-called “post-quantum cryptography”: public key cryptosystems that can run on classical computers but can’t be broken by quantum computers. You can find more information on postquantum cryptography and the standardization efforts in the following links:
- Post-Quantum Cryptography Standardization by NIST
- Industry Specification Group on Quantum Safe Cryptography (ISG QSC) by ETSI
- PQCrypto community and conference.
- EU’s PQCrypto project
The PQ efforts are expected to produce new standards early in the 2020s, so we can expect to have a few years to adapt and migrate from the current public key cryptosystems to the new postquantum ones.
Meanwhile, you should consider how long your encrypted data and signatures are expected to remain secure. This will greatly depend on the use case.
Thanks to María José Ruiz Félez for the illustrations!