Peter Shor has carried out pioneering work in combinatorial analysis and the theory of quantum computing. He received worldwide recognition in 1994 when he presented a computational method for factoring large numbers which, in theory, could be used to break many existing cryptographic systems. The discovery of Shor’s algorithm, along with his work on quantum error correction, helped lift the field of quantum computing to a new level. For his work he has received numerous awards, including the Gödel Prize in 1999 from the European Association for Theoretical Computer Science and a MacArthur Fellowship. In 2017 he received the Dirac Medal of the ICTP, and in 2019 the BBVA Foundation Frontiers of Knowledge Award in Basic Sciences. In this interview we discuss his first steps and later developments in quantum information theory, the future of quantum computing, as well as the skills and qualities needed to tackle the challenges that the new quantum era brings to our societies.
Do you recall what motivated you to work on quantum information theory?
The first time that I distinctly remember hearing about quantum computing and quantum information theory was during a presentation by Charlie Bennett at Bell Labs. The topic was the quantum key distribution protocol that he had developed together with François Bessette, Gilles Brassard and Louis Salvail. During his talk, he showed us a device that he had built together with John Smolin. It was a working prototype of a quantum key distribution system and the first demonstration of quantum cryptography!
For many years, quantum cryptography was considered science fiction, beyond the reach of our technological capabilities. Despite this rather gloomy perception, in October 1989 the first experimental quantum key distribution system was successfully implemented by Charles Bennett, François Bessette and their colleagues. A few years later, in 1992, they published a paper describing the design of the apparatus they had built and its software.
This ushered in the experimental era of quantum cryptography and was a defining moment in the development of quantum computing. However, I was still not thinking about working in this field. The turning point for me was when Umesh Vazirani visited Bell Labs to give a talk on quantum computing based on his work with Ethan Bernstein. They had proven a number of interesting things about quantum algorithms, showing in particular that quantum computers are more powerful than classical probabilistic computers.
What were your first steps in quantum information theory?
I have to admit that I didn’t get very far until I read Dan Simon’s paper on the problem that was later named after him (i.e. Simon’s problem). Imagine that you feed a string of bits, like 001100, into a black box and get as output another string, for example 110011. Simon’s problem asks: does the black box give a unique output for every possible input, or do some inputs share a common output? Using a quantum computer to query the black box, Simon posed a computational problem that is answered exponentially faster on a quantum computer than on a classical one.
What Dan Simon did was to use the Fourier transform over the binary field to find the period of a function over a binary vector space. It was the first quantum algorithm showing an exponential speed-up over the best existing classical algorithm for the same problem. We expected that quantum computers would be faster, but it was the first time that someone proved that they would be exponentially faster.
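Simon’s promise can be illustrated with a short classical sketch (the oracle construction and function names here are my own, not from Simon’s paper): a two-to-one function with a hidden mask s satisfies f(x) = f(x XOR s), and classically the only way to find s is to hunt for a collision, which takes on the order of 2^(n/2) queries.

```python
def make_simon_oracle(s):
    """Build a two-to-one function with hidden XOR mask s: f(x) == f(x ^ s)."""
    def f(x):
        return min(x, x ^ s)  # both members of the pair {x, x^s} share an output
    return f

def find_mask_classically(f, n_bits):
    """Brute-force search for s by looking for a collision.

    A classical algorithm needs on the order of 2^(n/2) queries;
    Simon's quantum algorithm needs only O(n)."""
    seen = {}
    for x in range(2 ** n_bits):
        y = f(x)
        if y in seen:
            return seen[y] ^ x  # a collision reveals the hidden mask
        seen[y] = x
    return 0  # f was one-to-one (s = 0)

oracle = make_simon_oracle(0b1011)
print(find_mask_classically(oracle, 4))  # -> 11 (binary 1011)
```

The quantum algorithm instead samples strings orthogonal to s via the Fourier transform over the binary field, sidestepping the collision search entirely.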
Looking back at Simon’s work, the transformation he used was a Fourier transform, and that made me think about how Fourier transforms could be applied in a useful way to solve the discrete logarithm problem. Fourier transforms and the discrete logarithm problem are both concerned with periodicity, which was the key to developing my algorithm. I succeeded in solving the discrete logarithm problem modulo a number n for the special case of n being a smooth number, that is, a number with no large prime factors. A classical solution already existed. However, the quantum algorithm used completely different techniques from the classical one and thus allowed a generalisation to non-smooth numbers.
The next step was to think about how this algorithm would work for arbitrary numbers and to generalize it to prime factorization. The resulting quantum algorithm calculates the prime factors of a large number vastly more efficiently than a classical computer. A digital computer is often believed to be able to simulate any physical computing device with an increase in computation time of at most a polynomial factor. Things change when quantum mechanics comes into play. The new quantum algorithm could be used for calculations with quantum states, exploiting the fact that we can have superpositions of quantum states.
One way of thinking about quantum computing, famously introduced by David Deutsch, claims that computations happen in a superposition of many worlds, and it is this fact that gives you the computational advantage. An extension of this argument is that quantum computing proves the existence of many worlds, as the only way to explain where these computations take place.
The problem with this approach is that if you assume the computation is taking place in many parallel worlds, then your model of a quantum computer closely resembles a massively parallel computer. However, this is not how quantum computers will work. In a quantum computer you have to arrange your computation so that all the paths leading to the right answer interfere constructively, while all the paths leading to wrong answers interfere destructively. In this way you enhance the probability of getting the right answer. This is essentially Feynman’s path integral applied to computational paths, though this wasn’t the approach I followed in my work either.
Coming back to your question, I knew that Fourier transforms could be used to find periodicity. Then I figured out how to perform a Fourier transform on a quantum computer and used it to find the periodicity, which solved the problem.
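The link between the Fourier transform and periodicity can be seen with an ordinary classical DFT (a toy sketch in plain Python, not the quantum version): a sequence of period r over N samples concentrates its spectrum on multiples of N/r, which is exactly how the period is read off.

```python
import cmath

def dft_power(xs):
    """Magnitude-squared of the discrete Fourier transform (pure Python)."""
    N = len(xs)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(xs))) ** 2
            for k in range(N)]

# A sequence with period r = 4 over N = 16 samples: the modular
# exponentiation 7^n mod 15, the sequence at the heart of factoring 15.
N, r = 16, 4
signal = [pow(7, n, 15) for n in range(N)]   # 1, 7, 4, 13, 1, 7, 4, 13, ...

power = dft_power(signal)
peaks = sorted(sorted(range(N), key=lambda k: -power[k])[:r])
print(peaks)  # -> [0, 4, 8, 12]: peaks at multiples of N/r = 4
```

On a quantum computer the N amplitudes are held in superposition across only log N qubits, and measuring after the quantum Fourier transform samples one of these peaks directly.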
I think different communities found the factoring result surprising for different reasons.
Computer scientists thought that it was surprising because it violated the extended Church-Turing thesis. One of the foundational principles of the field is the Church-Turing thesis, which says that any function that can be computed can be computed by a Turing machine. The extended Church-Turing thesis says that anything that can be computed efficiently (i.e., in polynomial time) can be computed efficiently on a Turing machine. If quantum computers can factor large numbers efficiently, then they can do something that is not believed to be possible efficiently on a Turing machine. Even today, some computer scientists find the extended Church-Turing thesis so compelling that they don’t believe that quantum computers will be able to factor.
Cryptographers found the factoring results surprising because it meant that if you had a quantum computer, you could break the RSA and Diffie-Hellman encryption schemes, which are based on factoring and discrete log, and on which much of Internet security currently depends.
Physicists found quantum computing surprising because it was a new and completely unsuspected use for quantum mechanics.
PC: Did you foresee that your algorithm for integer factorization could have any practical applications?
At that time, I didn’t think there would be any practical applications of it, because of the open challenges. It was argued that you would need incredible accuracy to deal with computational errors, and the community believed that it would be impossible to build a quantum computer.
There were two main reasons for this skepticism. The first is that redundancy, the main way in which classical computers correct errors, doesn’t apply to quantum computers, because quantum states cannot simply be copied. The second stems from Heisenberg’s uncertainty principle, according to which every measurement disturbs a quantum system; as a consequence, you cannot check whether an error has happened without disturbing the state of your computational system.
Following the introduction of my algorithm, I was confronted with these objections, which, however, led to the invention of quantum error correcting codes.
The correct interpretation of quantum mechanics was not of primary concern to me, and I think it would be fair to say the same for many of my peers at that time. I took quantum mechanics courses as an undergraduate, which means there was a gap of about two decades between then and the start of my work on quantum algorithms. Luckily, I remembered enough quantum mechanics (laughs) and learned what was essential to develop quantum algorithms that were later demonstrated to work.
From left: David Deutsch, Charles Bennett and Peter Shor during the 2017 Dirac Medal Ceremony.
Not being able to measure the properties of a quantum system means that there are certain actions you cannot take on a quantum computer that you could take on a classical one. For example, on a classical computer you can record the state of the system over time, and if a mistake happens later you can always go back to the recorded part of the computation and proceed from there. There are other ways of introducing fault tolerance on classical computers that do not apply to quantum computers.
While going through the literature I came across a paper by Asher Peres on a simple quantum error correcting code that protects against bit flip errors. It is based on repetition: replacing 0 with, say, 000 and 1 with 111. The problem, though, is that if you try to use this code to recover from phase errors, it makes phase errors three times more likely. At that point I realised that there is a transformation in quantum mechanics that takes bit errors to phase errors. So you could use it to produce a code that protects against phase errors but not bit errors.
There is a technique in classical coding theory called concatenation, where you take two codes, one of size A and one of size B, and stick them together to get a code of size AB. I realized that something similar could be applied to these quantum error correcting codes. By combining the two 3-qubit codes using concatenation, you get a 9-qubit code that protects against both bit and phase errors. That was the start of quantum error correction. This approach made it possible to formulate a quantum error correcting code that stores the information of one qubit in an entangled state of nine qubits.
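The repetition idea behind the bit-flip code has a simple classical analogue (a sketch of majority-vote decoding only; it does not capture the quantum syndrome measurement, which must detect errors without reading out the encoded state):

```python
import random

def encode(bit):
    """Three-fold repetition: 0 -> 000, 1 -> 111 (the classical analogue
    of the 3-qubit bit-flip code)."""
    return [bit] * 3

def noisy_channel(codeword, p_flip):
    """Flip each bit independently with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in codeword]

def decode(codeword):
    """Majority vote: recovers the bit whenever at most one flip occurred."""
    return int(sum(codeword) >= 2)

random.seed(0)
p, trials = 0.1, 100_000
errors = sum(decode(noisy_channel(encode(1), p)) != 1 for _ in range(trials))
# The coded error rate is 3p^2(1-p) + p^3, about 2.8% for p = 0.1,
# well below the raw 10% error rate of an unprotected bit.
print(errors / trials)
```

Concatenating this with its phase-error counterpart (three blocks of three) is what yields the nine-qubit construction described above.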
The ability to use quantum error correcting codes adds to the robustness of quantum information and offered a response to the criticism that quantum computers would never be able to perform large computations because you couldn’t correct errors. Quantum error correction allows us to maintain information stored in qubits (two-state systems) despite environmental noise, and without interfering with the computation. The codes can be seen as subspaces of the qubits’ state space with the property that an initial state in the subspace can be recovered if sufficiently few of the qubits have experienced errors.
On the one hand, there are a lot of physics experiments that classical computers cannot do, since simulating a physical system of n particles requires 2^n memory and is computationally very intense and expensive. This touches on Feynman’s famous question, raised while he was teaching a course on the “Physics of Computation” at Caltech around 1981, about how one could simulate quantum mechanics. Richard Feynman’s observation that certain quantum mechanical effects cannot be simulated efficiently on a computer led to speculation that computation in general could be done more efficiently if it used these quantum effects. A quantum computer offers a big advantage, as it could simulate such a system with only a handful of qubits, maybe 10n or n log n. In that sense, quantum computers would allow us to run simulations that are currently out of reach for classical computers, and thus gain deeper insights into the basic theories describing nature.
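The 2^n cost is easy to make concrete (a small illustration, assuming the standard representation of a state as double-precision complex amplitudes, 16 bytes each):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store the full state of n qubits: 2^n complex
    amplitudes at 16 bytes each (double-precision real and imaginary parts)."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 10 qubits fit in kilobytes; 30 qubits need ~16 GiB; 50 qubits need
# ~16 million GiB, beyond any classical machine.
for n in (10, 30, 50):
    print(n, "qubits:", statevector_bytes(n) / 2 ** 30, "GiB")
```

The same 50-particle system would, on a quantum computer, need only 50 qubits plus the overhead of error correction.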
On the other hand, if your question is whether we can learn something new about fundamental physics through the process of building a quantum computer, then I am afraid that the answer is no. I believe that the only thing quantum computers can tell us is that quantum mechanics works; something that we already know.
I hold a certain optimism, though it is still not entirely clear whether we will run up against another fundamental obstacle in building quantum computers because we can’t figure out all the details of their architecture. In addition, we know that we would need much better quantum gates to do anything useful with quantum computers. If we keep progressing at the same speed, and I don’t see why this shouldn’t be the case, we should have one within the next two to three decades.
Today, we have superconducting technology which supports 60 qubits, but for which there is no inherent scaling limit, so companies are already planning 100-qubit quantum computers, and thinking about building quantum computers with thousands of qubits further in the future.
Experimental system used to realize demonstration of Shor's quantum factoring algorithm with a seven-qubit NMR-controlled molecule, at the IBM Almaden Research Labs, circa 1999. Photo taken during the course of the experiment, showing Lieven Vandersypen (left) and two summer intern students.
One of these questions is: how can we improve quantum error correction based on the existing technology? One potential answer is to think of codes that work not at the scale of qubits but at the scale of devices, storing the information in a specific state of a harmonic oscillator; such a state can be protected against errors that you can induce and test the outcome of, or at least come closer to being able to test. Theoretical developments in quantum error correction also drive those designing the devices and how they think about the next implementation of such a system. The history of the field shows that theory development and instrumentation (in the form of quantum computer device building) go hand in hand. In that sense, it is quite possible that some future theoretical discovery will give us a new, better route to quantum fault tolerance that will be taken into account in designing future generations of devices.
Quantum computers could also compromise popular public key cryptography schemes that are based on the prime factorization of large numbers, which can be solved efficiently by quantum algorithms. RSA, the public-key cryptosystem widely used for data transmission and which keeps much of the information on the internet safe, could be broken. I should clarify that we are still a long way from being able to break codes. Even when the first quantum computer arrives, it will be able to break something like 10 RSA keys per day, so it shouldn’t be a concern for your private data.

Quantum mechanics changes the nature of cryptography fundamentally. This means new ways to break protocols, but also new ways to design protocols that are different from anything we could do with classical coding. One of them is quantum money. This has nothing to do with real money, as you may imagine. It refers to quantum states that can be both verified and not copied. Of course quantum states are unique by their nature, but most of them cannot be verified. Moreover, if you had a procedure allowing you to verify something, it would most probably mean that you are also able to copy it. This is the challenge I am currently working on: ways of constructing quantum states that can be verified but cannot be copied. I think I have a good idea for a technique based on learning with errors, which could be used for quantum cryptography and is reasonably secure.
I understand that cryptography is one of the key applications of quantum computers but I am wondering if you envision other areas beyond that?