Quantum computing remains an area fraught with challenges, but key breakthroughs this month show it is not out of reach.
Quantum computing will change everything – or so the hype would have you believe. In fiction, quantum computers are often portrayed as machines with esoteric capabilities such as showing real-life videos of Jesus (streaming platform Hulu’s limited series Devs) or communicating with dark matter (broadcaster BBC’s adaptation of Philip Pullman’s His Dark Materials novels).
These scenarios are obviously fantastical – if highly enjoyable – concepts, and they show what science fiction often does best: taking almost incomprehensibly complex technologies that remain out of reach for now and making them feel plausible within imaginary universes, to great effect.
And while no sane person would expect to actually see a video of Jesus’ last words on the cross, it all speaks to this idea that quantum computing could theoretically achieve anything we set our minds to.
But quantum computing is not a magical machine. Its core advantage is the ability to perform, at scale, complex calculations that would take a traditional computer an essentially infinite amount of time.
An example of this is Shor’s algorithm, discovered in 1994 by Peter Shor, the Morss professor of applied mathematics at Massachusetts Institute of Technology and occasional poet with a penchant for limericks and musings about quantum mechanics. In essence, the algorithm shows that a quantum computer could factor large numbers exponentially faster than any known classical method – a fact that means quantum computers could break modern-day encryption such as that employed by banks or online transactions.
Breaking cryptographic keys in the same way is not technically impossible with binary computing. It would just take much, much longer – potentially thousands of years even with supercomputers – as far as we currently know.
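To make that asymmetry concrete, here is a deliberately naive sketch in Python – plain trial division, not Shor’s algorithm or anything a real attacker would use. It factors toy numbers instantly, but because its running time grows with the square root of the number, it is hopeless against the roughly 617-digit moduli used in RSA-2048 encryption.

```python
# Illustrative only: naive trial-division factoring, the kind of brute force
# a classical machine has to improve upon. The loop runs up to the square
# root of n, which is instant for toy numbers but hopeless for the ~617-digit
# moduli used in RSA-2048.
def trial_division(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))        # [3, 5]
print(trial_division(2021))      # [43, 47]
```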
Emphasis on “currently” because therein lies the rub: much of quantum research is actually concerned with proving that a classical computer cannot solve the same problem in a comparable amount of time – in other words, that a quantum computer would not be overkill. If this sounds ludicrous – surely a quantum computer would always be faster? – it really is not.
Ewin Tang, now a PhD student with the University of Washington’s theoretical computer science group, shocked many when he published a paper in 2018 – aged just 18 – proving that the recommendation problem could be solved comparably fast on a classical computer, wiping out the presumed exponential quantum speedup.
The recommendation problem is exactly what it sounds like: if you watch a movie on a streaming platform or make an online purchase, an algorithm will try to find other content or products that you may also want. This sounds simple enough but can involve thousands of data points that take a long time to process. Quantum computing had largely been seen as an obvious way to exponentially speed up this process and, in fact, just two years earlier a pair of researchers had claimed as much in their paper.
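As a toy illustration of the setup – a classical numpy sketch of the general problem, not the quantum algorithm and not Tang’s “dequantised” sampling method – the preferences users have already expressed can be arranged in a matrix, a low-rank approximation fills in the gaps, and the highest predicted score among unrated items becomes the recommendation.

```python
# A toy version of the recommendation problem in plain numpy: approximate a
# sparse user-item ratings matrix with a low-rank model and suggest the
# unrated item with the highest predicted score. This illustrates the general
# setup only -- it is neither the quantum algorithm nor Tang's "dequantised"
# sampling method.
import numpy as np

ratings = np.array([      # rows = users, columns = movies, 0 = not yet rated
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# A rank-2 approximation via truncated SVD fills in plausible scores.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
approx = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

user = 0
unrated = np.where(ratings[user] == 0)[0]
best = unrated[np.argmax(approx[user, unrated])]
print(f"Recommend item {best} to user {user} (predicted score {approx[user, best]:.2f})")
```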
With one stroke of genius, a teenager had made that real-world application for quantum computing obsolete.
This is all before we even get to building quantum hardware. Qubits – the quantum version of a bit – are incredibly error-prone, and that has meant genuinely useful quantum computing has stubbornly remained out of reach. The feeblest noise, such as a stray magnetic field, wipes out the qubit’s superposition – its two-ways-at-once state, and the quality that makes qubits so powerful in the first place, because traditional bits can only be in one state at a time. Qubits are also really difficult to keep in that quantum state, and once they lose it, the errors start creeping in.
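For a rough sense of why that fragility matters, here is a minimal numpy sketch – an idealised illustration, not how real hardware is modelled. A qubit is put into an equal superposition and then interfered back to a definite answer, and a small stray phase picked up along the way is enough to make the measurement start giving the wrong result some of the time.

```python
# A minimal numpy sketch (an idealised illustration, not how real hardware is
# modelled): a qubit in an equal superposition is interfered back to a
# definite |0>, and a small stray phase error picked up in between makes the
# measurement start giving the wrong answer some of the time.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

plus = H @ ket0                     # equal superposition of |0> and |1>
clean = H @ plus                    # interferes perfectly back to |0>

theta = 0.3                         # unwanted phase picked up from noise
phase_error = np.diag([1, np.exp(1j * theta)])
noisy = H @ (phase_error @ plus)    # same circuit, with the stray phase

print("P(measure 0) without noise:", round(abs(clean[0]) ** 2, 3))   # 1.0
print("P(measure 0) with noise:   ", round(abs(noisy[0]) ** 2, 3))   # ~0.978
```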
So, simply putting more qubits into the machine is not enough, and if a company talks about a 5,000-qubit processor, this does not automatically mean it is a phenomenally powerful machine. Compare this with a traditional computer, where a 64-bit architecture really is a step up from a 32-bit one.
Google, the internet technology subsidiary of conglomerate Alphabet, demonstrated two weeks ago that quantum error correction works, subject to certain conditions, on its Sycamore quantum processor, and showed that the method could be scaled. Those conditions? Google’s approach can only suppress one of the two basic error types – bit flips or phase flips – at a time, rather than both simultaneously, and it could not fix the errors it detected in real time. It was a breakthrough, but not the holy grail.
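The intuition behind that style of error correction can be shown with a classical toy analogue of a repetition code – a loose sketch of the principle, not Google’s actual stabiliser-based scheme. Store one logical bit redundantly and take a majority vote, and a single flip no longer corrupts the answer; notice, though, that this only addresses bit flips, echoing the one-error-type-at-a-time limitation.

```python
# A classical toy analogue of a repetition code (a loose sketch of the
# principle, not Google's actual stabiliser-based scheme): store one logical
# bit three times and take a majority vote, so a single flip no longer
# corrupts the answer. This only addresses bit flips -- phase flips, the
# other basic error type, need their own code.
import random

def encode(bit: int, copies: int = 3) -> list[int]:
    return [bit] * copies

def add_bit_flip_noise(code: list[int], p: float) -> list[int]:
    return [b ^ 1 if random.random() < p else b for b in code]

def decode(code: list[int]) -> int:
    return int(sum(code) > len(code) // 2)    # majority vote

random.seed(0)
trials, p = 10_000, 0.05
raw = sum(add_bit_flip_noise([0], p)[0] != 0 for _ in range(trials))
protected = sum(decode(add_bit_flip_noise(encode(0), p)) != 0 for _ in range(trials))
print(f"unprotected error rate:   {raw / trials:.3f}")        # ~0.05
print(f"majority-vote error rate: {protected / trials:.3f}")  # ~0.007
```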
Then, one week ago, Honeywell Quantum Solutions, a subsidiary of industrial conglomerate Honeywell, and Cambridge Quantum Computing – which are set to merge into an independent business once regulatory hurdles are cleared – released their own study showing that their method could correct errors in real time. It was a bigger breakthrough, and one that brings us much closer to the holy grail.
IBM is also working on quantum computing and pictured here is its IBM Q at CES in 2020. Photo courtesy of IBM.
None of this is to say that quantum computing is a pipe dream. It will become a reality and it will revolutionise many industries. For example, earlier this year, pharmaceutical firm Roche signed a deal with Cambridge Quantum Computing to speed up its drug discovery.
And countless spinouts are trying to make advances in the field too, such as Riverlane, based on University of Cambridge research, which is developing an operating system for quantum computers.
C12 Quantum Electronics, a France-based spinout of Ecole Normale Supérieure and CNRS, is working on a quantum computing processor that relies on high-purity carbon nanotubes to minimise errors and significantly improve performance.
Another is PsiQuantum, a US-based spinout of the University of Bristol, which is working on a commercial quantum computer offering a million qubits – the point at which the technology would become general-purpose and useful to businesses.
And in April this year, University of Chicago’s Polsky Center launched the Duality accelerator, the first programme in the US to focus exclusively on quantum technologies. It unveiled its inaugural cohort earlier this month, with six companies tackling challenges as varied as simulation software for modelling noise and current in quantum devices such as high-resolution quantum sensors (QuantCAD) and a quantum random number generator (Axion Technologies). For more on this, subscribe to our podcast, Talking Tech Transfer, which will feature an in-depth discussion of Duality with the Polsky Center’s Jay Schrankler next month.
Similar programmes already exist elsewhere. In the UK, commercialisation firm IP Group and state funding agency UK Research and Innovation (UKRI) teamed up in August 2020 to form a $15.7m accelerator aimed at nurturing early-stage quantum technologies. That accelerator is being funded through UKRI’s Commercialising Quantum Technologies Challenge, a $193m initiative set up through its Industrial Strategy Challenge Fund.
That is the other reality of quantum computing: it will become increasingly geopolitically relevant. No state will want its adversaries to have greater computing power, and no government will want to see its industries left behind because progress is being made elsewhere. It is not a reality many spinouts will be concerned with while they are trying to figure out how to get the technology to work in the first place, but it is a challenge that will materialise sooner rather than later. And it seems likely that something as unprecedentedly powerful as quantum technologies will eventually become subject to export controls in most geographies.
But this, like every challenge thrown at the innovation ecosystem, will also be resolved. After all, tackling the big problems is what university researchers do best. And although quantum computing will not magically make all of our troubles go away, for all of the hurdles that remain, its potential heralds a very promising future indeed.
There is no doubt in my mind that tech transfer offices and spinouts will be key in getting us there.