Why Nvidia’s CEO is Wrong About Quantum Computing
Nvidia CEO Jensen Huang says quantum computing is 15 years away - here’s why he’s wrong.
Dominik Andrzejczuk
Jan 9, 2025 - 2:20 PM
On Tuesday, January 7th, during a panel at CES, the world's largest consumer technology conference, Jensen Huang remarked that useful commercial applications of quantum computing are at least 15 years away.
“If you said 15 years for very useful quantum computers, that would probably be on the early side. If you said 30, it’s probably on the late side. But if you picked 20, I think a whole bunch of us would believe it.”
Jensen Huang
Founder & CEO - Nvidia
These remarks torpedoed quantum computing stocks such as IonQ (IONQ) and Rigetti Computing (RGTI), which posted losses of more than 40% on Wednesday morning.
IonQ’s shares were down by about 45%, while Rigetti’s shares fell by more than 48% mid-morning. Quantum Computing (QUBT), which announced a stock offering earlier this week to raise $100 million, saw its shares fall by about 49%. D-Wave Quantum (QBTS), meanwhile, saw shares fall by around 47%.
The remarks came off as peculiar considering that Nvidia (NVDA) has been investing in the space for the last four years, with products such as CUDA-Q and cuQuantum. What's even more bizarre is that just four months ago, Huang stated that quantum computing was 10-15 years away, a far cry from his 15-30 year assessment at CES.
Qubits as a Vanity Metric
Before we dive into why I believe we are closer to 5 years than 15, we first have to look at the building blocks of a quantum computer: qubits. Qubits are the fundamental units of a quantum computer, playing a role analogous to the bits of a classical computer. For a deeper dive on quantum computers, check out this primer.
One lazy way of gauging the performance of a quantum computer is to count its qubits. In theory, a device with 1,000 qubits is more powerful than a device with 100, right?
Yes and no.
The total number of qubits is little more than a vanity metric these days. It’s not uncommon to see hardware companies make grandiose announcements about the total number of qubits they’re deploying on their devices. IBM Research, for example, has published a roadmap with one million qubits as the goal by 2030:
PsiQuantum Raises $450 Million to Build its Quantum Computer
The funds will be used to expand its team, which currently has about 150 people, and to build a 1 million-quantum-bit machine, said Jeremy O’Brien, co-founder and chief executive of the Palo Alto, Calif.-based company.
To reach this goal, we’re on a journey to build 1,000,000 physical qubits that work in concert inside a room-sized error-corrected quantum computer. That’s a big leap from today’s modestly-sized systems of fewer than 100 qubits.
SeeQC: 1 million qubit quantum computers: moving beyond the current “brute force” strategy
So one million qubits seems to be the holy grail of quantum computing. But bearing in mind noise and error rates, what does that number actually mean?
When is One Million Qubits Actually Valuable?
Before asking which quantum computer on the market is the most state-of-the-art, one must first look at the most important quality of that device: its gate fidelities. Gate fidelity measures how accurately a device executes a quantum operation; a fidelity of 99.9% means roughly one error per thousand gates.
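To see why gate fidelity dominates, consider a back-of-the-envelope model (my own illustrative sketch, not a vendor benchmark): if each gate succeeds independently with probability equal to its fidelity, the chance that a circuit runs error-free decays exponentially in the gate count.

```python
def circuit_success_probability(gate_fidelity: float, n_gates: int) -> float:
    """Probability a circuit runs error-free, assuming each gate
    independently succeeds with probability `gate_fidelity`
    (ignores error correction and correlated noise)."""
    return gate_fidelity ** n_gates

# A modest 1,000-gate circuit:
print(circuit_success_probability(0.999, 1000))  # ≈ 0.37 — usable
print(circuit_success_probability(0.99, 1000))   # ≈ 0.00004 — effectively noise
```

A tenfold difference in error rate is not a tenfold difference in capability; it is the difference between a machine that works and one that emits garbage.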
As the chart above shows, a device’s algorithmic qubits saturate as a function of total physical qubits, so you get diminishing returns as you scale the physical qubit count.
If, however, we start improving those error rates:
All of a sudden the story becomes far more interesting. The lower the error rates, the fewer physical qubits you actually need. A device with 100 physical qubits and a 0.1% error rate can therefore solve more problems than a device with one million physical qubits and a 1% error rate.
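That claim can be sketched with a toy model. Assume, as a common rule of thumb rather than an exact law, that a useful "square" circuit on N qubits needs on the order of N² two-qubit gates, so without error correction it only succeeds reliably while N² times the gate error rate stays below 1. The function name and cutoff are my own illustrative choices:

```python
import math

def algorithmic_qubits(physical_qubits: int, gate_error: float) -> int:
    """Rough estimate of usable ('algorithmic') qubits without error
    correction. A square circuit on N qubits uses ~N^2 two-qubit gates,
    so it succeeds reliably only while N^2 * gate_error < 1, i.e.
    N < 1/sqrt(gate_error). You can never use more qubits than you have."""
    error_limited = math.isqrt(round(1 / gate_error))
    return min(physical_qubits, error_limited)

print(algorithmic_qubits(100, 0.001))        # → 31 usable qubits
print(algorithmic_qubits(1_000_000, 0.01))   # → 10 usable qubits
```

Under this model the million-qubit noisy machine is the weaker computer: its error rate, not its qubit count, is the bottleneck.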
But What About Error Correction?
Error correction is an incredibly powerful tool that we need in order to achieve broad quantum advantage, and Google recently announced breakthroughs in error correction with its latest Willow chip. But when the underlying error rates are high, error correction is not the saving grace it’s made out to be. The chart below illustrates how error correction improves a device across the aforementioned gate-error regimes.
As you scale physical qubit count, you also scale cost. Therefore, if your physical qubits maintain a high error rate, you’re still going to run into scalability problems, even with error correction. Take a look at one of IBM’s latest prototypes:
This thing is an absolute monster. Now imagine an entire football field filled with these; that is how IBM plans to get to one million qubits. The higher the error rates, the smaller the gains as you scale your system: at high error rates you need unreasonably large deployments to achieve marginal gains.
According to IBM Research:
And we have to make sure that the fridge — the cryostat — doesn’t collapse. That could happen, if we were to continue adding, by brute force, more and more superconducting circuits to the bottom of the fridge. A cryostat with even one logical qubit made of, say, 500 physical qubits would be a structure of half a ton — and that is simply unfeasible.
This is why error correction is not the silver bullet it’s made out to be. Even if you are error-correcting your devices, less-than-stellar error rates make them incredibly costly to scale. The complexity of the problems you want to solve grows linearly while your overall costs grow exponentially, which rather defeats the purpose of building quantum computers in the first place.
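The scaling problem can be made concrete with a textbook surface-code estimate. The constants below — an error-suppression prefactor of 0.1, a ~1% threshold, and roughly 2d² physical qubits per logical qubit at code distance d — are standard ballpark figures from the error-correction literature, not measurements of any specific device:

```python
def physical_qubits_per_logical(p_phys: float,
                                p_target: float = 1e-12,
                                p_threshold: float = 0.01) -> int:
    """Surface-code overhead estimate: find the smallest odd code
    distance d where p_logical ~ 0.1 * (p_phys/p_threshold)^((d+1)/2)
    drops to p_target, then return the ~2*d^2 physical qubits needed."""
    if p_phys >= p_threshold:
        raise ValueError("at or above threshold, error correction cannot converge")
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > p_target:
        d += 2  # code distance must stay odd
    return 2 * d * d

print(physical_qubits_per_logical(0.005))   # 0.5% error  → 10,658 qubits (d=73)
print(physical_qubits_per_logical(0.0005))  # 0.05% error → 578 qubits (d=17)
print(physical_qubits_per_logical(0.0001))  # 0.01% error → 242 qubits (d=11)
```

In this model, a tenfold improvement in physical error rate cuts the error-correction overhead roughly twenty-fold, which is the quantitative version of "quality over quantity."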
Quantum computing is therefore a delicate balancing act: the power of your device increases exponentially with algorithmic qubits, but the ratio of algorithmic to physical qubits falls as error rates rise. The industry is betting that it can make noisy devices work by throwing millions of qubits at the problem, because it sees no trajectory for getting those errors down. That’s why we hear talk of football-field-sized quantum computers, and why quantum computing still sounds like science fiction.
But I do see a trajectory for getting the error rates down. That will let the industry harness these exponentials rather than fight them, and focus on building small machines that can outperform a supercomputer.
So What’s the Correct Approach to Tackle This Problem?
If you’re thinking, “well, just build better qubits,” you’re absolutely right! IBM, Rigetti, Google, and the other superconducting incumbents talk about building millions of qubits because they cannot build better ones.
In 2013, superconducting qubits looked good on paper because they promised to scale à la Moore’s Law: they are manufactured using the same processes as today’s classical silicon chips, so the promise was that qubits would scale the way transistors do. But then physics got in the way, and the noise scaled too.
One candidate showing a whole lot of promise is trapped ions. Trapped-ion qubits are more stable and have better connectivity to other qubits than their superconducting counterparts. As a whole, they have lower error rates and better readout fidelity, and they can be kept in a coherent quantum state for orders of magnitude longer than superconducting qubits.
Many of the commercially relevant problems we want to solve today require only a few hundred high-quality qubits. It is abundantly clear that the solution must focus on building better qubits, rather than filling warehouses with power-hungry devices.
Quality Over Quantity
That is why I have money on ion-trap architectures such as IonQ (IONQ), Quantinuum, and Oxford Ionics. These companies are just a few years away from hundreds of high-fidelity qubits, which throws cold water on Huang's assessment at CES. In 2024 alone, Oxford Ionics set world records in single-qubit gate fidelity, two-qubit gate fidelity, and quantum state preparation and measurement (SPAM). Quantinuum is also slated to IPO at some point this year, with rumors of a valuation as high as $20B.
Quantum computing will likely come out of nowhere, much as OpenAI's launch of ChatGPT kicked off the AI hype. Don't be surprised when the rug is pulled out from under you.
Dominik Andrzejczuk
Polish American Venture Capitalist