
Quantum computers exceed critical error threshold

“The whole story depends on this kind of scaling,” said David Hayes, a physicist at quantum computing company Quantinuum. “It’s really exciting to see this become a reality.”

Majority rules

The simplest version of error correction works on ordinary “classical” computers, which represent information as a sequence of bits, i.e. zeros and ones. Any random event that flips the value of a bit causes an error.

You can protect yourself from errors by spreading information across multiple bits. The simplest approach is to rewrite every 0 as 000 and every 1 as 111. Any time the three bits in a group don’t all have the same value, you know an error has occurred, and a majority vote will fix the erroneous bit.
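The three-bit scheme can be sketched in a few lines of Python (an illustrative toy, not how real hardware handles errors):

```python
def encode(bit):
    """Spread one bit of information across three copies."""
    return [bit] * 3

def decode(bits):
    """Majority vote: the value held by at least two of the three copies."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)           # [1, 1, 1]
codeword[0] ^= 1               # a random error flips the first copy
assert decode(codeword) == 1   # the vote still recovers the original bit
```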

But the procedure doesn’t always work. If two bits in a triplet have errors at the same time, the majority vote will return the wrong answer.

To avoid this, you can increase the number of bits in each group. For example, a five-bit version of this “repetition code” can tolerate two errors per group. But while this larger code can handle more errors, you’ve also introduced more ways for things to go wrong. The net effect is only beneficial if the error rate of each individual bit is below a certain threshold. If not, adding more bits will only make your error problem worse.
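The threshold effect is easy to check numerically. A minimal sketch, assuming each bit flips independently with probability p: a majority vote over n copies fails whenever more than half of them flip.

```python
from math import comb

def logical_error(p, n):
    """Probability that more than n//2 of n bits flip,
    i.e. that a majority vote returns the wrong answer."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Well below the threshold, a bigger code helps:
assert logical_error(0.1, 5) < logical_error(0.1, 3) < 0.1
# Above it, adding bits only makes things worse:
assert logical_error(0.6, 5) > logical_error(0.6, 3) > 0.6
```

For this classical repetition code the crossover sits at p = 1/2; as the rest of the article explains, quantum codes face far stricter thresholds.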

As usual, the situation in the quantum world is more difficult. Qubits are prone to more errors than their classical relatives. They are also much harder to manipulate. Each step of a quantum calculation is another source of error, as is the error correction process itself. Furthermore, there is no way to measure the state of a qubit without irreversibly disturbing it – you have to somehow diagnose errors without ever directly observing them. All of this means that quantum information must be handled extremely carefully.

“It’s inherently more sensitive,” said John Preskill, a quantum physicist at the California Institute of Technology. “You have to take care of everything that can go wrong.”

Many researchers initially thought that quantum error correction was impossible. They were proven wrong in the mid-1990s when researchers developed simple examples of quantum error correction codes. But that just changed the prognosis from hopeless to discouraging.

As the researchers worked out the details, they realized that they needed to get the error rate for each operation on physical qubits below 0.01% – only one in 10,000 could go wrong. And even that would only bring them to the brink. In practice they would have to do far better – otherwise the error rates of logical qubits would fall painfully slowly as more physical qubits were added, and error correction would never work in practice.

Nobody knew how to make a qubit anywhere near that good. But as it turns out, these early codes only scratched the surface of what’s possible.

The surface code

In 1995, Russian physicist Alexei Kitaev heard reports of a major theoretical breakthrough in quantum computing. The year before, the American applied mathematician Peter Shor had developed a quantum algorithm to decompose large numbers into their prime factors. Kitaev couldn’t get his hands on a copy of Shor’s paper, so he developed his own version of the algorithm from scratch – one that proved more versatile than Shor’s. Preskill was thrilled with the result and invited Kitaev to visit his group at Caltech.

“Alexei is truly a genius,” Preskill said. “I have known very few people with this brilliance.”

This short visit in the spring of 1997 was extremely productive. Kitaev told Preskill about two new ideas he had been pursuing: a “topological” approach to quantum computing that would require no active error correction at all, and a quantum error correction code based on similar mathematics. At first, he didn’t think the code would be useful for quantum computations. Preskill was more optimistic and convinced Kitaev that a slight variation on his original idea was worth pursuing.

This variant, called the surface code, is based on two overlapping grids of physical qubits. The qubits in the first grid are “data” qubits; together, they encode a single logical qubit. Those in the second grid are “measurement” qubits, which allow researchers to check indirectly for errors without disrupting the calculation.
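The key trick – learning about errors without ever reading a data qubit directly – has a classical analogue that can be sketched in Python. Here each “measurement” reads only the parity (XOR) of two neighboring data bits, an illustrative stand-in for the surface code’s actual stabilizer measurements, not the quantum procedure itself:

```python
def parity_checks(data, pairs):
    """Each check reads only the parity (XOR) of a pair of
    neighboring data bits -- never a data bit by itself."""
    return [data[i] ^ data[j] for i, j in pairs]

# Four data bits in a row, with checks between neighbors:
pairs = [(0, 1), (1, 2), (2, 3)]
corrupted = [0, 0, 0, 0]
corrupted[2] ^= 1                          # flip one data bit
syndrome = parity_checks(corrupted, pairs)
# Exactly the two checks touching bit 2 fire, pinpointing the error:
assert syndrome == [0, 1, 1]
```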

That’s a lot of qubits. But the surface code has other advantages. Its error checking scheme is much simpler than that of competing quantum codes. Furthermore, it only involves interactions between neighboring qubits – the feature that Preskill found so appealing.

In the years that followed, Kitaev, Preskill and a handful of colleagues worked on the details of the surface code. In 2006, two researchers showed that an optimized version of the code had an error threshold of about 1%, 100 times higher than the thresholds of previous quantum codes. These error rates were still beyond the rudimentary qubits of the mid-2000s, but they no longer seemed unattainable.

Despite these advances, interest in the surface code remained limited to a small community of theorists – people who didn’t work with qubits in the lab. Their work used an abstract mathematical framework that was alien to the experimentalists of the time.

“It was really hard to understand what was going on,” recalls John Martinis, a physicist at the University of California, Santa Barbara, who is one of those experimenters. “It was like reading a paper on string theory.”

In 2008, a theorist named Austin Fowler set out to change this by promoting the benefits of the surface code to experimenters in the United States. After four years, he found a receptive audience in the Santa Barbara group led by Martinis. Fowler, Martinis and two other researchers wrote a 50-page article that outlined a practical implementation of the surface code. They estimated that with sufficiently clever engineering, they could eventually reduce the error rates of their physical qubits to 0.1%, well below the surface code threshold. Then, in principle, they could increase the size of the grid to reduce the error rate of the logical qubits to an arbitrarily low level. It was a blueprint for a full-fledged quantum computer.
