Quantum Error Correction Goes FOOM
62 points
21 hours ago
| 4 comments
| algassert.com
| HN
amluto
18 hours ago
[-]
I'm amused by the burying of the non-lede until halfway through the paper. I, too, can maintain a 59-bit repetition code for over two hours on my quantum laptop (it's just a normal laptop, but it definitely obeys the laws of quantum mechanics):

Initialize the coded bit, using a 59-qubit repetition code that corrects bit flips but not phase errors, in IPython:

    In [1]: A = 59 * (False,)
Write a decoder:

    In [2]: def decode(physbits):
       ...:     return sum(physbits) > len(physbits)/2.0
Wait two hours [0]. I'll be lazy and only decode at the end of the two hours, but if I wanted the full advantage of error correction, I would periodically run the error-correction algorithm and fix detected errors. Now decode it:

    In [3]: decode(A)
    Out[3]: False
Holy cow, it worked!
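
The periodic correction step alluded to above can be sketched the same way (illustrative only; `correct` is a made-up helper, not a real decoder API):

```python
def decode(physbits):
    # Majority vote over the physical bits of the repetition code.
    return sum(physbits) > len(physbits) / 2.0

def correct(physbits):
    # Periodic correction: reset every physical bit to the majority value.
    return len(physbits) * (decode(physbits),)

# A few stray bit flips get repaired, as long as fewer than half flipped:
noisy = (True, True, True) + 56 * (False,)
assert correct(noisy) == 59 * (False,)
```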

I'm being rather tongue-in-cheek here, of course. But it's genuinely impressive that my laptop can stick 59 bits into DRAM cells containing a handful of electrons each, and all of them are just fine after several hours. And it's really really impressive that this research group got their superconducting qubits to store classical states well enough that their rather fancy error correcting device could keep up and preserve the logical state for two hours. [1]

But this isn't quantum error correction going FOOM, per se. It's classical. A bit-flip-corrected but not phase-flip-corrected qubit is precisely a classical bit, no more, no less.

The authors did also demonstrate that they could do the same trick correcting phase flips and not bit flips, but that's a tiny bit like turning the experiment on its side and getting the same result. Combining both demonstrations is impressive, though -- regardless of whether you look at the DRAM cells in my laptop as though the level is the Z basis or the X basis, they only work in one single basis. You cannot swap the role of level and phase in DRAM and get it to still work. But the researchers did pull that off on their two-hour-half-life device, and I find that quite impressive, and the fact that it worked strongly suggests that their device is genuinely 59 qubits, whereas no one could credibly argue that my laptop contains giga-qubits of DRAM. Fundamentally, you can do classical computation using a repetition code, but you cannot do quantum computation with it. You need fancier, and more sensitive-to-errors, codes for this, and that's what the second half of the article is about.
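
To make that basis asymmetry concrete, here's a toy statevector sketch (my illustration in plain Python, not from the article): a phase flip on one qubit of a 3-qubit repetition code slips past every bit-flip parity check, yet still flips the logical state.

```python
# State as {bitstring: amplitude}; the logical qubit is 0.6|000> + 0.8|111>.
psi = {'000': 0.6, '111': 0.8}

def phase_flip(state, q):
    # Z on qubit q: negate the amplitude of every term whose q-th bit is 1.
    return {b: (-amp if b[q] == '1' else amp) for b, amp in state.items()}

def parity_check(state, q1, q2):
    # Expectation of Z_q1 Z_q2, the repetition code's bit-flip syndrome.
    return sum((1 if b[q1] == b[q2] else -1) * amp * amp
               for b, amp in state.items())

def logical_x_expect(state):
    # <XXX>: XXX maps each basis state to its bitwise complement.
    flip = lambda b: ''.join('1' if c == '0' else '0' for c in b)
    return sum(amp * state.get(flip(b), 0.0) for b, amp in state.items())

flipped = phase_flip(psi, 0)  # now 0.6|000> - 0.8|111>

# Both bit-flip syndromes are blind to the phase flip...
assert parity_check(psi, 0, 1) == parity_check(flipped, 0, 1)
assert parity_check(psi, 1, 2) == parity_check(flipped, 1, 2)

# ...but the logical state flipped sign: 2*0.6*0.8 = 0.96 vs -0.96.
assert round(logical_x_expect(psi), 2) == 0.96
assert round(logical_x_expect(flipped), 2) == -0.96
```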

[0] I didn't actually wait two hours. But I could have waited a week and gotten the same result.

[1] The researchers' qubits are nowhere near as good as my DRAM. They had to run their error correction a billion times or so during the course of their two hours. (My DRAM refreshes maybe ten thousand times over the course of two hours, and one can look at DRAM refreshes as correcting something a bit like a repetition code.)

reply
Strilanc
17 hours ago
[-]
Author here: yes that's all correct.

This is perhaps not clear enough, but the title refers to a pattern. For classical bits on a quantum computer this pattern is already playing out (as shown in the cited experiments), and for quantum bits I think it's about to play out.

Classical storage of classical bits is still far more reliable, of course. Hell, a rock chucked into one bucket or another is still more reliable. We'll never beat the classical computer at storing classical bits... but the rock in a bucket has some harsh competition coming.

I should maybe also mention that arbitrarily good qubits are a step on the road, not the end. I've seen a few twitter takes making that incorrect extrapolation. We'll still need hundreds of these logical qubits. It's conceivable that quantity also jumps suddenly... but that'd require even more complex block codes to start working (not just surface codes). I'm way less sure if that will happen in the next five years.

reply
vlovich123
9 hours ago
[-]
But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical? I feel like it’s going to get exponentially harder and more expensive to get very small incremental gains, and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits). Happy to be proven wrong of course.
reply
packetlost
8 hours ago
[-]
> But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical?

Not GP but yes. I'm reasonably confident that we will have quantum computers that are large and stable enough to have a real quantum advantage, but that's mostly because I believe Moore's law is truly dead and we will see a plateau in 'classical' CPU advancement and memory densities.

> I feel like it’s going to get exponentially harder and more expensive to get very small incremental gains, and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits)

I don't think people appreciate or realize that a good chunk of the innovations necessary to "get there" with quantum are traditional (albeit specialized) engineering problems, not new research (though breakthroughs can speed it up). I'm a much bigger fan of the "poking lasers at atoms" style of quantum computer than the superconducting ones for this reason: the engineering is more like building cleaner lasers and better AOMs [0] than trying to figure out how to super-cool vats of silicon and copper. It's outside my area of expertise, but I would expect innovations that support better lithography to also benefit these types of systems, though less directly than the superconducting ones.

Source: I worked on hard-realtime control systems for quantum computers in the past. Left because the academic culture can be quite toxic.

[0]: https://en.wikipedia.org/wiki/Acousto-optic_modulator

reply
vlovich123
19 minutes ago
[-]
I don’t know how people can claim the science is solved and “it’s just engineering” when scaling up to non-trivial quantum circuits is literally the problem no one has solved; hand-waving it away as an “engineering problem” seems really disingenuous. Foundational science needs to be done to solve these problems.

Classical CPUs have slowed but not stopped. More importantly, quantum machines haven’t even been built yet, let alone been proven possible to scale up arbitrarily. They haven’t even demonstrated they can factor 17 faster than a classical computer.

reply
amluto
17 hours ago
[-]
I don’t really expect fancier codes to cause a huge jump in the number of logical qubits. At the end of the day, there’s some code rate (logical qubits / physical qubits) that makes a quantum computer work. The “FOOM” is the transition of that code rate from zero (the lifetime of a logical qubit is short) to something that is distinctly different from zero (the state lasts long enough to be useful with some credible code). Say the code rate is 0.001 when this happens. (I haven’t been in the field for a little while, but I’d expect higher, because those huuuuge codes have huuuuge syndromes, which isn’t so fun. But if true topological QC ever works, it will be a different story.) The code rate is unlikely to ever be higher than 1/7 or so, and it will definitely not exceed 1. So there’s at most a factor of 1000, and probably less, to be gained by improving the code rate. This isn’t an exponential or super-exponential FOOM.

A factor of 1000 may well be the difference between destroying Shor’s-algorithm-prone cryptography soon and destroying it later, though.
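
A back-of-envelope version of that bound (all numbers are the illustrative guesses above, not measured values):

```python
rate_at_foom = 0.001     # hypothetical code rate when logical qubits first last
rate_ceiling = 1 / 7     # rough guess at the best achievable code rate

max_gain = 1.0 / rate_at_foom              # 1000x, even if the rate reached 1
likely_gain = rate_ceiling / rate_at_foom  # ~143x at the 1/7 ceiling

print(max_gain, round(likely_gain))  # a bounded constant factor, not a FOOM
```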

reply
amluto
3 hours ago
[-]
I'll add some nuance here. In a classical computer, computing with coded bits is not particularly difficult. We've known how to do it with some degree of mathematical rigor for decades -- IIRC John von Neumann was interested in this topic. And we barely even need to do it explicitly: computers have accurate enough gates without explicit coding that merely error-correcting RAM (ECC-style) and data links (Ethernet, PCIe, etc) is good enough for almost all applications. Even in aerospace, usually we just have one extra majority vote over a few different computers.

Quantum computers are different. It seems quite unlikely that anyone will build the equivalent of, say, an XOR gate that takes two single physical qubits in and spits out two physical qubits (this is quantum land -- the standard gates neither create nor destroy qubits, so the number of inputs and outputs is the same) that works well enough to actually represent that particular operation in whatever software is being run. Instead each logical operation will turn into multiple physical operations that work like an error correcting code. The easy classical trick where your transistor is janky at 0.9V so you run it at 1.0V amounts to moving more electrons around per operation; the analogous approach corrects bit flips but not phase errors, and it makes your quantum computer stop being quantum.
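
One way to picture "each logical operation turns into multiple physical operations", using the classical repetition code from upthread (a hedged sketch; real quantum codes do this with transversal gates, not this toy):

```python
def encode(bit, n=59):
    # Repetition code: one logical bit becomes n physical copies.
    return n * (bit,)

def logical_xor(block_a, block_b):
    # The single logical XOR becomes one physical XOR per pair of bits.
    return tuple(a ^ b for a, b in zip(block_a, block_b))

def decode(physbits):
    # Majority vote recovers the logical bit despite a few physical errors.
    return sum(physbits) > len(physbits) / 2.0

assert decode(logical_xor(encode(True), encode(False))) == True
```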

And here's where it gets messy. The physical qubit technologies that are best for longish-term data storage may not be the same as the technologies that are good for computation, and those may not be the same technologies that are good for communication at a distance. (For example, photons are pretty good for transmitting quantum states, but transferring a qubit from a different technology to a photon state and back is not so easy, and demonstrations of computation with photons have been pretty limited.) As an extreme example, one can, in principle, store quantum states quite robustly and even compute with them if one can find the correct kind of unobtanium (materials with the appropriate type of non-Abelian anyon surface states), but, last I heard, no one had much of an idea how to get the qubit states off the chip even if such a chip existed.

So it is possible that we'll end up with a quantum computer that doesn't scale, at least for a while. There might be 20k physical qubits, and some code rate, and some number of logical quantum operations you can do on the logical qubits before they decay or you get bored, and very little ability to scale to more than one computer that can split up a computation between them. In that case, the code rate is a big deal.

reply
Havoc
16 hours ago
[-]
Does quantum speed even matter?

I would have thought a wide enough array of qubits could functionally do "anything" in one shot

reply
Strilanc
15 hours ago
[-]
Yes, speed matters. No, quantum computers can't do everything instantly even with unbounded qubits.

A well studied example is that it's impossible to parallelize the steps in Grover's algorithm. To find a preimage amongst N possibilities, with only black box access, you need Ω(sqrt(N)) sequential steps on the quantum computer [1].

Another well known case is that there's no known way to execute a fault tolerant quantum circuit faster than its reaction depth (other than finding a rewrite that reduces the depth, such as replacing a ripple carry adder with a carry lookahead adder) [2]. There's no known way to make the reaction depth small in general.

Another example is GCD (greatest common divisor). It's conjectured to be an inherently sequential problem (no polylog depth classical circuit) and there's no known quantum circuit for GCD with lower depth than the classical circuits.
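
For a sense of scale on the Grover example, the step count is about (pi/4)*sqrt(N), and the point of [1] is that those rotations can't be parallelized away (the arithmetic below is my illustration):

```python
import math

def grover_iterations(n_items):
    # ~ (pi/4) * sqrt(N) sequential Grover rotations to find one preimage.
    return math.floor(math.pi / 4 * math.sqrt(n_items))

print(grover_iterations(2**20))  # 804 sequential steps
print(grover_iterations(2**40))  # 823549: a million times more items,
                                 # only ~1024x more steps, all sequential
```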

[1]: https://arxiv.org/abs/quant-ph/9711070

[2]: https://arxiv.org/abs/1210.4626

reply
tbrownaw
12 hours ago
[-]
There is a complexity class called BQP, which is more or less the things that quantum computers are good at. This includes P, which is more or less the things that normal computers are good at. The things that quantum computers are better at would, more or less, be one minus the other. One interesting point is that BQP probably doesn't include all of the rather famous NP class.
reply
tsimionescu
16 hours ago
[-]
They most certainly can't, not even close to it. They can handle a very limited subset of problems, and not at all in one shot - just far, far fewer shots than a classical computer. But even if you reduce an O(e^n) problem to an O(n) or O(n²) problem, that's not instantaneous, and the speed with which you perform these n or n² operations still matters.
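
Rough numbers behind that point (my arithmetic, using n = 64 as an arbitrary example size):

```python
n = 64
classical_ops = 2 ** n   # exponential: ~1.8e19 operations
quantum_ops = n ** 2     # polynomial: 4096 operations

# Quadratically (here, astronomically) fewer operations, but each one still
# takes real gate time, so the speed per operation still matters.
print(classical_ops, quantum_ops)
```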
reply
moffkalast
11 hours ago
[-]
Everything is possible when a quantum scientist needs to apply for another grant. Less so after the project is already signed.
reply
oh_my_goodness
8 hours ago
[-]
Look at the last plot.
reply
sparedwhistle
19 hours ago
[-]
What the hell is FOOM?
reply
MattPalmer1086
19 hours ago
[-]
Usually refers to a sudden increase in AI intelligence to super intelligence, i.e. the singularity. Basically an exponential increase in capability.
reply
austinjp
14 hours ago
[-]
It may also be a reference to the (comparatively ancient) "Pentium go F00F" bug.

https://en.wikipedia.org/wiki/Pentium_F00F_bug

reply
AlexCoventry
14 hours ago
[-]
Explosive ignition of a fire.
reply
cubefox
13 hours ago
[-]
It was Yudkowsky's colloquial term for hard takeoff:

https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeo...

reply