
Continuing:

Google (or anyone else) hasn't demonstrated an implementation of an error-correcting code, so we have no data points for a model-free "ruler extrapolation" of logical error rate vs. lattice size.
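
(For what I mean by that: the usual model-based alternative is the standard surface-code scaling ansatz eps_d ~ A * (p/p_th)^((d+1)/2). A minimal sketch below; the function name and all numbers are made up for illustration, not fitted to any published data.)

    # Standard surface-code scaling ansatz: eps_d ~ A * (p / p_th)^((d+1)/2),
    # where d is the code distance (lattice size), p the physical error rate,
    # and p_th the threshold. With measured eps_d at several d one could fit
    # A and p/p_th and extrapolate; without data points, it stays a model.
    def logical_error_rate(d, p, p_th=0.01, A=0.1):   # A, p_th: hypothetical values
        return A * (p / p_th) ** ((d + 1) / 2)

    for d in (3, 5, 7, 11):   # below threshold: logical error shrinks with d
        print(d, logical_error_rate(d, p=0.005))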

In fact, I think the Sycamore qubits were "pre-threshold", i.e. no error-correction gain was possible even in theory. I'd welcome a correction or confirmation on that. I remember that the readout fidelity in particular was poor.
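
(To spell out "pre-threshold" in the same toy model: once the physical error rate p is above p_th, the base (p/p_th) exceeds 1, so the modelled logical error rate grows with the code distance and scaling up the lattice only hurts. The numbers below are again purely illustrative.)

    # Pre-threshold regime in the same ansatz: p > p_th makes (p / p_th) > 1,
    # so eps_d increases with d -- a bigger lattice gives a worse logical qubit.
    p, p_th, A = 0.012, 0.01, 0.1   # hypothetical: physical error rate above threshold
    for d in (3, 5, 7):
        eps_d = A * (p / p_th) ** ((d + 1) / 2)
        print(d, eps_d)             # grows with d: no error-correction gain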

Furthermore, I would argue that the large readout errors somewhat weaken the significance of the observed scaling of the total error.

But don't get me wrong, it's still a monumental achievement.


