```julia
lines!(ax, x, x, color = :black)
f
```
Plotting this function in its $\tanh$ form versus its exponential form makes the numerical instability of the $\tanh$ apparent. Fortunately, the exponential form also makes it clear that the function is not only symmetric about $y = x$, but dominated by smaller values of $x$:
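Since the exact function is not reproduced in this excerpt, here is a hedged sketch only: the standard check-node ("box-plus") update on log-likelihood ratios admits both a $\tanh$ form and an equivalent exponential (Jacobian) form. The names `boxplus_tanh` and `boxplus_exp` below are ours, not the tutorial's.

```julia
# Hedged sketch: two algebraically equivalent forms of the box-plus (check-node)
# update on log-likelihood ratios x and y; function names are illustrative.
boxplus_tanh(x, y) = 2 * atanh(tanh(x / 2) * tanh(y / 2))

# Exponential (Jacobian) form: numerically stable for large |x|, |y|, and
# visibly dominated by the smaller-magnitude argument.
boxplus_exp(x, y) = sign(x) * sign(y) * min(abs(x), abs(y)) +
                    log1p(exp(-abs(x + y))) - log1p(exp(-abs(x - y)))
```

For large inputs such as `boxplus_tanh(40.0, 40.0)`, `tanh(20.0)` rounds to `1.0` in `Float64` and the result overflows to `Inf`, whereas `boxplus_exp(40.0, 40.0)` returns roughly `39.3`.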
The overwhelming majority of convergences, for both $X$ and $Z$, occurred within one or two iterations. This is plausible for a couple of reasons. First, note that all variable nodes have degree four while all check nodes have degree nine, which is large for an LDPC code. At low error rates, when errors are sparsely distributed over 254 qubits, it may be common that a single check node connects to at most one incorrect variable node, and the degrees are high enough to immediately flip any bit.
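As a quick sanity check of such degree claims (a hedged sketch; `H` below is a stand-in parity-check matrix, not the tutorial's code), the node degrees can be read off the parity-check matrix directly:

```julia
# Hedged sketch: H is a stand-in binary parity-check matrix
# (rows = check nodes, columns = variable nodes).
H = rand(Bool, 96, 254)                   # hypothetical placeholder matrix
variable_degrees = vec(sum(H, dims = 1))  # column sums = variable-node degrees
check_degrees    = vec(sum(H, dims = 2))  # row sums = check-node degrees
```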
Rerunning the simulation without using Bayes' theorem returns nearly symmetric $X$ and $Z$ iteration counts. For completeness, we include both runs on a single plot. The first blue point on the left is due to a single convergence error, and the spike on the left further suggests that direct sampling is either not appropriate for this error rate or has not been sampled enough times for accuracy.
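To illustrate the kind of update the Bayes'-theorem variant refers to, here is a hedged sketch under an assumed depolarizing channel; the function name and exact update are ours, not the tutorial's. After decoding $X$, the priors for the $Z$ decoding can be conditioned on the decoded $X$ error.

```julia
# Hedged sketch, assuming depolarization: X, Y, Z each occur with probability p/3.
# Conditioning the Z-error priors on the decoded X error x̂:
#   P(Z = 1 | X = 1) = P(Y) / (P(X) + P(Y)) = 1/2
#   P(Z = 1 | X = 0) = P(Z) / (P(I) + P(Z)) = (p/3) / (1 - 2p/3)
z_priors(x_hat::AbstractVector{Bool}, p) =
    [x ? 1 / 2 : (p / 3) / (1 - 2p / 3) for x in x_hat]
```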
## Example 2: Single-Shot Decoding With Metachecks
Next we're going to look at two single-shot decoding schemes. We will call the scheme of [quintavalle2021single](@cite) scheme one and that of [higgott2023improved](@cite) scheme two. We encourage the reader to check out both papers directly for details. Briefly, both schemes consider data errors, as in the previous example, plus additional measurement errors (on the syndrome values). The code family we will look at has an extra matrix, $M$, with the property that $Ms = 0$ for any valid syndrome $s$ of the code. Then, assuming the measurement error $s_e$ did not take us from one valid syndrome to another, $M(s + s_e) = Ms_e \neq 0$. Whether or not this happens depends on the properties of the classical code with $M$ as its parity-check matrix. To correct the syndrome, we decode using the Tanner graph based on $M$. Then we use the corrected syndrome to decode the stabilizers.
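As a hedged sketch of this metacheck step (all names below, including `decode_from_tanner_graph`, are illustrative placeholders, not functions from the tutorial), working over $\mathbb{F}_2$:

```julia
# Hedged sketch over GF(2); M, s, s_e are stand-ins and decode_from_tanner_graph
# is an illustrative placeholder, not a function defined in the tutorial.
s_measured = mod.(s .+ s_e, 2)          # valid syndrome s corrupted by measurement error s_e
metasyndrome = mod.(M * s_measured, 2)  # M * s == 0, so this equals M * s_e (mod 2)
if any(!iszero, metasyndrome)
    # decode the measurement error on the Tanner graph of M, then repair s
    ŝ_e = decode_from_tanner_graph(M, metasyndrome)
    s_repaired = mod.(s_measured .+ ŝ_e, 2)
    # s_repaired is then used to decode the stabilizers, as in the previous example
end
```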
By distance eight, scheme one was difficult to run without a cluster, and we did not attempt distance nine with this scheme. This is problematic for many reasons, the most important being that many code families do not "settle in" to their asymptotic behavior until distances much higher than this (although the exact distance depends on the decoder being used). For example, for surface codes under minimum-weight perfect matching (MWPM), anything below distance 20 is considered the small-code regime (compare this to distance seven for the same code family under trellis decoding).