docs/src/Classical/linear_code.md (+4 -11)
@@ -31,6 +31,8 @@ cardinality
 rate
 ```

+See also: `encode`
+
 If the linear code was created by passing in a generator (parity-check) matrix, then this matrix is stored in addition to the standard form. Note that this matrix is potentially over-complete (has more rows than its rank). The standard form is returned when the optional parameter `stand_form` is set to `true`. Some code families are not constructed using these matrices. In these cases, the matrices are initially `missing` and are computed and cached when these functions are called for the first time. Direct access to the underlying structs is not recommended.
 ```@docs
 generator_matrix
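As a rough sketch of the access pattern described in that paragraph, assuming the `LinearCode` constructor and the optional `stand_form` flag behave as stated (exact signatures may differ; see the docstrings):

```julia
using Oscar, CodingTheory

F = Oscar.GF(2)
G = matrix(F, [1 0 0 1 1; 0 1 0 1 0; 0 0 1 0 1])  # generator matrix passed in by the user
C = LinearCode(G)

generator_matrix(C)        # the stored (possibly over-complete) matrix
generator_matrix(C, true)  # standard-form generator matrix
parity_check_matrix(C)     # computed and cached on first access if initially `missing`
```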
@@ -50,9 +52,6 @@ standard_form_permutation
 ```

 The minimum distance of some code families is known and is set during construction. The minimum distance is automatically computed in the constructor for codes which are deemed "small enough". Otherwise, the minimum distance is `missing`. Primitive bounds on the minimum distance are given by
-```@docs
-minimum_distance_lower_bound
-```

 ```@docs
 minimum_distance_upper_bound
@@ -81,24 +80,18 @@ The minimum distance and its bounds may be manually set as well. Nothing is done
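As a sketch of how the bound accessors and the manual setting mentioned above might be used, assuming a known-distance constructor such as `HammingCode`; the setter at the end is an assumption rather than a documented call:

```julia
using Oscar, CodingTheory

C = HammingCode(2, 3)            # [7, 4, 3]; the distance is known at construction
minimum_distance_lower_bound(C)  # expect 3
minimum_distance_upper_bound(C)  # expect 3

# For codes whose distance is `missing`, the bounds bracket the unknown value and,
# per the paragraph above, may be tightened manually; the setter name is an assumption:
# set_minimum_distance!(C, 3)
```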
docs/src/Classical/new_codes_from_old.md (+6 -10)
@@ -22,27 +22,26 @@ construction_X3
 ```

 The direct sum code has generator matrix `G1 ⊕ G2` and parity-check matrix `H1 ⊕ H2`.
+
 ```@docs
-CodingTheory.⊕
+⊕
 ```

 The generator matrix of the (direct) product code is the Kronecker product of the generator matrices of the inputs.
+
 ```@docs
-CodingTheory.×
+×
 ```

 The parity-check matrix of the tensor product code is the Kronecker product of the parity-check matrices of the inputs.
-```@docs
-CodingTheory.⊗
-```

 There is some debate on how to define this product. This is known to often be the full ambient space.
 ```@docs
 entrywise_product_code
 ```

 ```@docs
-CodingTheory./
+/
 ```

 `juxtaposition` is representation dependent and therefore works on the potentially over-complete generator matrices, not on the standard form.
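A rough sketch of the operators documented above, assuming `HammingCode` and `RepetitionCode` constructors and that `⊕` and `×` act directly on code objects; the parameters follow the standard constructions, but the calls themselves should be checked against the docstrings:

```julia
using Oscar, CodingTheory

C1 = HammingCode(2, 3)     # [7, 4, 3]
C2 = RepetitionCode(2, 3)  # [3, 1, 3]

C_sum  = C1 ⊕ C2   # direct sum: generator matrix G1 ⊕ G2, parameters [10, 5]
C_prod = C1 × C2   # (direct) product: generator matrix kron(G1, G2), parameters [21, 4]
```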
@@ -52,10 +51,7 @@ juxtaposition

 ## Methods

-If `C` is a quasi-cyclic code, `permute_code` returns a `LinearCode` object.
-```@docs
-permute_code
-```
+If `C` is a quasi-cyclic code, `permute_code` returns a `LinearCode` object. See: `permute_code`.

 The most common way to extend a code is to add an extra column to the generator matrix whose values make the sum of the rows zero. This is called an even extension and is the default for `extend(C)`. Alternatively, this new column may be inserted at any index `c` in the matrix, e.g. `extend(C, c)`. In the most general case, one may provide a vector `a` and define the values of the new column to be `-a` dot the row. The standard definition is clearly just the special case that `a` is the all-ones vector.
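A sketch of the extension variants just described; the two commented calls follow the paragraph's notation and are assumptions about the exact signatures:

```julia
using Oscar, CodingTheory

C = HammingCode(2, 3)  # [7, 4, 3]
C_ext = extend(C)      # even extension: appends a parity column, giving [8, 4, 4]

# extend(C, 3)                                # insert the new column at index 3 (assumed form)
# a = matrix(Oscar.GF(2), 1, 7, ones(Int, 7)) # the all-ones vector recovers the even extension
# extend(C, a)                                # general extension with coefficient vector `a` (assumed form)
```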
docs/src/LDPC/codes.md (+3 -35)
@@ -31,14 +31,7 @@ Parity-check matrix: 6 × 9
 1 1 0 0 0 0 1 1 1
 ```

-Random regular LDPC codes maybe be constructed via
-```@docs
-regular_LDPC_code
-```
-and irregular LDPC codes via
-```@docs
-irregular_LDPC_code
-```
+Random regular LDPC codes may be constructed via `regular_LDPC_code` and irregular LDPC codes via `irregular_LDPC_code`.

 ## Attributes
 The polynomials ``\lambda(x)`` and ``\rho(x)`` as well as the degrees of each variable and check nodes are computed upon construction.
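A hedged sketch of the random constructions mentioned above; the argument order of `regular_LDPC_code` and the shape of the irregular call are assumptions, so consult the docstrings:

```julia
using Oscar, CodingTheory

# a random (3, 6)-regular LDPC code of length 500: every column of the parity-check
# matrix has weight 3 and every row has weight 6 (argument order assumed)
C = regular_LDPC_code(500, 3, 6)

# an irregular code would instead be specified through degree distributions,
# i.e. the λ(x) and ρ(x) mentioned under Attributes (assumed call shape):
# C_irr = irregular_LDPC_code(500, λ, ρ)
```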
@@ -116,33 +109,8 @@ To count or explicitly enumerate the short cycles of the Tanner graph, use
 count_short_cycles
 ```

-```@docs
-shortest_cycles
-```
-
-Various information about the ACE values of cycles in the Tanner graph may be computed with the following functions.
-```@docs
-ACE_spectrum
-```
-
-```@docs
-shortest_cycle_ACE
-```
+See also: `shortest_cycles`

-
-```@docs
-ACE_distribution
-```
-
-```@docs
-average_ACE_distribution
-```
-
-```@docs
-median_ACE_distribution
-```
-
-```@docs
-mode_ACE_distribution
-```
+Various information about the ACE values of cycles in the Tanner graph may be computed with the following functions: `ACE_spectrum`, `shortest_cycle_ACE`, `ACE_distribution`, `average_ACE_distribution`, `median_ACE_distribution`, `mode_ACE_distribution`.
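To make the cycle and ACE utilities concrete, a minimal sketch assuming an `LDPCCode` constructor that accepts a parity-check matrix; the exact arguments and return formats of the analysis functions are assumptions:

```julia
using Oscar, CodingTheory

F = Oscar.GF(2)
H = matrix(F, [1 1 1 0 0 0; 0 0 1 1 1 0; 0 1 0 0 1 1])
L = LDPCCode(H)

count_short_cycles(L)  # counts of short cycles in the Tanner graph
shortest_cycles(L)     # explicit list of the shortest cycles
ACE_distribution(L)    # ACE values of cycles (return format assumed)
```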
docs/src/Tutorials/Message Passing.md (+2 -8)
@@ -247,7 +247,6 @@ lines!(ax, x, x, color = :black)
 f

 ```
-

 Plotting this function with the $\tanh$ form versus plotting it with the exponentials makes the numerical instability of the $\tanh$ apparent. Fortunately, the exponential form makes it apparent that it is not only symmetric about $y = x$, but also that the function is dominated by smaller values of $x$:
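The instability referred to here can be reproduced independently of the tutorial's code with the two standard algebraic forms of the pairwise check-node ("box-plus") update; these definitions are written from the usual identities rather than taken from the package:

```julia
# tanh form: saturates in Float64 once tanh(x/2) rounds to exactly 1.0
boxplus_tanh(x, y) = 2atanh(tanh(x / 2) * tanh(y / 2))

# equivalent "min plus correction" rewrite of the exponential form, which stays finite
boxplus_stable(x, y) = sign(x) * sign(y) * min(abs(x), abs(y)) +
                       log1p(exp(-abs(x + y))) - log1p(exp(-abs(x - y)))

boxplus_tanh(40.0, 40.0)    # Inf
boxplus_stable(40.0, 40.0)  # ≈ 39.307
```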
@@ -406,7 +405,7 @@ julia> Gallager_B(H, y)
 ```

 ## Decimation
-Decimation was previously used in message-passing applications outside of error correction and was applied to stabilizer codes in [](@cite). The idea is to freeze the value of a variable node. We can either do this from the start to obtain a so-called *genie-aided* decoder, or we can periodically pause message passing to fix a bit. In *guided decimation*, we pause every fixed number of rounds and freeze the value of the variable node with the highest log-likelihood ratio. In *automated decimation*, we pause after every iteration and fix any bit whose absolute value has passed a certain threshold.
+Decimation was previously used in message-passing applications outside of error correction and was applied to stabilizer codes in [yao2024belief](@cite). The idea is to freeze the value of a variable node. We can either do this from the start to obtain a so-called *genie-aided* decoder, or we can periodically pause message passing to fix a bit. In *guided decimation*, we pause every fixed number of rounds and freeze the value of the variable node with the highest log-likelihood ratio. In *automated decimation*, we pause after every iteration and fix any bit whose absolute value has passed a certain threshold.

 It is important to note that when a variable node is fixed to a specific value, the decoder is now sampling possible solutions with that fixed bit, which is different from the ML and MAP problems above. Furthermore, if there is a unique solution and a bit is fixed which does not match the solution, the decoder will fail instead of correcting that bit. For example, fixing the most reliable bit in guided decimation may mean fixing a bit which is still far from reliable and could go either way. On the other hand, fixing a bit could help the decoder converge faster and also break out of trapping sets. In this sense, decimation can be very helpful when decoding degenerate stabilizer codes where there are many valid solutions and BP has a difficult time picking one to converge to.
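As an illustrative-only sketch of one guided-decimation step (this is not the package's decoder interface; the LLR values are hypothetical and use the log(P(0)/P(1)) convention):

```julia
llrs   = [0.3, -2.7, 1.1, 0.05]       # posterior LLRs after a few rounds of message passing
frozen = Dict{Int, Int}()             # variable node index => frozen bit value

idx = argmax(abs.(llrs))              # most reliable variable node
frozen[idx] = llrs[idx] >= 0 ? 0 : 1  # LLR >= 0 decodes to 0, LLR < 0 to 1
# message passing then resumes with variable node `idx` clamped to this value
```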
@@ -751,7 +750,6 @@ lines!(noise, FER, color = :red)
 The overwhelming majority of convergences, for both $X$ and $Z$, occurred within one or two iterations. This is plausible for a couple of reasons. First, note that all variable nodes have degree four but all check nodes have degree nine! This is large for an LDPC code. For low error rates, when errors are sparsely distributed over 254 qubits, it may be common that a single check node does not connect to more than one incorrect variable node, and the degrees are high enough to immediately flip any bit.
@@ -805,8 +802,6 @@ false
 Rerunning the simulation without using Bayes' Theorem returns almost symmetric $X$ and $Z$ iteration counts. For completeness, we include both runs on a single plot. The first blue point on the left is due to a single convergence error, and the spike on the left further shows that direct sampling is either not appropriate for this error rate or that it has not been sampled enough times for accuracy.

 ## Example 2: Single-Shot Decoding With Metachecks

 Next we're going to look at two single-shot decoding schemes. We will call the scheme of [quintavalle2021single](@cite) scheme one and that of [higgott2023improved](@cite) scheme two. We encourage the reader to check out both papers directly for details. Briefly, both schemes consider data errors, as in the previous example, plus additional measurement errors (on the syndrome values). The code family we will look at has an extra matrix, $M$, with the property that $Ms = 0$ for any valid syndrome $s$ of the code. Then, assuming that the measurement error $s_e$ didn't take us from a valid syndrome to another valid syndrome, $M(s + s_e) = Ms_e \neq 0$. Whether or not this happens depends on the properties of the classical code with $M$ as its parity-check matrix. To correct the syndrome, we decode using the Tanner graph based on $M$. Then we use the corrected syndrome to decode the stabilizers.
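The metacheck identity can be verified on a toy example; the matrices below are illustrative only and are not the code family used in the tutorial:

```julia
# H has a redundant row, so a nontrivial metacheck M with M * H == 0 (mod 2) exists
H = [1 1 0; 0 1 1; 1 0 1]
M = [1 1 1]

e   = [1, 0, 0]       # data error
s   = (H * e) .% 2    # valid syndrome: M * s == 0
s_e = [1, 0, 0]       # measurement error on the syndrome bits

(M * s) .% 2          # [0]            -- passes the metacheck
(M * (s + s_e)) .% 2  # [1] == M * s_e -- measurement error detected
```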
 By distance eight, scheme one was difficult to run without a cluster. We did not attempt distance nine with this scheme. This is problematic for many reasons, the most important of which is that many code families do not "settle in" to their asymptotic behaviors until distances much higher than this (although the exact distance depends on the decoder being used). For example, for the surface codes under minimum-weight perfect matching (MWPM), anything below distance 20 is considered the small-code regime (compare this to distance seven for the same code family using trellis decoding).