Commit 5a2f1d4
Fix test Rademacher sampler (#8488)
**Context:**

### What is the Rademacher sampler

We use such a sampler in our finite-diff gradient module. Long story short, it draws uniformly from `{+1, -1}`. As a result, the `np.var` of such samples can be shown to be **not normally distributed**; in fact, via the [textbook](https://statproofbook.github.io/P/norm-chi2.html) relation between squared normals and the chi-squared distribution, it satisfies

$$S^2 = 1 - \bar{X}^2 \le 1,$$

where $\bar{X}$ is the sample mean, which is clearly not symmetric about the expected variance $\sigma^2 = 1$. Furthermore, the CDF of this distribution looks like

<img width="1200" height="700" alt="Code_Generated_Image(1)" src="https://github.com/user-attachments/assets/f37967e4-d347-42b9-83fe-c3d94fcf6b4c" />

Hence our original test for this sampler, which validated the sample variance with `atol = 4/N`, had a considerable chance of failing even on correct samples from the correct distribution.

**Description of the Change:** Work through the statistics and set up the correct threshold: since $N\bar{X}^2 \sim \chi^2(1)$, the test now checks the sample variance against the one-sided lower bound $1 - \chi^2_{1-\alpha}(1)/N$ instead of a symmetric tolerance.

**Benefits:** The test no longer flakes on statistically valid samples.

**Possible Drawbacks:**

**Related GitHub Issues:** [sc-90962]
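The identity $S^2 = 1 - \bar{X}^2$ and the resulting threshold can be checked numerically. The following is an illustrative sketch, not part of the commit; `N` and the seed are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility
N = 1000                         # arbitrary sample size
x = rng.choice([-1.0, 1.0], size=N)  # Rademacher samples

# Every x_i**2 equals 1, so the sample variance
# np.var(x) = mean(x**2) - mean(x)**2 collapses to 1 - mean(x)**2.
assert np.isclose(np.var(x), 1 - np.mean(x) ** 2)

# By the CLT, sqrt(N) * mean(x) is approximately N(0, 1), so N * mean(x)**2
# is approximately chi-squared with 1 degree of freedom. A one-sided
# 3-sigma-equivalent lower bound on the sample variance is therefore:
alpha = 0.0027
lower_bound = 1 - stats.chi2.ppf(1 - alpha, df=1) / N
print(f"lower bound on S^2 for N={N}: {lower_bound:.4f}")
```

For 3-sigma equivalence, $\chi^2_{0.9973}(1) \approx 3^2 = 9$, so the bound sits just below 1 for large $N$ and tightens as $N$ grows.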
1 parent ec1cc5f commit 5a2f1d4

File tree

1 file changed: +22 −7 lines changed

tests/gradients/finite_diff/test_spsa_gradient.py

Lines changed: 22 additions & 7 deletions

@@ -17,6 +17,7 @@
 import numpy as np
 import pytest
 from default_qubit_legacy import DefaultQubitLegacy
+from scipy import stats
 
 import pennylane as qml
 from pennylane import numpy as pnp
@@ -89,9 +90,7 @@ def test_same_seeds(self):
     @pytest.mark.parametrize(
         "ids, num",
         [
-            pytest.param(
-                list(range(5)), 5, marks=pytest.mark.xfail(reason="To be fixed at sc-90962")
-            ),
+            (list(range(5)), 5),
             ([0, 2, 4], 5),
             ([0], 1),
             ([2, 3], 5),
@@ -105,14 +104,30 @@ def test_mean_and_var(self, ids, num, N, seed):
         ids_mask = np.zeros(num, dtype=bool)
         ids_mask[ids] = True
         outputs = [_rademacher_sampler(ids, num, rng=rng) for _ in range(N)]
+
         # Test that the mean of non-zero entries is approximately right
         assert np.allclose(np.mean(outputs, axis=0)[ids_mask], 0, atol=4 / np.sqrt(N))
+
         # Test that the variance of non-zero entries is approximately right
-        assert np.allclose(np.var(outputs, axis=0)[ids_mask], 1, atol=4 / N)
-        # Test that the mean of zero entries is exactly 0, because all entries should be
+        # DEV NOTE: For Rademacher distribution X ∈ {-1, +1}, the sample variance S² has a special
+        # property: S² = 1 - X̄². Since X̄ ~ N(0, 1/N), we have N·X̄² ~ χ²(1) with df=1 (NOT df=N-1).
+        # This is the key insight: the variance is completely determined by the mean.
+        #
+        # For a one-sided lower bound test at 99.73% confidence (3-sigma equivalent, α = 0.0027):
+        # We want P(S² < s_lower) = α, which translates to P(N·X̄² > N(1 - s_lower)) = α.
+        # Since N·X̄² ~ χ²(1), we need: N(1 - s_lower) = χ²_{0.9973}(1)
+        # Therefore: s_lower = 1 - χ²_{0.9973}(1) / N
+        alpha = 0.0027  # 99.73% confidence (3-sigma equivalent)
+        chi2_critical = stats.chi2.ppf(1 - alpha, df=1)
+        lower_bound = 1 - chi2_critical / N
+        sample_vars = np.var(outputs, axis=0)[ids_mask]
+        assert np.all(sample_vars >= lower_bound), (
+            f"Sample variance {sample_vars} fell below lower bound {lower_bound} "
+            f"at {100*(1-alpha):.2f}% confidence level (3-sigma equivalent)"
+        )
+
+        # Test that all the zero entries are exactly 0
         assert np.allclose(np.mean(outputs, axis=0)[~ids_mask], 0, atol=1e-8)
-        # Test that the variance of zero entries is exactly 0, because all entries are the same
-        assert np.allclose(np.var(outputs, axis=0)[~ids_mask], 0, atol=1e-8)
 
 
 class TestSpsaGradient:
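For context on why the removed `atol = 4/N` assertion was flaky: its per-entry failure probability follows from the same χ²(1) law and, notably, does not shrink as `N` grows. A quick sketch of that calculation:

```python
from scipy import stats

# The old test asserted np.allclose(var, 1, atol=4/N). Since S² = 1 - X̄² ≤ 1,
# it failed exactly when 1 - S² = X̄² > 4/N, i.e. when N·X̄² > 4.
# With N·X̄² ~ χ²(1), that probability is a constant independent of N:
p_fail = 1 - stats.chi2.cdf(4, df=1)
print(f"per-entry failure probability: {p_fail:.4f}")  # about 0.0455
```

Roughly a 4.6% chance of failure per tested entry on perfectly valid samples, which matches the "considerable chance to fail" described in the commit message.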
