@@ -387,6 +387,101 @@ of this is that the longer you run a simulation, the better you know your
387387results. Therefore, by running a simulation long enough, it is possible to
388388reduce the stochastic uncertainty to arbitrarily low levels.
389389
390+ Skewness
391+ ++++++++
392+
393+ The `skewness`_ of a population quantifies the asymmetry of the probability
394+ distribution around its mean. Positive and negative skewness indicate a longer
395+ or heavier right and left tail, respectively. Let :math:`x_1, \ldots, x_n` be
396+ the per-realization values for a bin, with sample mean :math:`\bar{x}` and
397+ sample central moments:
398+
399+ .. math::
400+
401+     m_k \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(x_i - \bar{x}\bigr)^k.
402+
403+ OpenMC reports the *adjusted Fisher-Pearson skewness* (defined for :math:`n
404+ \ge 3`), which is commonly used in many statistical packages:
405+
406+ .. math::
407+
408+     G_1 \;=\; \frac{\sqrt{n(n-1)}}{n-2} \cdot \frac{m_3}{m_2^{3/2}},
409+
410+ where :math:`m_2` and :math:`m_3` are the biased sample second and third
411+ central moments, respectively.
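As a sanity check, :math:`G_1` can be computed directly from the biased central moments defined above. The following standalone sketch (an illustration, not OpenMC's internal implementation) mirrors the formula:

```python
import math

def central_moment(x, k):
    """Biased k-th sample central moment, m_k = (1/n) * sum((x_i - xbar)**k)."""
    n = len(x)
    xbar = sum(x) / n
    return sum((xi - xbar) ** k for xi in x) / n

def adjusted_skewness(x):
    """Adjusted Fisher-Pearson skewness G1 (defined for n >= 3)."""
    n = len(x)
    if n < 3:
        raise ValueError("G1 requires at least 3 realizations")
    m2 = central_moment(x, 2)
    m3 = central_moment(x, 3)
    return math.sqrt(n * (n - 1)) / (n - 2) * m3 / m2 ** 1.5

# A symmetric sample has zero skewness; a long right tail gives G1 > 0.
print(adjusted_skewness([1.0, 2.0, 3.0, 4.0, 5.0]))       # 0.0 (symmetric)
print(adjusted_skewness([1.0, 1.0, 1.0, 1.0, 10.0]) > 0)  # True (right tail)
```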
412+
413+ Kurtosis
414+ ++++++++
415+
416+ The `kurtosis`_ of a population quantifies the tail weight (also called
417+ tailedness) of the probability distribution relative to a normal distribution.
418+ Positive excess kurtosis indicates *heavier tails*, whereas negative excess
419+ kurtosis indicates *lighter tails*. Kurtosis is especially useful for
420+ identifying bins where occasional extreme scores dominate the uncertainty.
421+ OpenMC reports the *adjusted excess kurtosis* (defined for :math:`n \ge 4`):
422+
423+ .. math::
424+
425+     G_2 \;=\; \frac{n-1}{(n-2)(n-3)}
426+     \left[(n+1)\,\frac{m_4}{m_2^{2}} \;-\; 3(n-1)\right],
427+
428+ where :math:`m_2` and :math:`m_4` are the biased sample second and fourth
429+ central moments, respectively. For a perfectly normal distribution, the
430+ excess kurtosis is :math:`0`.
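The same moment machinery gives :math:`G_2`. The helper below is an illustrative standalone sketch of the formula above, not OpenMC code:

```python
def central_moment(x, k):
    """Biased k-th sample central moment about the mean."""
    n = len(x)
    xbar = sum(x) / n
    return sum((xi - xbar) ** k for xi in x) / n

def adjusted_excess_kurtosis(x):
    """Adjusted excess kurtosis G2 (defined for n >= 4); ~0 for normal data."""
    n = len(x)
    if n < 4:
        raise ValueError("G2 requires at least 4 realizations")
    m2 = central_moment(x, 2)
    m4 = central_moment(x, 4)
    return (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * m4 / m2 ** 2 - 3 * (n - 1))

# Hand-checkable case: for [1, 2, 3, 4, 5], m2 = 2 and m4 = 6.8, so
# G2 = (4/6) * (6 * 6.8 / 4 - 12) = -1.2 (lighter tails than a normal).
print(adjusted_excess_kurtosis([1.0, 2.0, 3.0, 4.0, 5.0]))
```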
431+
432+ Variance of Variance
433+ ++++++++++++++++++++
434+
435+ The variance of the variance (VOV), i.e., the squared relative statistical
436+ uncertainty of the estimated variance of the mean, measures the *stability of
437+ the sample variance* :math:`s^2` and, by extension, the reliability of the
438+ reported relative errors. A high VOV means that the error bars themselves are
439+ noisy, often due to heavy tails, skewness, or too few realizations.
440+
441+ .. math::
442+
443+     VOV \;=\; \frac{s^2(s_{\bar{X}}^2)}{s_{\bar{X}}^4} \;=\; \frac{1}{n}\left(\frac{m_4}{m_2^2} - 1\right),
444+
445+ where :math:`s_{\bar{X}}^2` is the estimated variance of the mean and
446+ :math:`s^2(s_{\bar{X}}^2)` is the estimated variance of :math:`s_{\bar{X}}^2`.
447+ The MCNP manual suggests a hard threshold of :math:`VOV < 0.1` to improve the
448+ probability of forming a reliable confidence interval. However, OpenMC does
449+ not enforce a universal cut-off because the suitability of any single
450+ threshold depends strongly on problem specifics (estimator choice,
451+ variance-reduction settings, tally binning, or even the effective sample size).
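In terms of the raw sums of deviations, the MCNP-style estimator is :math:`\sum d_i^4 / (\sum d_i^2)^2 - 1/n`, which is a quick way to check the moment form above. This is an illustrative sketch, not OpenMC's internal accumulator:

```python
def variance_of_variance(x):
    """MCNP-style VOV estimate: sum(d**4) / (sum(d**2))**2 - 1/n."""
    n = len(x)
    xbar = sum(x) / n
    d2 = sum((xi - xbar) ** 2 for xi in x)
    d4 = sum((xi - xbar) ** 4 for xi in x)
    return d4 / d2 ** 2 - 1.0 / n

# Hand-checkable: for [1, 2, 3, 4, 5], sum(d**4) = 34 and sum(d**2) = 10,
# so VOV = 34/100 - 1/5 = 0.14.  For well-behaved data VOV shrinks as ~1/n.
print(variance_of_variance([1.0, 2.0, 3.0, 4.0, 5.0]))
```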
452+
453+
454+ Normality Tests (D'Agostino-Pearson)
455+ ++++++++++++++++++++++++++++++++++++
456+
457+ These normality tests verify the hypothesis that tally fluctuations are
458+ *approximately normal*, a working assumption behind many Monte Carlo
459+ diagnostics and `confidence-interval heuristics`_. Tests are provided for:
460+ (i) skewness only, (ii) kurtosis only, and (iii) the *omnibus* combination.
461+ OpenMC uses the finite-sample-adjusted skewness :math:`G_1` and excess
462+ kurtosis :math:`G_2` above to construct standardized normal scores
463+ :math:`Z_1` (from :math:`G_1`) and :math:`Z_2` (from :math:`G_2`) via the
464+ D'Agostino-Pearson transformations. The omnibus statistic is
465+
466+ .. math::
467+
468+     K^2 \;=\; Z_1^{\,2} \;+\; Z_2^{\,2}
469+     \;\sim\; \chi^2_{(2)} \quad \text{under } H_0:\ \text{normality}.
470+
471+ OpenMC reports :math:`Z_1`, :math:`Z_2`, :math:`K^2`, and their p-values when
472+ the prerequisites are met (skewness for :math:`n \ge 3`; kurtosis and omnibus
473+ for :math:`n \ge 4`). Given a user-chosen significance level :math:`\alpha`
474+ (default :math:`0.05`), reject :math:`H_0` if :math:`\text{p-value} < \alpha`;
475+ otherwise, fail to reject. OpenMC leaves the interpretation to the user, who
476+ should consider the VOV together with the skewness, kurtosis, and normality
477+ test results when judging whether the reported confidence intervals are
478+ credible for their application [#norm-tests]_.
479+
480+ .. [#norm-tests]
481+    Higher-moment accumulation must be enabled with ``higher_moments = True``
482+    to run these diagnostics, including the skewness, kurtosis, and normality
483+    tests.
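For an external cross-check of these diagnostics, SciPy exposes the same D'Agostino-Pearson machinery (``scipy.stats.skewtest``, ``scipy.stats.kurtosistest``, and the omnibus ``scipy.stats.normaltest``). Note that SciPy builds its Z scores from the biased moment estimates rather than from :math:`G_1` and :math:`G_2`, so the values can differ slightly from OpenMC's:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for per-realization tally values in one bin.
rng = np.random.default_rng(42)
x = rng.normal(loc=1.0, scale=0.1, size=500)

z1, p_skew = stats.skewtest(x)      # standardized skewness score Z1
z2, p_kurt = stats.kurtosistest(x)  # standardized kurtosis score Z2
k2, p_omni = stats.normaltest(x)    # omnibus K^2 = Z1^2 + Z2^2, chi^2 with 2 dof

print(f"Z1 = {z1:.3f}, Z2 = {z2:.3f}, K2 = {k2:.3f}, p = {p_omni:.3f}")

alpha = 0.05
if p_omni < alpha:
    print("Reject H0: fluctuations look non-normal")
else:
    print("Fail to reject H0: no evidence against normality")
```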
484+
390485 Figure of Merit
391486+++++++++++++++
392487
@@ -405,14 +500,16 @@ defined as
405500.. math::
406501   :label: relative_error
407502
408-     r = \frac{s_\bar{X}}{\bar{x}}.
503+     r = \frac{s_{\bar{X}}}{\bar{x}}.
409504
410505 Based on this definition, one can see that a higher FOM is desirable. The FOM is
411506useful as a comparative tool. For example, if a variance reduction technique is
412507being applied to a simulation, the FOM with variance reduction can be compared
413508to the FOM without variance reduction to ascertain whether the reduction in
414509variance outweighs the potential increase in execution time (e.g., due to
415- particle splitting).
510+ particle splitting). Note that MCNP reports the FOM using CPU time (wall-clock
511+ time multiplied by the number of threads/cores), whereas OpenMC reports the
512+ FOM using only the wall-clock time :math:`t`.
416513
417514Confidence Intervals
418515++++++++++++++++++++
@@ -521,6 +618,8 @@ improve the estimate of the percentile.
521618
522619 .. rubric :: References
523620
621+ .. _confidence-interval heuristics: https://doi.org/10.1080/00031305.1990.10475751
622+
524623.. _following approximation: https://doi.org/10.1080/03610918708812641
525624
526625.. _Bessel's correction: https://en.wikipedia.org/wiki/Bessel's_correction
@@ -541,6 +640,10 @@ improve the estimate of the percentile.
541640
542641.. _converges in distribution: https://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_distribution
543642
643+ .. _skewness: https://en.wikipedia.org/wiki/Skewness
644+
645+ .. _kurtosis: https://en.wikipedia.org/wiki/Kurtosis
646+
544647.. _confidence intervals: https://en.wikipedia.org/wiki/Confidence_interval
545648
546649.. _Student's t-distribution: https://en.wikipedia.org/wiki/Student%27s_t-distribution