
Commit bd6c321

checkpointing - addressing reviewer comments
1 parent 311f112 commit bd6c321

File tree

2 files changed (+318, -2385 lines)


jupyter/sum-to-zero/sum_to_zero_evaluation.html

Lines changed: 315 additions & 2383 deletions
Large diffs are not rendered by default.

jupyter/sum-to-zero/sum_to_zero_evaluation.qmd

Lines changed: 3 additions & 2 deletions
```diff
@@ -21,6 +21,7 @@ format:
     toc-location: right
     embed-resources: true
 execute:
+  eval: false
 engine: jupyter
 include-in-header:
   - text: |
```
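A note for readers of this hunk: in Quarto, `eval: false` under the `execute:` key disables evaluation of the document's code cells at render time, so rendering the `.qmd` no longer re-runs the Jupyter engine; the code is still displayed but never executed.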
```diff
@@ -109,7 +110,7 @@ As a general rule, for small vectors, the hard sum-to-zero constraint is more ef
 for larger vectors, the soft sum-to-zero constraint is faster,
 but much depends on the specifics of the model and the data.
 
-
+The hard sum-to-zero is often problematic.
 For small $N$ and models with sensible priors, the hard sum-to-zero is usually satisfactory.
 But as the size of the vector grows, it distorts the marginal variance of the $N^{th}$.
 Given a parameter vector:
```
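To make the hard-versus-soft comparison concrete, here is a minimal Stan sketch of the standard hard sum-to-zero idiom the text refers to (illustrative only, not code from this commit; `N`, `beta_free`, and `beta` are hypothetical names):

```stan
data {
  int<lower=1> N;
}
parameters {
  // Hard sum-to-zero: only N - 1 free elements.
  vector[N - 1] beta_free;
}
transformed parameters {
  // The N-th element is fully determined by the first N - 1.
  vector[N] beta = append_row(beta_free, -sum(beta_free));
}
model {
  // A common prior choice; the derived N-th element ends up with a
  // different marginal variance than the free elements, which is the
  // distortion discussed above.
  beta ~ normal(0, 1);
}
```

The soft variant instead keeps all `N` elements as parameters, declaring `vector[N] beta;` and adding a statement such as `sum(beta) ~ normal(0, 0.001 * N);` to the model block, so the sum is penalized toward zero rather than pinned exactly.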
```diff
@@ -237,7 +238,7 @@ we have written a data-generating program to create datasets given the
 baseline disease prevalence, test specificity and sensitivity,
 and the desired number of diagnostic tests.
 In the `generated quantities` block we use Stan's PRNG functions to populate
-the true weights for the categorical coefficient vectors, and the relative percentages
+vthe true weights for the categorical coefficient vectors, and the relative percentages
 of per-category observations.
 Then we use a set of nested loops to generate the data for each demographic,
 using the PRNG equivalent of the model likelihood.
```
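As a rough sketch of the data-generating pattern this hunk describes (hypothetical throughout: `K`, `category`, `alpha`, `pct`, and `y_sim` are illustrative names, not identifiers from the actual program):

```stan
data {
  int<lower=1> N;   // number of observations to simulate
  int<lower=1> K;   // number of categories
  real alpha;       // baseline log-odds, e.g. of disease prevalence
}
generated quantities {
  // PRNG draws for the true weights of a categorical coefficient vector.
  vector[K] beta_true = to_vector(normal_rng(rep_vector(0, K), 1));
  // PRNG draw for the relative percentages of per-category observations.
  simplex[K] pct = dirichlet_rng(rep_vector(1, K));
  array[N] int<lower=1, upper=K> category;
  array[N] int<lower=0, upper=1> y_sim;
  // Loop over observations: assign a category, then simulate the outcome
  // with the PRNG equivalent of the model likelihood.
  for (n in 1:N) {
    category[n] = categorical_rng(pct);
    y_sim[n] = bernoulli_logit_rng(alpha + beta_true[category[n]]);
  }
}
```

Run with Stan's fixed-parameter sampler, a program shaped like this yields one simulated dataset per draw.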

0 commit comments
