
Commit 85b148f

re-label headings
Signed-off-by: Nathaniel <[email protected]>
1 parent c93d778 commit 85b148f

2 files changed: +22 -12 lines changed


examples/case_studies/bayesian_sem_workflow.ipynb

Lines changed: 11 additions & 6 deletions
@@ -37,6 +37,7 @@
 " - SEM Conditional Formulation\n",
 " - SEM Marginal Formulation\n",
 " - SEM Mean Structure Formulation\n",
+" - Sensitivity Analysis: Comparing Model Fits\n",
 "- Parameter Recovery Models\n",
 " - SEM Hierarchical Formulation\n",
 " - SEM + Discrete Choice\n",
@@ -407,7 +408,7 @@
 "\n",
 "We can express the SEM in either a conditional or marginal formulation. The conditional form explicitly samples the latent variables, while the marginal form integrates them out of the likelihood.\n",
 "\n",
-"### Conditional Formulation\n",
+"#### Conditional Formulation\n",
 "This formulation treats the latent variables as parameters to be sampled directly. This is conceptually straightforward but often computationally demanding for Bayesian samplers.\n",
 "\n",
 "$$\n",
@@ -429,7 +430,7 @@
 "\n",
 "which highlights that the conditional formulation samples the latent variables explicitly. \n",
 "\n",
-"### Marginal Formulation\n",
+"#### Marginal Formulation\n",
 "Here the focus is on deriving the covariance matrix. \n",
 "\n",
 "$$\\Sigma_{y} = \\Psi + \\Lambda(I - B)^{-1}\\Psi_{\\zeta}(I - B)^{-T}\\Lambda^{T} $$\n",
@@ -448,7 +449,7 @@
 "id": "78194165",
 "metadata": {},
 "source": [
-"### Setting up Utility Functions\n",
+"#### Setting up Utility Functions\n",
 "\n",
 "For this exercise we will lean on a range of utility functions to build and compare the expansionary sequence. These functions include repeated steps that will be required for any SEM model. These functions modularize the model-building process and make it easier to compare successive model expansions.\n",
 "\n",
@@ -4415,7 +4416,7 @@
 "\n",
 "We can also pull out the indirect and direct effects. This is one of the biggest pay-offs for SEM modelling. We've done the work of assessing measurement error and building an abstraction layer of __what-we-care-about__ over the observed indicators. We've considered various structures of the inferential relationships and isolated those direct effects from undue confounding influences. Now we can pull out the impact of mediation and moderation.\n",
 "\n",
-"## Comparing Models\n",
+"## Sensitivity Analysis: Comparing Model Implications\n",
 "\n",
 "Let's first compare the model implied total effects, and the degrees of moderation between constructive and dysfunctional habits of thought on the satisfaction outcome."
 ]
@@ -5960,7 +5961,7 @@
 "id": "6636967a",
 "metadata": {},
 "source": [
-"## Discrete Choice Component\n",
+"## SEM with Discrete Choice Component\n",
 "\n",
 "Combining SEM structures with Discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors? \n",
 "\n",
@@ -6601,7 +6602,11 @@
 "id": "49fe7be1",
 "metadata": {},
 "source": [
-"We can see here how our model structure now has two likelihood terms that are both based on the latent constructs `eta`. To demonstrate parameter recovery we need to sample from both outcomes simultaneously. "
+"We can see here how our model structure now has two likelihood terms that are both based on the latent constructs `eta`. \n",
+"\n",
+"#### The Parameter Recovery Process\n",
+"\n",
+"To demonstrate parameter recovery we need to sample from both outcomes simultaneously. "
 ]
 },
 {
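The parameter-recovery idea this hunk introduces — fix known parameter values, simulate from the generative model, and check they can be recovered from the simulated data — can be shown in miniature. The notebook itself fixes many parameters and refits the full model with PyMC; the sketch below is only the moment-based version of the idea, with hypothetical loadings and noise scales:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Fixed "true" parameter for the simulation
true_loading = 0.8                                     # loading of the second indicator

# Latent scores with variance 1, and two indicators of the construct
eta = rng.normal(0.0, 1.0, size=n)
y1 = eta + rng.normal(0.0, 0.5, size=n)                # reference indicator, loading fixed at 1
y2 = true_loading * eta + rng.normal(0.0, 0.5, size=n)

# With the reference loading fixed at 1 and Var(eta) = 1,
# Cov(y1, y2) = true_loading, so the sample covariance recovers it
recovered = np.cov(y1, y2)[0, 1]
print(recovered)  # close to 0.8
```

The same logic scales up: when the full posterior is used instead of a single moment, recovery is judged by whether the fixed values fall inside the posterior's bulk.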

examples/case_studies/bayesian_sem_workflow.myst.md

Lines changed: 11 additions & 6 deletions
@@ -36,6 +36,7 @@ A further goal is to strengthen the foundation for SEM modeling in PyMC. We demo
  - SEM Conditional Formulation
  - SEM Marginal Formulation
  - SEM Mean Structure Formulation
+ - Sensitivity Analysis: Comparing Model Fits
 - Parameter Recovery Models
  - SEM Hierarchical Formulation
  - SEM + Discrete Choice
@@ -245,7 +246,7 @@ In the structural model we specify how we believe the latent constructs relate t

 We can express the SEM in either a conditional or marginal formulation. The conditional form explicitly samples the latent variables, while the marginal form integrates them out of the likelihood.

-### Conditional Formulation
+#### Conditional Formulation
 This formulation treats the latent variables as parameters to be sampled directly. This is conceptually straightforward but often computationally demanding for Bayesian samplers.

 $$
@@ -267,7 +268,7 @@ $$

 which highlights that the conditional formulation samples the latent variables explicitly.

-### Marginal Formulation
+#### Marginal Formulation
 Here the focus is on deriving the covariance matrix.

 $$\Sigma_{y} = \Psi + \Lambda(I - B)^{-1}\Psi_{\zeta}(I - B)^{-T}\Lambda^{T} $$
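The two formulations describe the same observed-data distribution, which can be checked numerically: simulate along the conditional route and compare the sample covariance to the marginal formula. A minimal NumPy sketch, with hypothetical dimensions and parameter values and using the conventional transpose-of-inverse form $(I-B)^{-T}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SEM: 2 latent factors, 4 indicators, one structural path
Lambda = np.array([[1.0, 0.0],
                   [0.8, 0.0],
                   [0.0, 1.0],
                   [0.0, 0.9]])            # factor loadings
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])                 # structural path: factor 0 -> factor 1
Psi_zeta = np.diag([1.0, 0.5])             # structural disturbance covariance
Psi = np.diag([0.3, 0.3, 0.3, 0.3])        # measurement error covariance

# Conditional route: sample the latent variables explicitly, then the indicators
n = 200_000
IB_inv = np.linalg.inv(np.eye(2) - B)
zeta = rng.multivariate_normal(np.zeros(2), Psi_zeta, size=n)
eta = zeta @ IB_inv.T                      # eta = B eta + zeta  =>  eta = (I - B)^{-1} zeta
y = eta @ Lambda.T + rng.multivariate_normal(np.zeros(4), Psi, size=n)

# Marginal route: integrate the latents out analytically
Sigma_y = Psi + Lambda @ IB_inv @ Psi_zeta @ IB_inv.T @ Lambda.T

# The simulated covariance approaches the model-implied covariance
print(np.abs(np.cov(y.T) - Sigma_y).max())  # small, shrinking with n
```

The marginal sampler only ever sees `Sigma_y`, which is why it avoids the per-observation latent parameters that make the conditional formulation expensive.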
@@ -282,7 +283,7 @@ We'll introduce each of these components as additional steps as we layer over t

 +++

-### Setting up Utility Functions
+#### Setting up Utility Functions

 For this exercise we will lean on a range of utility functions to build and compare the expansionary sequence. These functions include repeated steps that will be required for any SEM model. These functions modularize the model-building process and make it easier to compare successive model expansions.

@@ -960,7 +961,7 @@ The sampler diagnostics also seem healthy.

 We can also pull out the indirect and direct effects. This is one of the biggest pay-offs for SEM modelling. We've done the work of assessing measurement error and building an abstraction layer of __what-we-care-about__ over the observed indicators. We've considered various structures of the inferential relationships and isolated those direct effects from undue confounding influences. Now we can pull out the impact of mediation and moderation.

-## Comparing Models
+## Sensitivity Analysis: Comparing Model Implications

 Let's first compare the model implied total effects, and the degrees of moderation between constructive and dysfunctional habits of thought on the satisfaction outcome.

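Model-implied total effects come straight out of matrix algebra on the structural matrix. A sketch under the standard convention that for $\eta = B\eta + \zeta$ the total effects among the latents are $(I-B)^{-1} - I = B + B^{2} + \dots$; the path values below are hypothetical, not the notebook's fitted estimates:

```python
import numpy as np

# Hypothetical structural matrix: factor 0 -> factor 1 -> factor 2,
# plus a direct path factor 0 -> factor 2
B = np.array([[0.0, 0.0, 0.0],
              [0.6, 0.0, 0.0],
              [0.2, 0.5, 0.0]])

I = np.eye(3)
total = np.linalg.inv(I - B) - I     # direct + all mediated paths
indirect = total - B                 # mediated (indirect) effects only

# Effect of factor 0 on factor 2: direct 0.2, indirect 0.6 * 0.5 = 0.3
print(B[2, 0], round(indirect[2, 0], 3), round(total[2, 0], 3))
```

Applied draw-by-draw to posterior samples of `B`, the same computation yields full posterior distributions over total and indirect effects rather than point estimates.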
@@ -1283,7 +1284,7 @@ Another way we might interrogate the implications of a model is to see how well

 +++

-## Discrete Choice Component
+## SEM with Discrete Choice Component

 Combining SEM structures with Discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors?

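The generative story here — two likelihood terms sharing one latent construct — can be sketched as a toy simulation. The coefficients, dimensions, and the logistic link below are illustrative assumptions, not the notebook's fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Latent "job satisfaction" scores (in the full SEM these come from the
# structural equations rather than a simple normal draw)
eta = rng.normal(0.0, 1.0, size=n)

# Likelihood term 1: continuous survey indicators loading on the construct
loadings = np.array([1.0, 0.8, 0.6])             # hypothetical loadings
y = eta[:, None] * loadings + rng.normal(0.0, 0.5, size=(n, 3))

# Likelihood term 2: attrition as a Bernoulli outcome of the same construct,
# with higher satisfaction lowering the attrition probability
intercept, beta = -0.5, -1.5                     # hypothetical choice coefficients
p_attrit = 1.0 / (1.0 + np.exp(-(intercept + beta * eta)))
attrition = rng.binomial(1, p_attrit)

# Both outcomes condition on the same eta, which is what lets the
# attrition data sharpen inference about the latent construct
print(y.shape, attrition.mean())
```

In the PyMC model the same structure appears as two observed distributions conditioned on the shared latent `eta`.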
@@ -1399,7 +1400,11 @@ sem_model_discrete_choice_wide = make_discrete_choice_conditional(
 pm.model_to_graphviz(sem_model_discrete_choice_tight)
 ```

-We can see here how our model structure now has two likelihood terms that are both based on the latent constructs `eta`. To demonstrate parameter recovery we need to sample from both outcomes simultaneously.
+We can see here how our model structure now has two likelihood terms that are both based on the latent constructs `eta`.
+
+#### The Parameter Recovery Process
+
+To demonstrate parameter recovery we need to sample from both outcomes simultaneously.

 ```{code-cell} ipython3
 fixed_parameters = {
