
Commit 198c814

tidying

Signed-off-by: Nathaniel <[email protected]>
1 parent aeda1be

File tree: 2 files changed (+12, −8 lines)


examples/case_studies/CFA_SEM.ipynb

Lines changed: 6 additions & 4 deletions
@@ -4863,7 +4863,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "But we can also do more contemporary Bayesian posterior predictive checks as we pull out the predictive posterior distribution for each of the observed metrics. "
+    "However, the focus on recovering a fit to such summary statistics is less compelling and more indirect than recovering the observed data itself. We can also run more contemporary Bayesian posterior predictive checks by pulling out the posterior predictive distribution for each of the observed metrics. "
    ]
   },
   {
@@ -5409,7 +5409,9 @@
    "source": [
     "## Bayesian Structural Equation Models\n",
     "\n",
-    "We've now seen how measurement models help us understand the relationships between disparate indicator variables in a kind of crude way. We have postulated a system of latent factors and derived the correlations between these factors to help us understand the strength of relationships between the broader constructs of interest. This is kind a special case of a structural equation models. In the SEM tradition we're interested in figuring out aspects of the structural relations between variables that means want to posit dependence and independence relationship to interrogate our beliefs about influence flows through the system. For our data set we can postulate the following chain of dependencies\n",
+    "We've now seen how measurement models help us understand the relationships between disparate indicator variables in a somewhat crude way. We have postulated a system of latent factors and derived the correlations between these factors to help us understand the strength of relationships between the broader constructs of interest. This is a special case of a structural equation model. In the SEM tradition we're interested in figuring out aspects of the structural relations between variables, meaning we want to posit dependence and independence relationships to interrogate our beliefs about how influence flows through the system. \n",
+    "\n",
+    "For our data set we can postulate the following chain of dependencies\n",
     "\n",
     "![Candidate Structural Model](structural_model_sem.png)\n",
     "\n",
@@ -6569,7 +6571,7 @@
    "source": [
     "### Model Evaluation Checks\n",
     "\n",
-    "A quick evaluation of model performance suggests we do somewhat less well in recovering the sample covariance structures than we did with simpler measurement model"
+    "A quick evaluation of model performance suggests we do somewhat less well in recovering the sample covariance structures than we did with the simpler measurement model."
    ]
   },
   {
@@ -7800,7 +7802,7 @@
     "\n",
     "> [C]onditional independence is not a grace of nature for which we must wait passively, but rather a psychological necessity which we satisfy by organising our knowledge in a specific way. An important tool in such an organisation is the identification of intermediate variables that induce conditional independence among observables; if such variables are not in our vocabulary, we create them. In medical diagnosis, for instance, when some symptoms directly influence one another, the medical profession invents a name for that interaction (e.g. “syndrome”, “complication”, “pathological state”) and treats it as a new auxiliary variable that induces conditional independence.” - Pearl quoted in {cite:t}`levy2020bayesian` p61\n",
     "\n",
-    "It's this deliberate and careful focus on the structure of conditionalisation that unites the seemingly disparate disciplines of psychometrics and causal inference. Both disciplines cultivate careful thinking about the structure of the data generating process and further proffer conditionalisation strategies to bettern target some estimand of interest. Both are well phrased in the expressive lexicon of a probabilistic programming language like `PyMC`. We encourage you to explore the rich possibilities for yourself! \n"
+    "It's this deliberate and careful focus on the structure of conditionalisation that unites the seemingly disparate disciplines of psychometrics and causal inference. Both disciplines cultivate careful thinking about the structure of the data generating process and further proffer conditionalisation strategies to better target some estimand of interest. Both are well phrased in the expressive lexicon of a probabilistic programming language like `PyMC`. We encourage you to explore the rich possibilities for yourself! \n"
    ]
   },
   {

examples/case_studies/CFA_SEM.myst.md

Lines changed: 6 additions & 4 deletions
@@ -543,7 +543,7 @@ ax = sns.heatmap(residuals_posterior_cov, annot=True, cmap="bwr", mask=mask)
 ax.set_title("Residuals between Model Implied and Sample Covariances", fontsize=25);
 ```
 
-But we can also do more contemporary Bayesian posterior predictive checks as we pull out the predictive posterior distribution for each of the observed metrics.
+However, the focus on recovering a fit to such summary statistics is less compelling and more indirect than recovering the observed data itself. We can also run more contemporary Bayesian posterior predictive checks by pulling out the posterior predictive distribution for each of the observed metrics.
 
 ```{code-cell} ipython3
 make_ppc(idata_mm, 100, drivers=residuals_posterior_cov.columns, dims=(5, 3));
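The logic of such a posterior predictive check can be sketched without the notebook's own `make_ppc` helper or fitted `idata_mm` object. The snippet below is a minimal NumPy sketch under invented assumptions: the observed metric, the posterior draws of its mean and standard deviation, and the sample sizes are all made up for illustration, not taken from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed survey metric (one indicator column)
observed = rng.normal(3.5, 1.0, size=200)

# Stand-in posterior draws of the indicator's mean and sd
mu_draws = rng.normal(observed.mean(), 0.05, size=500)
sd_draws = np.abs(rng.normal(observed.std(), 0.05, size=500))

# Posterior predictive replications: one simulated dataset per posterior draw
ppc = rng.normal(mu_draws[:, None], sd_draws[:, None], size=(500, 200))

# Compare the observed statistic against its predictive distribution;
# a tail-probability near 0.5 suggests the statistic is well recovered
ppc_means = ppc.mean(axis=1)
p_value = (ppc_means >= observed.mean()).mean()
print(p_value)
```

In the notebook itself the replications come from `pm.sample_posterior_predictive` on the fitted measurement model rather than from hand-drawn normals.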
@@ -671,7 +671,9 @@ It's worth highlighting here the cohort on the top left of the `SUP_P` graph whi
 
 ## Bayesian Structural Equation Models
 
-We've now seen how measurement models help us understand the relationships between disparate indicator variables in a kind of crude way. We have postulated a system of latent factors and derived the correlations between these factors to help us understand the strength of relationships between the broader constructs of interest. This is kind a special case of a structural equation models. In the SEM tradition we're interested in figuring out aspects of the structural relations between variables that means want to posit dependence and independence relationship to interrogate our beliefs about influence flows through the system. For our data set we can postulate the following chain of dependencies
+We've now seen how measurement models help us understand the relationships between disparate indicator variables in a somewhat crude way. We have postulated a system of latent factors and derived the correlations between these factors to help us understand the strength of relationships between the broader constructs of interest. This is a special case of a structural equation model. In the SEM tradition we're interested in figuring out aspects of the structural relations between variables, meaning we want to posit dependence and independence relationships to interrogate our beliefs about how influence flows through the system.
+
+For our data set we can postulate the following chain of dependencies
 
 ![Candidate Structural Model](structural_model_sem.png)
 
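The structural claim behind such a chain of dependencies is a conditional-independence statement, which can be illustrated with simulated data. The sketch below uses entirely hypothetical factor names and coefficients (they are not the case study's factors or estimates): in a chain support → self-efficacy → life satisfaction, the first and last variables should be uncorrelated once we condition on the middle one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical chain: support -> self_efficacy -> life_sat
support = rng.normal(size=n)
beta_se, beta_ls = 0.6, 0.8  # invented structural coefficients
self_efficacy = beta_se * support + rng.normal(scale=0.5, size=n)
life_sat = beta_ls * self_efficacy + rng.normal(scale=0.5, size=n)

# Partial correlation of (support, life_sat) controlling for self_efficacy:
# regress each on the mediator and correlate the residuals
resid_ls = life_sat - np.polyval(np.polyfit(self_efficacy, life_sat, 1), self_efficacy)
resid_sup = support - np.polyval(np.polyfit(self_efficacy, support, 1), self_efficacy)
partial_corr = np.corrcoef(resid_sup, resid_ls)[0, 1]
print(partial_corr)
```

A partial correlation near zero is what the posited chain predicts; a clearly nonzero value would argue for adding a direct path.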
@@ -879,7 +881,7 @@ az.plot_forest(
 
 ### Model Evaluation Checks
 
-A quick evaluation of model performance suggests we do somewhat less well in recovering the sample covariance structures than we did with simpler measurement model
+A quick evaluation of model performance suggests we do somewhat less well in recovering the sample covariance structures than we did with the simpler measurement model.
 
 ```{code-cell} ipython3
 residuals_posterior_cov = get_posterior_resids(idata_sem0, 2500)
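The residual comparison computed by the notebook's `get_posterior_resids` helper (whose implementation is not shown here) can be sketched in NumPy for a single-factor model with invented loadings and residual variances. The model-implied covariance of a factor model is Λ Λᵀ + Ψ, and the evaluation check subtracts it from the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 3 indicators loading on one latent factor
lambdas = np.array([0.9, 0.8, 0.7])   # invented factor loadings
psi = np.diag([0.3, 0.4, 0.5])        # invented residual variances

# Model-implied covariance for a single-factor model: lambda lambda' + Psi
implied_cov = np.outer(lambdas, lambdas) + psi

# Simulate data from that model and compute the sample covariance
eta = rng.normal(size=(1000, 1))
y = eta @ lambdas[None, :] + rng.normal(size=(1000, 3)) * np.sqrt(np.diag(psi))
sample_cov = np.cov(y, rowvar=False)

# Residuals: small values mean the model reproduces the covariance structure
residuals = sample_cov - implied_cov
print(np.abs(residuals).max())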
@@ -1014,7 +1016,7 @@ So if we specify the conditional distribution __correctly__, we recover the cond
 
 > [C]onditional independence is not a grace of nature for which we must wait passively, but rather a psychological necessity which we satisfy by organising our knowledge in a specific way. An important tool in such an organisation is the identification of intermediate variables that induce conditional independence among observables; if such variables are not in our vocabulary, we create them. In medical diagnosis, for instance, when some symptoms directly influence one another, the medical profession invents a name for that interaction (e.g. “syndrome”, “complication”, “pathological state”) and treats it as a new auxiliary variable that induces conditional independence.” - Pearl quoted in {cite:t}`levy2020bayesian` p61
 
-It's this deliberate and careful focus on the structure of conditionalisation that unites the seemingly disparate disciplines of psychometrics and causal inference. Both disciplines cultivate careful thinking about the structure of the data generating process and further proffer conditionalisation strategies to bettern target some estimand of interest. Both are well phrased in the expressive lexicon of a probabilistic programming language like `PyMC`. We encourage you to explore the rich possibilities for yourself!
+It's this deliberate and careful focus on the structure of conditionalisation that unites the seemingly disparate disciplines of psychometrics and causal inference. Both disciplines cultivate careful thinking about the structure of the data generating process and further proffer conditionalisation strategies to better target some estimand of interest. Both are well phrased in the expressive lexicon of a probabilistic programming language like `PyMC`. We encourage you to explore the rich possibilities for yourself!
 
 +++
 