examples/case_studies/CFA_SEM.myst.md
+++
In the psychometrics literature the data is often derived from a strategically constructed survey aimed at a particular target phenomenon: some intuited, but not yet measured, concept that arguably plays a role in human action, motivation or sentiment. The relative "fuzziness" of the subject matter in psychometrics has had a catalyzing effect on the methodological rigour sought in the science. Survey designs are agonized over for the correct tone and rhythm of sentence structure. Measurement scales are double-checked for reliability and correctness. The literature is consulted and questions are refined. Analysis steps are justified and tested under a wealth of modelling routines.
Model architectures are defined and refined to better express the hypothesized structures in the data-generating process. We will see how such due diligence leads to powerful and expressive models that grant us tractability on thorny questions of human affect.
Our data is borrowed from work by Boris Mayer and Andrew Ellis found [here](https://methodenlehre.github.io/SGSCLM-R-course/cfa-and-sem-with-lavaan.html#structural-equation-modelling-sem). They demonstrate CFA and SEM modelling with lavaan. We'll load up their data. We have survey responses from ~300 individuals who have answered questions regarding their upbringing, self-efficacy and reported life-satisfaction. The hypothetical dependency structure in this life-satisfaction dataset posits a moderated relationship between scores related to life-satisfaction, parental and family support, and self-efficacy. It is not a trivial task to design a survey that can elicit answers plausibly mapped to each of these "factors" or themes, never mind to find a model of their relationship that can inform us as to the relative impact of each on life-satisfaction outcomes.
First we'll pull out the data and examine some summary statistics.
```{code-cell} ipython3
df = pd.read_csv("../data/sem_data.csv")
df.head()
```
```{code-cell} ipython3
drivers = [c for c in df.columns if c not in ["region", "gender", "age", "ID"]]
...
plt.suptitle("Pair Plot of Indicator Metrics with Regression Fits", fontsize=30);
```
## Measurement Models
+++
A measurement model is a key component within the more general structural equation model. A measurement model specifies the relationships between observed variables (typically survey items or indicators) and their underlying latent constructs (often referred to as factors or latent variables). We start our presentation of SEM models more generally by focusing on a type of measurement model with its own history: the confirmatory factor model (CFA), which specifies a particular structure for the relationships between our indicator variables and the latent factors. It is this structure which is up for confirmation in our modelling.
We'll start by fitting a "simple" CFA model in `PyMC` to demonstrate how the pieces fit together; we'll then expand our focus. Here we ignore the majority of our indicator variables and focus on the idea that there are two latent constructs: (1) Social Self-efficacy and (2) Life Satisfaction.
We're aiming to articulate a mathematical structure where our indicator variables $y_{ij}$ are determined by a latent factor $\text{Ksi}_{j}$ through an estimated factor loading $\lambda_{ij}$. Functionally we have a set of equations with error terms $\psi_i$. The goal is to articulate the relationship between the different factors in terms of the covariances between these latent terms, and to estimate the relationships each latent factor has with the manifest indicator variables. At a high level, we're saying the joint distribution of the observed data can be represented through conditionalisation on the latent factors.
```{code-cell} ipython3
with pm.Model(coords=coords) as model:
    ...

pm.model_to_graphviz(model)
```
### Measurement Model Structure
We can now see how the covariance structure among the latent constructs is an integral piece of the overarching model design, which is fed forward into our pseudo-regression components and weighted with the respective lambda terms. These factor loadings are generally important to interpret in terms of construct validity. Because we've specified one of the indicator variables to be fixed at 1, the other indicators which load on that factor should have loading coefficients on broadly the same scale as the fixed indicator that defines the construct scale. We're looking for consistency among the loadings to assess whether the indicators are reliable measures of the construct.
```{code-cell} ipython3
idata
```
Let's plot the trace diagnostics to validate that the sampler has converged to the posterior distribution. One thing to highlight in particular about the Bayesian manner of fitting CFA and SEM models is that we now have access to the posterior distribution of the latent quantities. These samples can offer insight into particular individuals in our survey that is harder to glean from the multivariate presentation of the manifest variables.
```{code-cell} ipython3
fig, axs = plt.subplots(1, 2, figsize=(20, 9))
axs = axs.flatten()
...
ax2.set_title("Individual Life Satisfaction Metric \n On Latent Factor LS")
plt.show();
```
In this way we can identify and zero in on individuals who appear to be outliers on one or more of the latent constructs.
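The mechanics of that outlier hunt can be sketched with simulated posterior draws (shapes and names here are hypothetical; in practice the draws would come from `idata.posterior`): average the draws per individual and flag anyone whose posterior-mean latent score sits far from the crowd.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_obs = 1000, 283        # hypothetical: posterior draws x individuals

base = rng.normal(size=n_obs)     # "true" latent score per individual
base[17] = 4.5                    # plant one extreme individual
# simulated posterior draws of the latent factor: truth plus posterior noise
draws = base + 0.2 * rng.normal(size=(n_draws, n_obs))

post_mean = draws.mean(axis=0)    # posterior mean latent score per individual
z = (post_mean - post_mean.mean()) / post_mean.std()
outliers = np.where(np.abs(z) > 3)[0]
print(outliers)  # individual 17 is flagged
```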
+++
### Posterior Predictive Checks
As in more traditional Bayesian modelling, a core component of model evaluation is the assessment of the posterior predictive distribution, i.e. the implied outcome distribution. Here too we can pull out draws against each of the indicator variables to assess for coherence and adequacy.
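The logic of such a check can be sketched in plain NumPy, independently of the notebook's `make_ppc` helper: replicate the indicator from posterior draws and ask where the observed statistic falls among the replicated ones (all names, shapes, and numbers below are hypothetical stand-ins).

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(5.0, 1.5, size=283)  # stand-in for one indicator column

# hypothetical posterior draws of that indicator's mean and sd
mu_draws = rng.normal(5.0, 0.1, size=1000)
sd_draws = np.abs(rng.normal(1.5, 0.1, size=1000))

# one replicated dataset per posterior draw
y_rep = rng.normal(mu_draws[:, None], sd_draws[:, None], size=(1000, 283))

# posterior-predictive p-value for the mean; values near 0 or 1 signal misfit
stat_obs = observed.mean()
stat_rep = y_rep.mean(axis=1)
p_val = (stat_rep >= stat_obs).mean()
print(p_val)
```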
```{code-cell} ipython3
def make_ppc(
    idata,
    ...
):
    ...
```
### Intermediate Cross-Loading Model
The idea of a measurement model is maybe a little opaque when we only see models that fit well. Instead we want to briefly show how an ill-suited measurement model gets reflected in the estimated parameters for the factor loadings. Here we specify a measurement model which attempts to couple the `se_social` and `sup_parents` indicators and bundle them into the same factor.
```{code-cell} ipython3
coords = {
    "obs": list(range(len(df))),
    ...
}

with pm.Model(coords=coords) as model:
    ...

pm.model_to_graphviz(model)
```
Again our model samples well, but the parameter estimates suggest that there is some inconsistency between the scales of the two sets of metrics we're trying to force onto the same factor.
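What "forcing two scales onto one factor" does to the loadings can be illustrated with a small simulation (pure NumPy, hypothetical numbers, not the notebook's estimates): an indicator that really belongs to a second, only weakly correlated factor shows a loading attenuated toward the inter-factor correlation rather than sitting on the common scale.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
ksi = rng.normal(size=n)  # the factor the model actually posits
# a second factor, correlated 0.3 with ksi, that really drives y3
eta = 0.3 * ksi + np.sqrt(1 - 0.3**2) * rng.normal(size=n)

y1 = 1.0 * ksi + 0.3 * rng.normal(size=n)  # fixed indicator, loading 1
y2 = 0.9 * ksi + 0.3 * rng.normal(size=n)  # genuine ksi indicator
y3 = 1.0 * eta + 0.3 * rng.normal(size=n)  # eta indicator forced onto ksi

# loadings relative to the fixed indicator y1, via cross-covariance ratios
lam2 = np.cov(y2, y3)[0, 1] / np.cov(y1, y3)[0, 1]
lam3 = np.cov(y3, y2)[0, 1] / np.cov(y1, y2)[0, 1]
print(lam2, lam3)  # ~0.9 vs ~0.3: y3 refuses to sit on the common scale
```

The attenuated loading is exactly the kind of inconsistency that warns us the bundled indicators do not measure a single construct.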