-    y = post.β0 + post.β1 * xi + post.β2 * xi * m + post.β3 * m
+    y = post.β[0] + post.β[1] * xi + post.β[2] * xi * m + post.β[3] * m
     region = y.quantile([0.025, 0.5, 0.975], dim="sample")
     ax.fill_between(
         xi,
@@ -119,7 +119,7 @@ def plot_moderation_effect(result, m, m_quantiles, ax=None):
 
     # calculate 95% CI region and median
     xi = xr.DataArray(np.linspace(np.min(m), np.max(m), 20), dims=["x_plot"])
-    rate = post.β1 + post.β2 * xi
+    rate = post.β[1] + post.β[2] * xi
     region = rate.quantile([0.025, 0.5, 0.975], dim="sample")
 
     ax.fill_between(
@@ -139,7 +139,7 @@ def plot_moderation_effect(result, m, m_quantiles, ax=None):
     for p, m in zip(percentile_list, m_levels):
         ax.plot(
             m,
-            np.mean(post.β1) + np.mean(post.β2) * m,
+            np.mean(post.β[1]) + np.mean(post.β[2]) * m,
             "o",
             c=scalarMap.to_rgba(m),
             markersize=10,
@@ -202,7 +202,7 @@ $$
 +++
 
 ### Conceptual or path diagram
-We can also draw moderation in a mode conceptual manner. This is perhaps visually simpler and easier to parse, but is less explicit. The moderation is shown by an arrow from the moderating variable to the line between a predictor and an outcome variable.
+We can also draw moderation in a more conceptual manner. This is perhaps visually simpler and easier to parse, but is less explicit. The moderation is shown by an arrow from the moderating variable to the line between a predictor and an outcome variable.
 
 But the diagram would represent the exact same equation as shown above.
@@ -344,13 +344,10 @@ def model_factory(x, m, y):
         x = pm.Data("x", x)
         m = pm.Data("m", m)
         # priors
-        β0 = pm.Normal("β0", mu=0, sigma=10)
-        β1 = pm.Normal("β1", mu=0, sigma=10)
-        β2 = pm.Normal("β2", mu=0, sigma=10)
-        β3 = pm.Normal("β3", mu=0, sigma=10)
+        β = pm.Normal("β", mu=0, sigma=10, size=4)
         σ = pm.HalfCauchy("σ", 1)
         # likelihood
-        y = pm.Normal("y", mu=β0 + (β1 * x) + (β2 * x * m) + (β3 * m), sigma=σ, observed=y)
+        y = pm.Normal("y", mu=β[0] + (β[1] * x) + (β[2] * x * m) + (β[3] * m), sigma=σ, observed=y)
 
     return model
 ```
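As a side note on the vectorised prior above: the following is a hedged sketch, not part of this diff, of how the same four-element `β` could be declared with named coordinates instead of `size=4`, so that posterior summaries and plots are labelled by coefficient name rather than by the default `β_dim_0` integer index. The `coords` labels here are invented purely for illustration.

```python
# Hypothetical alternative (not in this diff): give the coefficient vector
# named coordinates so posterior objects carry readable labels instead of
# β_dim_0 indices. The "coef" labels below are illustrative only.
import pymc as pm

coords = {"coef": ["intercept", "x", "x:m", "m"]}
with pm.Model(coords=coords) as model:
    β = pm.Normal("β", mu=0, sigma=10, dims="coef")
    σ = pm.HalfCauchy("σ", 1)
```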
@@ -373,7 +370,7 @@ with model:
 Visualise the trace to check for convergence.
 
 ```{code-cell} ipython3
-az.plot_trace(result);
+az.plot_trace(result, compact=False);
 ```
 
 We have good chain mixing and the posteriors for each chain look very similar, so no problems in that regard.
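As a numerical complement to that visual convergence check, here is a hedged sketch, not part of this diff, of a tabular summary; it assumes `result` is the InferenceData returned by sampling the model above.

```python
# Hypothetical sketch (not in this diff): back up the visual convergence check
# with summary statistics; r_hat close to 1 and large ess_bulk values support
# the "good chain mixing" claim. Assumes `result` holds the sampled trace.
import arviz as az

summary = az.summary(result, var_names=["β", "σ"], round_to=3)
print(summary[["mean", "hdi_3%", "hdi_97%", "ess_bulk", "r_hat"]])
```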
@@ -397,7 +394,7 @@ az.plot_pair(
 And just for the sake of completeness, we can plot the posterior distributions for each of the $\beta$ parameters and use this to arrive at research conclusions.
 For example, from an estimation (in contrast to a hypothesis testing) perspective, we could look at the posterior over $\beta_2$ and claim a credibly less than zero moderation effect.
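That estimation-style claim about $\beta_2$ can be quantified directly from the posterior samples. Below is a hedged sketch, not part of this diff, that assumes the vectorised `β` introduced here and the `az.extract` usage already present in the notebook.

```python
# Hypothetical sketch (not in this diff): quantify the "credibly less than
# zero" claim for the moderation coefficient β[2]. Assumes `result` is the
# sampled InferenceData and β is the length-4 coefficient vector.
import arviz as az

post = az.extract(result)            # stack chain/draw into a single "sample" dim
beta2 = post["β"].isel(β_dim_0=2)    # β[2] multiplies the x·m interaction
prob_below_zero = float((beta2 < 0).mean())
print(f"P(β2 < 0) ≈ {prob_below_zero:.3f}")
```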
@@ -424,10 +421,15 @@ ax.set_title("Data and posterior prediction");
 ### Spotlight graph
 We can also visualise the moderation effect by plotting $\beta_1 + \beta_2 \cdot m$ as a function of the $m$. This was named a spotlight graph, see {cite:t}`spiller2013spotlights` and {cite:t}`mcclelland2017multicollinearity`.
 
+```{code-cell} ipython3
+# result.posterior["β"].isel(β_dim_0=2)
+```
+
 ```{code-cell} ipython3
 fig, ax = plt.subplots(1, 2, figsize=(10, 5))
 plot_moderation_effect(result, m, m_quantiles, ax[0])
 ax[1].set(title="Posterior distribution of $\\beta_2$");
 ```
 
 The expression $\beta_1 + \beta_2 \cdot \text{moderator}$ defines the rate of change of the outcome (muscle percentage) per unit of $x$ (training hours/week). We can see that as age (the moderator) increases, this effect of training hours/week on muscle percentage decreases.
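For readers who want the numbers behind the spotlight graph, here is a hedged sketch, not part of this diff, of the conditional-slope computation it visualises; it mirrors the quantile logic inside `plot_moderation_effect` and assumes `post = az.extract(result)` and the moderator data `m` are defined as in the notebook.

```python
# Hypothetical sketch (not in this diff): posterior median and 95% credible
# interval of the conditional slope β[1] + β[2]·m over a grid of moderator
# values. Assumes `post = az.extract(result)` and `m` (moderator data) exist.
import numpy as np
import xarray as xr

m_grid = xr.DataArray(np.linspace(np.min(m), np.max(m), 20), dims=["m_plot"])
slope = post["β"].isel(β_dim_0=1) + post["β"].isel(β_dim_0=2) * m_grid
slope_ci = slope.quantile([0.025, 0.5, 0.975], dim="sample")
print(slope_ci.sel(quantile=0.5).values)  # median slope at each moderator value
```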
@@ -473,7 +475,7 @@ But readers are strongly encouraged to read {cite:t}`mcclelland2017multicollinea
 - Updated by Benjamin T. Vincent in March 2022
 - Updated by Benjamin T. Vincent in February 2023 to run on PyMC v5
 - Updated to use `az.extract` by [Benjamin T. Vincent](https://github.com/drbenvincent) in February 2023 ([pymc-examples#522](https://github.com/pymc-devs/pymc-examples/pull/522))
-- Updated by [Benjamin T. Vincent](https://github.com/drbenvincent) in May 2024 to incorporate causal concepts
+- Updated by [Benjamin T. Vincent](https://github.com/drbenvincent) in June 2024 to incorporate causal concepts