```{r}
# G-computation for the treatment effect
library("marginaleffects")
avg_comparisons(fit, variables = "treat",
                newdata = subset(treat == 1))
```
Our confidence interval for `treat` contains 0, so there isn't evidence that `treat` has an effect on `re78`. Several types of standard errors are available in `WeightIt`, including analytical standard errors that account for estimation of the weights using M-estimation, robust standard errors that treat the weights as fixed, and bootstrapping. All types are described in detail at `vignette("estimating-effects")`.
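As a sketch, the standard-error type can be selected when fitting the outcome model via the `vcov` argument of `lm_weightit()`; the option names below are assumptions drawn from the package documentation (see `?lm_weightit`), and `W.out` is a hypothetical `weightit()` output:

```{r}
# Illustrative only: "asympt" requests M-estimation-based SEs that
# account for estimation of the weights; "HC0" would treat the weights
# as fixed, and "BS"/"FWB" request bootstrap-based SEs
fit <- lm_weightit(re78 ~ treat * (age + educ + race + married),
                   data = lalonde, weightit = W.out,
                   vcov = "asympt")
```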
`weightitMSM()` estimates separate weights for each time period and then takes the product of the weights for each individual to arrive at the final estimated weights. Printing the output of `weightitMSM()` provides some details about the function call and the output. We can take a look at the quality of the weights with `summary()`, just as we could for point treatments.
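The structure of such a call can be sketched as follows; the treatment and covariate names and the `msmdata` dataset are illustrative assumptions, with one propensity score model supplied per time period:

```{r}
# Illustrative only: one formula per time period; weightitMSM()
# multiplies the per-period weights for each unit internally
Wmsm.out <- weightitMSM(list(A_1 ~ X_0,
                             A_2 ~ A_1 + X_0,
                             A_3 ~ A_2 + A_1 + X_0),
                        data = msmdata, method = "glm")
```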
```{r}
summary(Wmsm.out)
```
Then, we compute the average expected potential outcomes under each treatment regime:
```{r, eval = me_ok}
library("marginaleffects")
(p <- avg_predictions(fit,
                      variables = c("A_1", "A_2", "A_3")))
```
We can compare the expected potential outcomes under each regime using `marginaleffects::hypotheses()`. To get all pairwise comparisons, supply the `avg_predictions()` output to `hypotheses(., ~ pairwise)`. To compare individual regimes, we can use `hypotheses()`, identifying the rows of the `avg_predictions()` output. For example, to compare the regimes with no treatment for all three time points vs. the regime with treatment for all three time points, we would run
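A sketch of that comparison is below; the row indices are assumptions that depend on the ordering of the `avg_predictions()` output, with the never-treated regime assumed to be row 1 and the always-treated regime row 8:

```{r}
# "b1" and "b8" refer to rows of the avg_predictions() output `p`
# (assumed ordering; check p before indexing)
hypotheses(p, "b8 - b1 = 0")
```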
Though the subgroup effects differ from each other in the sample, this difference is not statistically significant at the .05 level, so there is no evidence of moderation by `X5`.
When the moderator has more than two levels, it is possible to run an omnibus test for moderation by changing `hypothesis` to `~reference` and supplying the output to `hypotheses()` with `joint = TRUE`, e.g.,
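A sketch of this workflow, reusing the `fit`, `treat`, and `X5` names from above (the exact call structure is an assumption; consult the `marginaleffects` documentation):

```{r}
# Omnibus test for moderation: compare subgroup effects to a reference
# level, then jointly test that all differences are zero
avg_comparisons(fit, variables = "treat", by = "X5",
                hypothesis = ~reference) |>
  hypotheses(joint = TRUE)
```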