@@ -305,6 +305,43 @@ def score(self, X: xr.DataArray, y: xr.DataArray) -> pd.Series:
     def calculate_impact(
         self, y_true: xr.DataArray, y_pred: az.InferenceData
     ) -> xr.DataArray:
308+ """
309+ Calculate the causal impact as the difference between observed and predicted values.
310+
311+ The impact is calculated using the posterior expectation (`mu`) rather than the
312+ posterior predictive (`y_hat`). This means the causal impact represents the
313+ difference from the expected value of the model, excluding observation noise.
314+ This approach provides a cleaner measure of the causal effect by focusing on
315+ the systematic difference rather than including sampling variability from the
316+ observation noise term.
317+
318+ Parameters
319+ ----------
320+ y_true : xr.DataArray
321+ The observed outcome values with dimensions ["obs_ind", "treated_units"].
322+ y_pred : az.InferenceData
323+ The posterior predictive samples containing the "mu" variable, which
324+ represents the expected value (mean) of the outcome.
325+
326+ Returns
327+ -------
328+ xr.DataArray
329+ The causal impact with dimensions ending in "obs_ind". The impact includes
330+ posterior uncertainty from the model parameters but excludes observation noise.
331+
332+ Notes
333+ -----
334+ By using `mu` (the posterior expectation) rather than `y_hat` (the posterior
335+ predictive with observation noise), the uncertainty in the impact reflects:
336+ - Parameter uncertainty in the fitted model
337+ - Uncertainty in the counterfactual prediction
338+
339+ But excludes:
340+ - Observation-level noise (sigma)
341+
342+ This makes the impact plots focus on the systematic causal effect rather than
343+ individual observation variability.
344+ """
         impact = y_true - y_pred["posterior_predictive"]["mu"]
         return impact.transpose(..., "obs_ind")
310347
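The effect of computing the impact from `mu` rather than `y_hat` can be illustrated with a small standalone sketch (not part of the library): with synthetic posterior draws, the draw-to-draw spread of the `mu`-based impact reflects only parameter uncertainty, while a `y_hat`-based impact also absorbs the observation noise. The array names, shapes, and noise scales below are illustrative assumptions.

```python
# Minimal sketch: why the impact uses "mu" instead of "y_hat".
import arviz as az
import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
n_obs, n_draws = 50, 500

# Synthetic "observed" outcome for a single treated unit (assumed shapes).
y_true = xr.DataArray(
    rng.normal(loc=2.0, scale=1.0, size=(n_obs, 1)),
    dims=["obs_ind", "treated_units"],
    coords={"obs_ind": np.arange(n_obs), "treated_units": ["unit_0"]},
)

# Fake posterior draws: "mu" is the expected counterfactual (narrow spread),
# "y_hat" adds observation-level noise (sigma) on top of "mu".
mu = rng.normal(loc=1.0, scale=0.1, size=(1, n_draws, n_obs, 1))
y_hat = mu + rng.normal(scale=1.0, size=mu.shape)
y_pred = az.from_dict(
    posterior_predictive={"mu": mu, "y_hat": y_hat},
    dims={"mu": ["obs_ind", "treated_units"], "y_hat": ["obs_ind", "treated_units"]},
    coords={"obs_ind": np.arange(n_obs), "treated_units": ["unit_0"]},
)

# Impact from mu: spread across draws reflects parameter uncertainty only.
impact_mu = (y_true - y_pred["posterior_predictive"]["mu"]).transpose(..., "obs_ind")
# Impact from y_hat: additionally inherits observation noise.
impact_y_hat = (y_true - y_pred["posterior_predictive"]["y_hat"]).transpose(..., "obs_ind")

print(float(impact_mu.std(dim=("chain", "draw")).mean()))     # ~0.1: parameter uncertainty only
print(float(impact_y_hat.std(dim=("chain", "draw")).mean()))  # ~1.0: also includes observation noise
```

The per-observation credible band of the `mu`-based impact is much narrower, which is exactly what the docstring describes: impact plots show the systematic causal effect rather than individual observation variability.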