Conversation
Add explicit NaN check before positivity validation in _parse_cost_per_unit_df to match the validation pattern in BudgetOptimizer._validate_and_process_cost_per_unit. This prevents NaN values from silently passing through and causing misleading error messages about coordinate mismatches. Applied via @cursor push command
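The pitfall this commit targets is that `NaN` compares false against everything, so a plain positivity check never flags it. A minimal sketch of the validation order described (function name and messages are illustrative, not the library's actual code):

```python
import pandas as pd

def validate_cost_per_unit(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical sketch: check for NaN explicitly *before* the
    positivity check. NaN > 0 is False, but so is NaN <= 0, so a
    positivity-only check lets NaN slip through silently."""
    if df.isna().any().any():
        bad = df.columns[df.isna().any()].tolist()
        raise ValueError(f"cost_per_unit contains NaN values in columns: {bad}")
    if (df <= 0).any().any():
        bad = df.columns[(df <= 0).any()].tolist()
        raise ValueError(f"cost_per_unit must be strictly positive; offending columns: {bad}")
    return df
```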
Add _deserialize_cost_per_unit helper that strips timezone info after pd.read_json (which may parse ISO dates as tz-aware UTC). Warn on save if the input DataFrame has tz-aware dates since timezone will be lost. Made-with: Cursor
- Fix cost_per_unit tensor coordinate order by adding reindex() call before extracting values to ensure coordinates match model order - Add idata.attrs['cost_per_unit'] update in fit() method to match set_cost_per_unit() behavior for consistency. Applied via @cursor push command
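The reindex fix matters because `.to_numpy()` (or `.values`) preserves whatever column order the user's DataFrame happens to have. A minimal illustration of the pattern, with invented channel names and data:

```python
import pandas as pd

# The model's channel coordinate order (assumed for illustration).
model_channels = ["tv", "radio", "social"]

# User-supplied cost_per_unit frame with columns in a different order.
cpu = pd.DataFrame(
    {"radio": [0.2, 0.2], "social": [0.01, 0.02], "tv": [1.5, 1.4]},
    index=pd.to_datetime(["2024-01-01", "2024-01-08"]),
)

# Reindex to the model's coordinate order *before* extracting raw values;
# without this, positions in the array follow the user's column order and
# silently pair the wrong costs with the wrong channels.
values = cpu.reindex(columns=model_channels).to_numpy()
```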
…s/pymc-marketing into isofer/2309-cost-per-unit-part1
PR Summary (Low Risk)

Overview: Updates the docs gallery to include the new Cost-per-Unit example and fixes a couple of gallery card images (Multi-Objective Optimization and Geo-Level Lift Test Calibration).

Written by Cursor Bugbot for commit 4f75e34.
Missing space in `0.10or0.30`?
Regarding the comment "Impressions are what actually drives revenue": there are so many strong opinions on this topic that I would like to leave it open for the modeler to decide based on the context. For example, spend data is an apples-to-apples comparison (EUR), but impressions in Google (as an absolute number) probably do not mean the same as impressions in Twitch. I have also seen really bad impressions data XD.

Hence, I will add a comment that the modeler is responsible for thinking about which variable to use :)
I think the intro was not clear enough. I didn't mean to say that you can aggregate multiple social channels together (like Google and Twitch). The dataset describes the case where you have only two media channels: one social channel measured in impressions and one TV channel. I'll clarify this.

Also, if one channel is in EUR and one in USD, then you should probably still model each one in its original currency and use the cost-per-unit conversion, to reduce the noise introduced by currency conversion. It's a good example, and I should add a paragraph about it.
Thanks @isofer! This looks very nice! I just left one comment :)
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Shallow merge drops `target_accept` from sampler config. Added `target_accept: 0.90` to the notebook's `sampler_config` override to prevent it from being dropped during the shallow merge with the YAML config.
Or push these changes by commenting:
@cursor push 91b9eda4b6
Preview (91b9eda4b6)
diff --git a/docs/source/notebooks/mmm/mmm_cost_per_unit.ipynb b/docs/source/notebooks/mmm/mmm_cost_per_unit.ipynb
--- a/docs/source/notebooks/mmm/mmm_cost_per_unit.ipynb
+++ b/docs/source/notebooks/mmm/mmm_cost_per_unit.ipynb
@@ -292,6 +292,7 @@
" \"tune\": 1000,\n",
" \"draws\": 1000,\n",
" \"random_seed\": seed,\n",
+ " \"target_accept\": 0.90,\n",
" }\n",
" },\n",
 ")\n",
Shallow merge drops target_accept from sampler config
Medium Severity
The notebook's model_kwargs passes a sampler_config dict that completely replaces (not deep-merges with) the YAML's sampler_config, because build_mmm_from_yaml does a shallow merge on line 144 of yaml.py. This silently drops target_accept: 0.90 from the YAML config. The sampler output confirms divergences occurred with a warning to "Increase target_accept or reparameterize" — the very setting that was lost. Either the notebook's sampler_config override needs to include target_accept: 0.90, or the YAML config values need to be aligned with the notebook's intent.
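The shallow-merge behavior described above is easy to reproduce with plain dicts. A minimal illustration (the dict names and values are invented for the example; the real configs live in the notebook and the YAML file):

```python
# YAML config carries target_accept; the notebook overrides sampler_config.
yaml_cfg = {"sampler_config": {"tune": 1000, "target_accept": 0.90}}
override = {"sampler_config": {"tune": 1000, "draws": 1000, "random_seed": 42}}

# Shallow merge: the override's inner dict replaces the YAML's wholesale,
# silently dropping target_accept.
shallow = {**yaml_cfg, **override}

# Deep merge of the nested sampler_config keeps keys from both sides.
deep_sampler = {**yaml_cfg["sampler_config"], **override["sampler_config"]}
```

Either fix from the report works: carry `target_accept: 0.90` into the override explicitly, or deep-merge the nested dict as above.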
juanitorduz left a comment:
Thanks @isofer! Can you please add

%load_ext watermark
%watermark -n -u -v -iv -w

at the end to add the env info?



Demo notebook for cost per unit
📚 Documentation preview 📚: https://pymc-marketing--2363.org.readthedocs.build/en/2363/