
Commit dd4bf34: Merge remote-tracking branch 'upstream/main'
2 parents: e7bbd7c + 718ebbd

152 files changed (+41115, -12697 lines)


.github/workflows/pre-commit.yml (1 addition, 1 deletion)

@@ -15,5 +15,5 @@ jobs:
       - uses: actions/setup-python@v2
       - uses: actions/setup-node@v2
         with:
-          node-version: '15'
+          node-version: '18'
       - uses: pre-commit/[email protected]
New workflow file (16 additions, 0 deletions)

@@ -0,0 +1,16 @@
+name: Read the Docs Pull Request Preview
+on:
+  pull_request_target:
+    types:
+      - opened
+
+permissions:
+  pull-requests: write
+
+jobs:
+  documentation-links:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: readthedocs/actions/preview@v1
+        with:
+          project-slug: "pymc-examples"

.jupytext.toml (1 addition, 1 deletion)

@@ -1,2 +1,2 @@
-notebook_metadata_filter = "substitutions,-jupytext.text_representation.jupytext_version"
+notebook_metadata_filter = "myst,-jupytext.text_representation.jupytext_version"
 formats = ["ipynb", ".myst.md:myst"]
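For context, bare entries in `notebook_metadata_filter` add notebook metadata keys to the set jupytext saves, while `-`-prefixed entries drop a (sub-)key. The change keeps the whole top-level `myst` metadata block (previously only `substitutions`) while still omitting the volatile `jupytext_version` field. A sketch of the front matter a paired `.myst.md` file might then carry; the field values here are illustrative assumptions, not content from this commit:

```yaml
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: '0.13'  # jupytext_version itself is filtered out
kernelspec:
  display_name: Python 3
  language: python
  name: python3
myst:                       # kept because of the new "myst" filter entry
  substitutions:
    extra_dependencies: pymc-bart
---
```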

.pre-commit-config.yaml (5 additions, 6 deletions)

@@ -1,26 +1,25 @@
 repos:
   - repo: https://github.com/psf/black
-    rev: 22.3.0
+    rev: 23.7.0
     hooks:
       - id: black-jupyter
   - repo: https://github.com/nbQA-dev/nbQA
-    rev: 1.1.0
+    rev: 1.7.0
     hooks:
       - id: nbqa-isort
         additional_dependencies: [isort==5.6.4]
       - id: nbqa-pyupgrade
         additional_dependencies: [pyupgrade==2.7.4]
         args: [--py37-plus]
   - repo: https://github.com/MarcoGorelli/madforhooks
-    rev: 0.3.0
+    rev: 0.4.1
     hooks:
       - id: check-execution-order
         args: [--strict]
         exclude: |
           (?x)^
           ^examples/ode_models/ODE_with_manual_gradients\.ipynb$
           |examples/samplers/DEMetropolisZ_EfficiencyComparison\.ipynb$
-          |examples/gaussian_processes/GP-Latent\.ipynb$
           |examples/gaussian_processes/GP-MaunaLoa2\.ipynb$
           |examples/samplers/MLDA_gravity_surveying\.ipynb$
           |examples/howto/sampling_callback\.ipynb$

@@ -33,7 +32,7 @@ repos:
           |examples/samplers/MLDA_variance_reduction_linear_regression\.ipynb$

   - repo: https://github.com/FlamingTempura/bibtex-tidy
-    rev: v1.8.5
+    rev: v1.11.0
     hooks:
       - id: bibtex-tidy
         files: examples/references.bib

@@ -97,7 +96,7 @@ repos:
         language: pygrep
         types_or: [markdown, rst, jupyter]
   - repo: https://github.com/mwouts/jupytext
-    rev: v1.13.7
+    rev: v1.15.1
     hooks:
       - id: jupytext
         files: ^examples/.+\.ipynb$
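These `rev` bumps are routine hook upgrades of the kind the standard pre-commit CLI can generate and verify mechanically; a minimal sketch, assuming a local checkout of the repository:

```bash
pre-commit autoupdate        # rewrites each hook's `rev` to its latest tagged release
pre-commit run --all-files   # re-runs every configured hook across the repository
```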

examples/case_studies/bart_heteroscedasticity.myst.md renamed to examples/bart/bart_heteroscedasticity.myst.md (0 additions, 6 deletions)

@@ -24,8 +24,6 @@ kernelspec:
 In this notebook we show how to use BART to model heteroscedasticity as described in Section 4.1 of [`pymc-bart`](https://github.com/pymc-devs/pymc-bart)'s paper {cite:p}`quiroga2022bart`. We use the `marketing` data set provided by the R package `datarium` {cite:p}`kassambara2019datarium`. The idea is to model a marketing channel contribution to sales as a function of budget.

 ```{code-cell} ipython3
-:tags: []
-
 import os

 import arviz as az
@@ -37,8 +35,6 @@ import pymc_bart as pmb
 ```

 ```{code-cell} ipython3
-:tags: []
-
 %config InlineBackend.figure_format = "retina"
 az.style.use("arviz-darkgrid")
 plt.rcParams["figure.figsize"] = [10, 6]
@@ -157,8 +153,6 @@ The fit looks good! In fact, we see that the mean and variance increase as a fun
 ## Watermark

 ```{code-cell} ipython3
-:tags: []
-
 %load_ext watermark
 %watermark -n -u -v -iv -w -p pytensor
 ```

examples/case_studies/BART_introduction.ipynb renamed to examples/bart/bart_introduction.ipynb (1 addition, 1 deletion)

@@ -1037,7 +1037,7 @@
    "id": "2b680d91",
    "metadata": {},
    "source": [
-    "This plot helps us understand the season behind the bad performance on the test set: Recall that in the variable importance ranking from the initial model we saw that `hour` was the most important predictor. On the other hand, our training data just sees `hour` values until $19$ (since is our train-test threshold). As BART learns how to partition the (training) data, it can not differentiate between `hour` values between $20$ and $22$ for example. It just cares that both values are greater that $19$. This is very important to understand when using BART! This explains why one should not use BART for time series forecasting if there is a trend component. In this case it is better to detrend the data first, model the remainder with BART and model the trend with a different model."
+    "This plot helps us understand the reason behind the bad performance on the test set: Recall that in the variable importance ranking from the initial model we saw that `hour` was the most important predictor. On the other hand, our training data just sees `hour` values until $19$ (since is our train-test threshold). As BART learns how to partition the (training) data, it can not differentiate between `hour` values between $20$ and $22$ for example. It just cares that both values are greater that $19$. This is very important to understand when using BART! This explains why one should not use BART for time series forecasting if there is a trend component. In this case it is better to detrend the data first, model the remainder with BART and model the trend with a different model."
    ]
   },
  {

examples/case_studies/BART_introduction.myst.md renamed to examples/bart/bart_introduction.myst.md (1 addition, 1 deletion)

@@ -389,7 +389,7 @@ ax.set(
 );
 ```

-This plot helps us understand the season behind the bad performance on the test set: Recall that in the variable importance ranking from the initial model we saw that `hour` was the most important predictor. On the other hand, our training data just sees `hour` values until $19$ (since is our train-test threshold). As BART learns how to partition the (training) data, it can not differentiate between `hour` values between $20$ and $22$ for example. It just cares that both values are greater that $19$. This is very important to understand when using BART! This explains why one should not use BART for time series forecasting if there is a trend component. In this case it is better to detrend the data first, model the remainder with BART and model the trend with a different model.
+This plot helps us understand the reason behind the bad performance on the test set: Recall that in the variable importance ranking from the initial model we saw that `hour` was the most important predictor. On the other hand, our training data just sees `hour` values until $19$ (since is our train-test threshold). As BART learns how to partition the (training) data, it can not differentiate between `hour` values between $20$ and $22$ for example. It just cares that both values are greater that $19$. This is very important to understand when using BART! This explains why one should not use BART for time series forecasting if there is a trend component. In this case it is better to detrend the data first, model the remainder with BART and model the trend with a different model.

 +++
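The advice in the corrected paragraph (detrend first, then let BART model the remainder) can be sketched in code. A minimal, illustrative Python sketch using `pymc` and `pymc_bart`; the synthetic data and all variable names are assumptions for the sketch, not part of this commit:

```python
import numpy as np
import pymc as pm
import pymc_bart as pmb

# Synthetic hourly series with a linear trend plus a daily cycle (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(720)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * (t % 24) / 24) + rng.normal(0, 0.5, t.size)

# Step 1: fit and subtract the trend with a simple linear model.
slope, intercept = np.polyfit(t, y, deg=1)
remainder = y - (slope * t + intercept)

# Step 2: model the detrended remainder with BART on an hour-of-day feature,
# so the tree partitions never need to extrapolate along the trend.
X = (t % 24).reshape(-1, 1).astype(float)
with pm.Model():
    mu = pmb.BART("mu", X=X, Y=remainder, m=50)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=remainder)
    idata = pm.sample(random_seed=0)

# Step 3 (sketched): a forecast recombines the extrapolated linear trend with
# BART's prediction of the remainder at the new hour-of-day values.
```

Within the training range BART can partition freely on hour-of-day, while the linear component carries any extrapolation beyond it, which is exactly the failure mode the paragraph describes.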

File renamed without changes.
