
Commit f5b0203

[CHORE] Fix remaining links (#1127)
1 parent 715f9d3 commit f5b0203


49 files changed, +205 -208 lines changed

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

@@ -112,7 +112,7 @@ npm i -g mint
 For additional instructions, you can read about it → [this link](https://mintlify.com/docs/installation).
 
 ```sh
-uv pip install -e '.[dev, docs]'
+uv sync --group dev
 make all_docs
 ```
 

README.md

Lines changed: 3 additions & 3 deletions

@@ -75,13 +75,13 @@ Current Python alternatives for statistical models are slow, inaccurate and don'
 ## Highlights
 
 * Inclusion of `exogenous variables` and `prediction intervals` for ARIMA.
-* 20x [faster](./experiments/arima/) than `pmdarima`.
+* 20x [faster](https://github.com/Nixtla/statsforecast/tree/main/experiments/arima) than `pmdarima`.
 * 1.5x faster than `R`.
 * 500x faster than `Prophet`.
-* 4x [faster](./experiments/ets/) than `statsmodels`.
+* 4x [faster](https://github.com/Nixtla/statsforecast/tree/main/experiments/ets) than `statsmodels`.
 * 1,000,000 series in [30 min](https://github.com/Nixtla/statsforecast/tree/main/experiments/ray) with [ray](https://github.com/ray-project/ray).
 * Replace FB-Prophet in two lines of code and gain speed and accuracy. Check the experiments [here](https://github.com/Nixtla/statsforecast/tree/main/experiments/arima_prophet_adapter).
-* Fit 10 benchmark models on **1,000,000** series in [under **5 min**](./experiments/benchmarks_at_scale/).
+* Fit 10 benchmark models on **1,000,000** series in [under **5 min**](https://github.com/Nixtla/statsforecast/tree/main/experiments/benchmarks_at_scale/).
 
 Missing something? Please open an issue or write us in [![Slack](https://img.shields.io/badge/Slack-4A154B?&logo=slack&logoColor=white)](https://join.slack.com/t/nixtlaworkspace/shared_invite/zt-135dssye9-fWTzMpv2WBthq8NK0Yvu6A)

docs/src/core/distributed.fugue.html.md

Lines changed: 8 additions & 8 deletions

@@ -144,19 +144,19 @@ cv_results = sf.cross_validation(
 
 ## How It Works
 
-1. **Automatic Detection**: When you pass a Spark, Dask, or Ray DataFrame to StatsForecast methods, the FugueBackend is automatically used.
+1. __Automatic Detection__: When you pass a Spark, Dask, or Ray DataFrame to StatsForecast methods, the FugueBackend is automatically used.
 
-2. **Data Partitioning**: Data is partitioned by `unique_id`, allowing parallel processing across different time series.
+2. __Data Partitioning__: Data is partitioned by `unique_id`, allowing parallel processing across different time series.
 
-3. **Distributed Execution**: Each partition is processed independently using the standard StatsForecast logic.
+3. __Distributed Execution__: Each partition is processed independently using the standard StatsForecast logic.
 
-4. **Result Aggregation**: Results are collected and returned in the same format as the input (Spark/Dask/Ray DataFrame).
+4. __Result Aggregation__: Results are collected and returned in the same format as the input (Spark/Dask/Ray DataFrame).
 
 ## Supported Backends
 
-- **Apache Spark**: For large-scale distributed processing
-- **Dask**: For flexible distributed computing with Python
-- **Ray**: For modern distributed machine learning workloads
+- __Apache Spark__: For large-scale distributed processing
+- __Dask__: For flexible distributed computing with Python
+- __Ray__: For modern distributed machine learning workloads
 
 ## Notes
 

@@ -167,6 +167,6 @@ cv_results = sf.cross_validation(
 
 ## See Also
 
-- [Core StatsForecast Methods](core.html)
+- [Core StatsForecast Methods](./core.html)
 - [Distributed Computing Examples](https://github.com/Nixtla/statsforecast/tree/main/experiments/ray)
 - [Fugue Documentation](https://fugue-tutorials.readthedocs.io/)
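The four "How It Works" steps above can be sketched with a toy, stdlib-only illustration of the partition-by-`unique_id` idea. This is not the FugueBackend API — the names and the naive mean "model" are stand-ins for the real distributed StatsForecast logic:

```python
from collections import defaultdict

# Toy long-format data: (unique_id, ds, y) rows, the shape StatsForecast expects.
rows = [
    ("series_a", 1, 10.0), ("series_a", 2, 12.0), ("series_a", 3, 14.0),
    ("series_b", 1, 100.0), ("series_b", 2, 90.0), ("series_b", 3, 110.0),
]

# Step 2 (Data Partitioning): group rows by unique_id so each series
# can be processed independently.
partitions = defaultdict(list)
for uid, ds, y in rows:
    partitions[uid].append(y)

def naive_forecast(values, h=2):
    # Stand-in for a real model: forecast the historical mean, h steps ahead.
    mean = sum(values) / len(values)
    return [mean] * h

# Step 3 (Distributed Execution): in a real backend each partition runs on a
# worker; here it is just a loop. Step 4 (Result Aggregation): collect the
# per-series forecasts back into one structure.
forecasts = {uid: naive_forecast(ys) for uid, ys in partitions.items()}
print(forecasts)
```

With a Spark, Dask, or Ray DataFrame the grouping and the per-partition work happen on the cluster, but the shape of the computation is the same.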

nbs/docs/contribute/issue-labels.md

Lines changed: 3 additions & 4 deletions

@@ -33,7 +33,7 @@ You can browse all `help wanted` issues [here](https://github.com/nixtla/nixtla/
 
 The `bug` label flags issues that outline something that's currently not functioning correctly.
 
-You can report a bug by following the instructions [here](/contribute/issues#report-a-bug).
+You can report a bug by following the instructions [here](./issues.html#report-a-bug).
 
 ### The `discussion` Label
 

@@ -43,13 +43,13 @@ If an issue is labeled as `discussion`, it signifies that more conversation is n
 
 The `documentation` label identifies issues pertaining to our documentation.
 
-You can contribute to improving our documentation by creating issues following the guidelines [here](/contribute/issues#improve-our-docs).
+You can contribute to improving our documentation by creating issues following the guidelines [here](./issues.html#improve-our-docs).
 
 ### The `enhancement` Label
 
 As Nixtla continues to evolve, there are always areas that can be enhanced. All issues suggesting improvements to Nixtla are tagged with the `enhancement` label.
 
-You can propose a feature by following the instructions [here](/contribute/issues#request-a-feature).
+You can propose a feature by following the instructions [here](./issues.html#request-a-feature).
 
 ### The `discussion` Label
 

@@ -58,4 +58,3 @@ If an issue is labeled as `discussion`, it needs more information before it can
 ### The `requested` Label
 
 Our users are welcomed to propose improvements, report bugs, request feature, etc. Any issue originating from them is flagged as `requested`.
-

nbs/docs/experiments/AmazonStatsForecast.ipynb

Lines changed: 3 additions & 3 deletions

@@ -46,7 +46,7 @@
 "source": [
 "## Amazon Forecast\n",
 "\n",
-"Amazon Forecast is a fully automated solution for time series forecasting. The solution can take the time series to forecast and exogenous variables (temporal and static). For this experiment, we used the AutoPredict functionality of Amazon Forecast following the steps of [this tutorial](https://docs.aws.amazon.com/forecast/latest/dg/gs-console.html). A detailed description of the particular steps for this dataset can be found [here](./AmazonStatsForecast).\n",
+"Amazon Forecast is a fully automated solution for time series forecasting. The solution can take the time series to forecast and exogenous variables (temporal and static). For this experiment, we used the AutoPredict functionality of Amazon Forecast following the steps of [this tutorial](https://docs.aws.amazon.com/forecast/latest/dg/gs-console.html). A detailed description of the particular steps for this dataset can be found [here](./amazonstatsforecast.html).\n",
 "\n",
 "Amazon Forecast creates predictors with AutoPredictor, which involves applying the optimal combination of algorithms to each time series in your datasets. The predictor is an Amazon Forecast model that is trained using your target time series, related time series, item metadata, and any additional datasets you include. \n",
 "\n",

@@ -70,7 +70,7 @@
 "source": [
 "### Install necessary libraries\n",
 "\n",
-"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](../getting-started/0_Installation).\n",
+"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](../getting-started/installation.html).\n",
 "\n",
 "Additionally, we will install `s3fs` to read from the S3 Filesystem of AWS. (If you don't want to use a cloud storage provider, you can read your files locally using pandas)"
 ]

@@ -225,7 +225,7 @@
 "\n",
 "We fit the model by instantiating a new `StatsForecast` object with the following parameters:\n",
 "\n",
-"* `models`: a list of models. Select the models you want from [models](../../models) and import them. For this example, we will use `AutoETS` and `DynamicOptimizedTheta`. We set `season_length` to 7 because we expect seasonal effects every week. (See: [Seasonal periods](https://robjhyndman.com/hyndsight/seasonal-periods/))\n",
+"* `models`: a list of models. Select the models you want from [models](../../src/core/models.html) and import them. For this example, we will use `AutoETS` and `DynamicOptimizedTheta`. We set `season_length` to 7 because we expect seasonal effects every week. (See: [Seasonal periods](https://robjhyndman.com/hyndsight/seasonal-periods/))\n",
 "\n",
 "* `freq`: a string indicating the frequency of the data. (See [panda's available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases).)\n",
 "\n",

nbs/docs/experiments/ETS_ray_m5.ipynb

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "id": "664acaef-8fd6-4874-a3ef-ddf32dbbe67d",
 "metadata": {},
 "source": [
-"In this notebook we show how to use `StatsForecast` and `ray` to forecast thounsands of time series in less than 6 minutes (M5 dataset). Also, we show that `StatsForecast` has better performance in time and accuracy compared to [`Prophet` running on a Spark cluster](./Prophet_spark_m5) using DataBricks.\n",
+"In this notebook we show how to use `StatsForecast` and `ray` to forecast thounsands of time series in less than 6 minutes (M5 dataset). Also, we show that `StatsForecast` has better performance in time and accuracy compared to [`Prophet` running on a Spark cluster](./prophet_spark_m5.html) using DataBricks.\n",
 "\n",
 "In this example, we used a ray cluster (AWS) of 11 instances of type m5.2xlarge (8 cores, 32 GB RAM)."
 ]

nbs/docs/getting-started/getting_Started_complete.ipynb

Lines changed: 8 additions & 8 deletions

@@ -40,7 +40,7 @@
 "source": [
 ":::{.callout-warning collapse=\"true\"}\n",
 "## Prerequisites\n",
-"This Guide assumes basic familiarity with StatsForecast. For a minimal example visit the [Quick Start](./1_Getting_Started_short).\n",
+"This Guide assumes basic familiarity with StatsForecast. For a minimal example visit the [Quick Start](./getting_started_short.html).\n",
 ":::\n",
 "\n",
 "Follow this article for a step-by-step guide on building a production-ready forecasting pipeline for multiple time series. \n",

@@ -65,15 +65,15 @@
 "## Not Covered in this guide\n",
 "\n",
 "* Forecasting at scale using clusters on the cloud. \n",
-" * [Forecast the M5 Dataset in 5min](../experiments/ETS_ray_m5) using Ray clusters.\n",
-" * [Forecast the M5 Dataset in 5min](../experiments/Prophet_spark_m5) using Spark clusters.\n",
+" * [Forecast the M5 Dataset in 5min](../experiments/ets_ray_m5.html) using Ray clusters.\n",
+" * [Forecast the M5 Dataset in 5min](../experiments/prophet_spark_m5.html) using Spark clusters.\n",
 " * Learn how to predict [1M series in less than 30min](https://www.anyscale.com/blog/how-nixtla-uses-ray-to-accurately-predict-more-than-a-million-time-series).\n",
 "\n",
 "* Training models on Multiple Seasonalities. \n",
-" * Learn to use multiple seasonality in this [Electricity Load forecasting](../tutorials/ElectricityLoadForecasting) tutorial.\n",
+" * Learn to use multiple seasonality in this [Electricity Load forecasting](../tutorials/electricityloadforecasting.html) tutorial.\n",
 "\n",
 "* Using external regressors or exogenous variables\n",
-" * Follow this tutorial to [include exogenous variables](../how-to-guides/Exogenous) like weather or holidays or static variables like category or family. \n",
+" * Follow this tutorial to [include exogenous variables](../how-to-guides/exogenous.html) like weather or holidays or static variables like category or family. \n",
 "\n",
 "* Comparing StatsForecast with other popular libraries.\n",
 " * You can reproduce our benchmarks [here](https://github.com/Nixtla/statsforecast/tree/main/experiments).\n",

@@ -86,7 +86,7 @@
 "source": [
 "## Install libraries\n",
 "\n",
-"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](./0_Installation)."
+"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](./installation.html)."
 ]
 },
 {

@@ -306,7 +306,7 @@
 "\n",
 "* **Theta Models:** fit two theta lines to a deseasonalized time series, using different techniques to obtain and combine the two theta lines to produce the final forecasts. Examples: Theta, DynamicTheta\n",
 "\n",
-"Here you can check the complete list of [models](../../src/core/models_intro) .\n",
+"Here you can check the complete list of [models](../../src/core/models_intro.html) .\n",
 "\n",
 "For this example we will use:\n",
 "\n",

@@ -365,7 +365,7 @@
 "source": [
 "We fit the models by instantiating a new `StatsForecast` object with the following parameters:\n",
 "\n",
-"* `models`: a list of models. Select the models you want from [models](../../src/core/models_intro) and import them.\n",
+"* `models`: a list of models. Select the models you want from [models](../../src/core/models_intro.html) and import them.\n",
 "\n",
 "* `freq`: a string indicating the frequency of the data. (See [pandas available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases).)\n",
 "\n",

nbs/docs/getting-started/getting_Started_complete_polars.ipynb

Lines changed: 8 additions & 8 deletions

@@ -34,7 +34,7 @@
 "source": [
 ":::{.callout-warning collapse=\"true\"}\n",
 "## Prerequisites\n",
-"This Guide assumes basic familiarity with StatsForecast. For a minimal example visit the [Quick Start](./Getting_Started_short)\n",
+"This Guide assumes basic familiarity with StatsForecast. For a minimal example visit the [Quick Start](./getting_started_short.html)\n",
 ":::\n",
 "\n",
 "Follow this article for a step-by-step guide on building a production-ready forecasting pipeline for multiple time series. \n",

@@ -59,15 +59,15 @@
 "## Not Covered in this guide\n",
 "\n",
 "* Forecasting at scale using clusters on the cloud. \n",
-" * [Forecast the M5 Dataset in 5min](../experiments/ETS_ray_m5) using Ray clusters.\n",
-" * [Forecast the M5 Dataset in 5min](../experiments/Prophet_spark_m5) using Spark clusters.\n",
+" * [Forecast the M5 Dataset in 5min](../experiments/ets_ray_m5.html) using Ray clusters.\n",
+" * [Forecast the M5 Dataset in 5min](../experiments/prophet_spark_m5.html) using Spark clusters.\n",
 " * Learn how to predict [1M series in less than 30min](https://www.anyscale.com/blog/how-nixtla-uses-ray-to-accurately-predict-more-than-a-million-time-series).\n",
 "\n",
 "* Training models on Multiple Seasonalities. \n",
-" * Learn to use multiple seasonality in this [Electricity Load forecasting](../tutorials/ElectricityLoadForecasting) tutorial.\n",
+" * Learn to use multiple seasonality in this [Electricity Load forecasting](../tutorials/electricityloadforecasting.html) tutorial.\n",
 "\n",
 "* Using external regressors or exogenous variables\n",
-" * Follow this tutorial to [include exogenous variables](../how-to-guides/Exogenous) like weather or holidays or static variables like category or family. \n",
+" * Follow this tutorial to [include exogenous variables](../how-to-guides/exogenous.html) like weather or holidays or static variables like category or family. \n",
 "\n",
 "* Comparing StatsForecast with other popular libraries.\n",
 " * You can reproduce our benchmarks [here](https://github.com/Nixtla/statsforecast/tree/main/experiments).\n",

@@ -80,7 +80,7 @@
 "source": [
 "## Install libraries\n",
 "\n",
-"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](./0_Installation)."
+"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](./installation.html)."
 ]
 },
 {

@@ -278,7 +278,7 @@
 "\n",
 "* **Theta Models:** fit two theta lines to a deseasonalized time series, using different techniques to obtain and combine the two theta lines to produce the final forecasts. Examples: Theta, DynamicTheta\n",
 "\n",
-"Here you can check the complete list of [models](../models_intro.qmd).\n",
+"Here you can check the complete list of [models](../../src/core/models_intro.html).\n",
 "\n",
 "For this example we will use:\n",
 "\n",

@@ -337,7 +337,7 @@
 "source": [
 "We fit the models by instantiating a new `StatsForecast` object with the following parameters:\n",
 "\n",
-"* `models`: a list of models. Select the models you want from [models](../../src/core/models_intro) and import them.\n",
+"* `models`: a list of models. Select the models you want from [models](../../src/core/models_intro.html) and import them.\n",
 "\n",
 "* `freq`: a string indicating the frequency of the data. (See [panda's available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases).) This is also available with Polars.\n",
 "\n",

nbs/docs/getting-started/getting_Started_short.ipynb

Lines changed: 7 additions & 7 deletions

@@ -18,7 +18,7 @@
 "`StatsForecast` follows the sklearn model API. For this minimal example, you will create an instance of the StatsForecast class and then call its `fit` and `predict` methods. We recommend this option if speed is not paramount and you want to explore the fitted values and parameters. \n",
 "\n",
 ":::{.callout-tip}\n",
-"If you want to forecast many series, we recommend using the `forecast` method. Check this [Getting Started with multiple time series](../../2_Getting_Started_complete) guide. \n",
+"If you want to forecast many series, we recommend using the `forecast` method. Check this [Getting Started with multiple time series](./getting_started_complete.html) guide. \n",
 ":::\n",
 "\n",
 "The input to StatsForecast is always a data frame in [long format](https://www.theanalysisfactor.com/wide-and-long-data/) with three columns: `unique_id`, `ds` and `y`:\n",

@@ -32,7 +32,7 @@
 "\n",
 "As an example, let\u2019s look at the US Air Passengers dataset. This time series consists of monthly totals of a US airline passengers from 1949 to 1960. The CSV is available [here](https://www.kaggle.com/datasets/chirag19/air-passengers).\n",
 "\n",
-"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](../../0_Installation).\n",
+"We assume you have StatsForecast already installed. Check this guide for instructions on [how to install StatsForecast](./installation.html).\n",
 "\n",
 "First, we\u2019ll import the data:"
 ]

@@ -146,8 +146,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We fit the model by instantiating a new `StatsForecast` object with its [two required parameters](../../models):\n",
-"* `models`: a list of models. Select the models you want from [models](../../models) and import them. For this example, we will use a `AutoARIMA` model. We set `season_length` to 12 because we expect seasonal effects every 12 months. (See: [Seasonal periods](https://robjhyndman.com/hyndsight/seasonal-periods/))\n",
+"We fit the model by instantiating a new `StatsForecast` object with its [two required parameters](../../src/core/models.html):\n",
+"* `models`: a list of models. Select the models you want from [models](../../src/core/models.html) and import them. For this example, we will use a `AutoARIMA` model. We set `season_length` to 12 because we expect seasonal effects every 12 months. (See: [Seasonal periods](https://robjhyndman.com/hyndsight/seasonal-periods/))\n",
 "\n",
 "* `freq`: a string indicating the frequency of the data. (See [pandas available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases).)\n",
 "\n",

@@ -337,9 +337,9 @@
 ":::{.callout-tip}\n",
 "## Next Steps\n",
 "\n",
-"* Build and end-to-end forecasting pipeline following best practices in [End to End Walkthrough](./2_Getting_Started_complete)\n",
-"* [Forecast millions of series](../experiments/Prophet_spark_m5) in a scalable cluster in the cloud using Spark and Nixtla\n",
-"* [Detect anomalies](../tutorials/AnomalyDetection) in your past observations\n",
+"* Build and end-to-end forecasting pipeline following best practices in [End to End Walkthrough](./getting_started_complete.html)\n",
+"* [Forecast millions of series](../experiments/prophet_spark_m5.html) in a scalable cluster in the cloud using Spark and Nixtla\n",
+"* [Detect anomalies](../tutorials/anomalydetection.html) in your past observations\n",
 ":::"
 ]
 },
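The sklearn-style fit/predict flow this Quick Start describes (instantiate, `fit` on history, `predict` a horizon `h`, with a `season_length` of 12 for monthly data) can be sketched with a minimal stdlib-only stand-in. The `SeasonalNaive` class below is illustrative only — it is not the StatsForecast or AutoARIMA API, just a toy model with the same fit/predict shape:

```python
class SeasonalNaive:
    """Toy stand-in for a StatsForecast model: repeats the last full season."""

    def __init__(self, season_length):
        self.season_length = season_length
        self.last_season_ = None

    def fit(self, y):
        # Keep the final season_length observations as the fitted state.
        self.last_season_ = y[-self.season_length:]
        return self

    def predict(self, h):
        # Cycle through the stored season to produce h forecasts.
        return [self.last_season_[i % self.season_length] for i in range(h)]

# Two "years" of monthly-style data, season_length=12 as in the
# AirPassengers example above.
y = list(range(1, 25))
model = SeasonalNaive(season_length=12).fit(y)
print(model.predict(h=3))  # first three values of the repeated last season
```

The real library works on a long-format frame (`unique_id`, `ds`, `y`) rather than a bare list, but the fit-then-predict call pattern is the same.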
