
Commit 9c8eca3

rogeliomj (Rogelio Melo) and marcopeix authored
Improve grammar and readability in documentation (#1423)
Co-authored-by: Rogelio Melo <rogeliomj@MacBook-Pro-de-Rogelio.local> Co-authored-by: Marco <marco@nixtla.io>
1 parent 66aeb5e commit 9c8eca3

13 files changed: +70 −64 lines changed

nbs/docs/capabilities/cross_validation.ipynb

Lines changed: 9 additions & 3 deletions
Original file line number · Diff line number · Diff line change
@@ -78,7 +78,7 @@
7878
"source": [
7979
"## 2. Read the data\n",
8080
"\n",
81-
"For this tutorial, we use part of the hourly M4 dataset. It is stored in a parquet file for efficiency. You can use ordinary pandas operations to read your data in other formats likes `.csv`. \n",
81+
"For this tutorial, we use part of the hourly M4 dataset. It is stored in a parquet file for efficiency. However, you can use ordinary pandas operations to read your data in other formats like `.csv`. \n",
8282
"\n",
8383
"The input to `NeuralForecast` is always a data frame in [long format](https://www.theanalysisfactor.com/wide-and-long-data/) with three columns: `unique_id`, `ds` and `y`:\n",
8484
"\n",
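The long-format requirement described in the hunk above can be illustrated with a minimal frame. This is a sketch: the series ids and values are made up, not taken from the M4 dataset.

```python
import pandas as pd

# Minimal long-format frame: one row per (series, timestamp) pair.
# Series ids and values are illustrative only.
Y_df = pd.DataFrame({
    "unique_id": ["H1", "H1", "H2", "H2"],  # series identifier
    "ds": [1, 2, 1, 2],                     # time index (ints or timestamps)
    "y": [10.0, 12.0, 5.0, 6.0],            # target values
})

# Reading from disk instead would look like (placeholder paths):
# Y_df = pd.read_parquet("data.parquet")  # or pd.read_csv("data.csv")
```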
@@ -180,7 +180,7 @@
180180
"cell_type": "markdown",
181181
"metadata": {},
182182
"source": [
183-
"For simplicity, we use only a single series to explore in detail the cross-validation functionality. Also, let's use the first 700 time steps, such that we work with round numbers, making it easier to visualize and understand cross-validation."
183+
"For simplicity, we focus on a single time series to explore the cross-validation functionality in detail. We also use only the first 700 time steps, which allows us to work with round numbers and makes the cross-validation process easier to visualize and understand."
184184
]
185185
},
186186
{
@@ -449,7 +449,7 @@
449449
"cell_type": "markdown",
450450
"metadata": {},
451451
"source": [
452-
"In the figure above, we see that we have 4 cutoff points, which correspond to our four cross-validation windows. Of course, notice that the windows are set from the end of the dataset. That way, the model trains on past data to predict future data. \n",
452+
"In the figure above, we observe four cutoff points, each corresponding to a cross-validation window. Note that these windows are defined from the end of the dataset, ensuring that the model is trained on past data to predict future data.\n",
453453
"\n",
454454
":::{.callout-warning collapse=\"true\"}\n",
455455
"## Important note\n",
@@ -655,11 +655,17 @@
655655
"metadata": {},
656656
"source": [
657657
"In the figure above, we see that our two folds overlap between time steps 601 and 650, since the step size is 50. This happens because:\n",
658+
"\n",
658659
"- fold 1: model is trained using time steps 0 to 550 and predicts 551 to 650 (h=100)\n",
659660
"- fold 2: model is trained using time steps 0 to 600 (`step_size=50`) and predicts 601 to 700\n",
660661
"\n",
661662
"Be aware that when evaluating a model trained with overlapping cross-validation windows, some time steps have more than one prediction. This may bias your evaluation metric, as the repeated time steps are taken into account in the metric multiple times."
662663
]
664+
},
665+
{
666+
"cell_type": "markdown",
667+
"metadata": {},
668+
"source": []
663669
}
664670
],
665671
"metadata": {
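The fold arithmetic described in the last hunk (700 steps, `h=100`, `step_size=50`, overlap between steps 601 and 650) can be sketched with a small helper. `cv_windows` is a hypothetical illustration, not a NeuralForecast API.

```python
def cv_windows(n_steps, h, step_size, n_windows):
    """Compute train/test boundaries for backtesting windows anchored
    at the end of the series (illustrative helper, not library code)."""
    folds = []
    for i in range(n_windows):
        # Cutoff = index of the last training step for this fold.
        cutoff = n_steps - h - (n_windows - 1 - i) * step_size
        folds.append({"train_end": cutoff,
                      "test_start": cutoff + 1,
                      "test_end": cutoff + h})
    return folds

# Matches the folds in the text: fold 1 trains through step 550 and
# predicts 551-650; fold 2 trains through step 600 and predicts 601-700,
# so steps 601-650 receive two predictions each.
folds = cv_windows(700, 100, 50, 2)
```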

nbs/docs/capabilities/exogenous_variables.ipynb

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@
2424
"All NeuralForecast models are capable of incorporating exogenous variables to model the following conditional predictive distribution:\n",
2525
"$$\\mathbb{P}(\\mathbf{y}_{t+1:t+H} \\;|\\; \\mathbf{y}_{[:t]},\\; \\mathbf{x}^{(h)}_{[:t]},\\; \\mathbf{x}^{(f)}_{[:t+H]},\\; \\mathbf{x}^{(s)} )$$\n",
2626
"\n",
27-
"where the regressors are static exogenous $\\mathbf{x}^{(s)}$, historic exogenous $\\mathbf{x}^{(h)}_{[:t]}$, exogenous available at the time of the prediction $\\mathbf{x}^{(f)}_{[:t+H]}$ and autorregresive features $\\mathbf{y}_{[:t]}$. Depending on the [train loss](../../losses.pytorch), the model outputs can be point forecasts (location estimators) or uncertainty intervals (quantiles)."
27+
"where the regressors are static exogenous $\\mathbf{x}^{(s)}$, historic exogenous $\\mathbf{x}^{(h)}_{[:t]}$, exogenous available at the time of the prediction $\\mathbf{x}^{(f)}_{[:t+H]}$ and autoregressive features $\\mathbf{y}_{[:t]}$. Depending on the [train loss](../../losses.pytorch), the model outputs can be point forecasts (location estimators) or uncertainty intervals (quantiles)."
2828
]
2929
},
3030
{
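The three regressor types in the formula above map onto columns of the input frames. The column names below (`temperature`, `is_holiday`, `market`) are made-up illustrations, not names the library requires.

```python
import pandas as pd

# Illustrative columns for each exogenous type (names are hypothetical):
df = pd.DataFrame({
    "unique_id": ["H1"] * 4,
    "ds": pd.date_range("2024-01-01", periods=4, freq="h"),
    "y": [10.0, 12.0, 11.0, 13.0],            # autoregressive target y_[:t]
    "temperature": [20.1, 20.5, 21.0, 20.8],  # historic exogenous x^(h), past only
    "is_holiday": [0, 0, 1, 1],               # future exogenous x^(f), known over the horizon
})
static_df = pd.DataFrame({"unique_id": ["H1"],
                          "market": ["retail"]})  # static exogenous x^(s)
```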

nbs/docs/getting-started/datarequirements.ipynb

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@
66
"metadata": {},
77
"source": [
88
"# Data Requirements\n",
9-
"> Dataset input requirments"
9+
"> Dataset input requirements"
1010
]
1111
},
1212
{
@@ -352,7 +352,7 @@
352352
"cell_type": "markdown",
353353
"metadata": {},
354354
"source": [
355-
"In this example `Y_df` only contains two columns: `timestamp`, and `value`. To use `NeuralForecast` we have to include the `unique_id` column and rename the previuos ones."
355+
"In this example `Y_df` only contains two columns: `timestamp`, and `value`. To use `NeuralForecast` we have to include the `unique_id` column and rename the previous ones."
356356
]
357357
},
358358
{
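The rename step described in the hunk above can be sketched as follows; the sample values and the `series_1` identifier are made up for illustration.

```python
import pandas as pd

# Frame with only timestamp/value columns, as in the example.
Y_df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=3, freq="D"),
    "value": [1.0, 2.0, 3.0],
})

# Rename to the expected column names and add a series identifier.
Y_df = Y_df.rename(columns={"timestamp": "ds", "value": "y"})
Y_df["unique_id"] = "series_1"
Y_df = Y_df[["unique_id", "ds", "y"]]
```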

nbs/docs/getting-started/installation.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -47,7 +47,7 @@
4747
"* distributed training with spark: `pip install neuralforecast[spark]`\n",
4848
"* saving and loading from S3: `pip install neuralforecast[aws]`\n",
4949
"\n",
50-
"#### User our env (optional)\n",
50+
"#### Use our env (optional)\n",
5151
"\n",
5252
"If you don't have a Conda environment and need tools like Numba, Pandas, NumPy, Jupyter, Tune, and Nbdev you can use ours by following these steps:\n",
5353
"\n",

nbs/docs/getting-started/quickstart.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -373,7 +373,7 @@
373373
"cell_type": "markdown",
374374
"metadata": {},
375375
"source": [
376-
"Finally, we plot the forecasts of both models againts the real values."
376+
"Finally, we plot the forecasts of both models against the real values."
377377
]
378378
},
379379
{

nbs/docs/tutorials/adding_models.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -22,7 +22,7 @@
2222
"\n",
2323
"We highly recommend reading first the Getting Started and the NeuralForecast Map tutorials!\n",
2424
"\n",
25-
"Additionally, refer to the [CONTRIBUTING guide](https://github.com/Nixtla/neuralforecast/blob/main/CONTRIBUTING.md) for the basics how to contribute to NeuralForecast.\n",
25+
"Additionally, refer to the [CONTRIBUTING guide](https://github.com/Nixtla/neuralforecast/blob/main/CONTRIBUTING.md) for the basics of how to contribute to NeuralForecast.\n",
2626
"\n",
2727
":::"
2828
]

nbs/docs/tutorials/longhorizon_nhits.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -61,7 +61,7 @@
6161
"\n",
6262
"It returns three DataFrames: `Y_df` contains the values for the target variables, `X_df` contains exogenous calendar features and `S_df` contains static features for each time series (none for ETTm2). For this example we will only use `Y_df`.\n",
6363
"\n",
64-
"If you want to use your own data just replace `Y_df`. Be sure to use a long format and have a simmilar structure than our data set."
64+
"If you want to use your own data just replace `Y_df`. Be sure to use a long format and have a similar structure to our data set."
6565
]
6666
},
6767
{

nbs/docs/tutorials/longhorizon_probabilistic.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -60,7 +60,7 @@
6060
"\n",
6161
"It returns three DataFrames: `Y_df` contains the values for the target variables, `X_df` contains exogenous calendar features and `S_df` contains static features for each time series (none for ETTm2). For this example we will only use `Y_df`.\n",
6262
"\n",
63-
"If you want to use your own data just replace `Y_df`. Be sure to use a long format and have a simmilar structure than our data set."
63+
"If you want to use your own data just replace `Y_df`. Be sure to use a long format and have a similar structure to our data set."
6464
]
6565
},
6666
{

nbs/docs/tutorials/longhorizon_transformers.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -69,7 +69,7 @@
6969
"\n",
7070
"It returns three DataFrames: `Y_df` contains the values for the target variables, `X_df` contains exogenous calendar features and `S_df` contains static features for each time series (none for ETTm2). For this example we will only use `Y_df`.\n",
7171
"\n",
72-
"If you want to use your own data just replace `Y_df`. Be sure to use a long format and have a simmilar structure than our data set."
72+
"If you want to use your own data just replace `Y_df`. Be sure to use a long format and have a similar structure to our data set."
7373
]
7474
},
7575
{

nbs/docs/tutorials/robust_forecasting.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -687,7 +687,7 @@
687687
"cell_type": "markdown",
688688
"metadata": {},
689689
"source": [
690-
"Finally, we plot the forecasts of both models againts the real values.\n",
690+
"Finally, we plot the forecasts of both models against the real values.\n",
691691
"\n",
692692
"And evaluate the accuracy of the `NHITS-Huber` and `NHITS-Normal` forecasters."
693693
]
