
Commit 0adcb58

[DOCS] Docstrings (#1279)
1 parent: bb315be


66 files changed (+640 additions, −457 deletions)

nbs/models.autoformer.ipynb

Lines changed: 8 additions & 5 deletions
@@ -445,8 +445,9 @@
 "\t`activation`: str=`GELU`, activation from ['ReLU', 'Softplus', 'Tanh', 'SELU', 'LeakyReLU', 'PReLU', 'Sigmoid', 'GELU'].<br>\n",
 " `encoder_layers`: int=2, number of layers for the TCN encoder.<br>\n",
 " `decoder_layers`: int=1, number of layers for the MLP decoder.<br>\n",
-" `distil`: bool = True, wether the Autoformer decoder uses bottlenecks.<br>\n",
+" `MovingAvg_window`: int=25, window size for the moving average filter.<br>\n",
 " `loss`: PyTorch module, instantiated train loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
+" `valid_loss`: PyTorch module, instantiated validation loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
 " `max_steps`: int=1000, maximum number of training steps.<br>\n",
 " `learning_rate`: float=1e-3, Learning rate between (0, 1).<br>\n",
 " `num_lr_decays`: int=-1, Number of learning rate decays, evenly distributed across max_steps.<br>\n",
@@ -460,7 +461,7 @@
 " `scaler_type`: str='robust', type of scaler for temporal inputs normalization see [temporal scalers](https://nixtla.github.io/neuralforecast/common.scalers.html).<br>\n",
 " `random_seed`: int=1, random_seed for pytorch initializer and numpy generators.<br>\n",
 " `drop_last_loader`: bool=False, if True `TimeSeriesDataLoader` drops last non-full batch.<br>\n",
-" `alias`: str, optional, Custom name of the model.<br>\n",
+" `alias`: str, optional, Custom name of the model.<br>\n",
 " `optimizer`: Subclass of 'torch.optim.Optimizer', optional, user specified optimizer instead of the default choice (Adam).<br>\n",
 " `optimizer_kwargs`: dict, optional, list of parameters used by the user specified `optimizer`.<br>\n",
 " `lr_scheduler`: Subclass of 'torch.optim.lr_scheduler.LRScheduler', optional, user specified lr_scheduler instead of the default choice (StepLR).<br>\n",
@@ -511,6 +512,7 @@
 " scaler_type: str = 'identity',\n",
 " random_seed: int = 1,\n",
 " drop_last_loader: bool = False,\n",
+" alias: Optional[str] = None,\n",
 " optimizer = None,\n",
 " optimizer_kwargs = None,\n",
 " lr_scheduler = None,\n",
@@ -519,8 +521,8 @@
 " **trainer_kwargs):\n",
 " super(Autoformer, self).__init__(h=h,\n",
 " input_size=input_size,\n",
-" hist_exog_list=hist_exog_list,\n",
 " stat_exog_list=stat_exog_list,\n",
+" hist_exog_list=hist_exog_list,\n",
 " futr_exog_list = futr_exog_list,\n",
 " exclude_insample_y = exclude_insample_y,\n",
 " loss=loss,\n",
@@ -531,14 +533,15 @@
 " early_stop_patience_steps=early_stop_patience_steps,\n",
 " val_check_steps=val_check_steps,\n",
 " batch_size=batch_size,\n",
-" windows_batch_size=windows_batch_size,\n",
 " valid_batch_size=valid_batch_size,\n",
+" windows_batch_size=windows_batch_size,\n",
 " inference_windows_batch_size=inference_windows_batch_size,\n",
 " start_padding_enabled = start_padding_enabled,\n",
 " step_size=step_size,\n",
 " scaler_type=scaler_type,\n",
-" drop_last_loader=drop_last_loader,\n",
 " random_seed=random_seed,\n",
+" drop_last_loader=drop_last_loader,\n",
+" alias=alias,\n",
 " optimizer=optimizer,\n",
 " optimizer_kwargs=optimizer_kwargs,\n",
 " lr_scheduler=lr_scheduler,\n",
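The recurring change across these files is threading a new `alias` keyword from each model's `__init__` into the base class. As a minimal sketch of that pattern (hypothetical classes, not neuralforecast's actual implementation), the base class stores the optional alias and falls back to the class name:

```python
class _BaseModel:
    """Hypothetical base class illustrating the `alias` pattern in this commit."""

    def __init__(self, alias=None):
        # Optional custom display name; None means "use the class name".
        self.alias = alias

    def __repr__(self):
        # Fall back to the concrete class name when no alias is given.
        return self.alias or type(self).__name__


class Autoformer(_BaseModel):
    def __init__(self, alias=None):
        # Each subclass forwards `alias` to the base, as the diffs show.
        super().__init__(alias=alias)
```

With this in place, `repr(Autoformer(alias='AF_hourly'))` yields the custom name, while `repr(Autoformer())` keeps `Autoformer`.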

nbs/models.bitcn.ipynb

Lines changed: 2 additions & 0 deletions
@@ -223,6 +223,7 @@
 " scaler_type: str = 'identity',\n",
 " random_seed: int = 1,\n",
 " drop_last_loader: bool = False,\n",
+" alias: Optional[str] = None,\n",
 " optimizer = None,\n",
 " optimizer_kwargs = None,\n",
 " lr_scheduler = None,\n",
@@ -252,6 +253,7 @@
 " scaler_type=scaler_type,\n",
 " random_seed=random_seed,\n",
 " drop_last_loader=drop_last_loader,\n",
+" alias=alias,\n",
 " optimizer=optimizer,\n",
 " optimizer_kwargs=optimizer_kwargs,\n",
 " lr_scheduler=lr_scheduler,\n",

nbs/models.deepar.ipynb

Lines changed: 9 additions & 7 deletions
@@ -153,7 +153,7 @@
 "\n",
 " **Parameters:**<br>\n",
 " `h`: int, Forecast horizon. <br>\n",
-" `input_size`: int, autorregresive inputs size, y=[1,2,3,4] input_size=2 -> y_[t-2:t]=[1,2].<br>\n",
+" `input_size`: int, maximum sequence length for truncated train backpropagation. Default -1 uses 3 * horizon <br>\n",
 " `lstm_n_layers`: int=2, number of LSTM layers.<br>\n",
 " `lstm_hidden_size`: int=128, LSTM hidden size.<br>\n",
 " `lstm_dropout`: float=0.1, LSTM dropout.<br>\n",
@@ -209,9 +209,9 @@
 " decoder_hidden_layers: int = 0,\n",
 " decoder_hidden_size: int = 0,\n",
 " trajectory_samples: int = 100,\n",
-" futr_exog_list = None,\n",
-" hist_exog_list = None,\n",
 " stat_exog_list = None,\n",
+" hist_exog_list = None,\n",
+" futr_exog_list = None,\n",
 " exclude_insample_y = False,\n",
 " loss = DistributionLoss(distribution='StudentT', level=[80, 90], return_params=False),\n",
 " valid_loss = MAE(),\n",
@@ -229,6 +229,7 @@
 " scaler_type: str = 'identity',\n",
 " random_seed: int = 1,\n",
 " drop_last_loader = False,\n",
+" alias: Optional[str] = None,\n",
 " optimizer = None,\n",
 " optimizer_kwargs = None,\n",
 " lr_scheduler = None,\n",
@@ -242,9 +243,9 @@
 " # Inherit BaseWindows class\n",
 " super(DeepAR, self).__init__(h=h,\n",
 " input_size=input_size,\n",
-" futr_exog_list=futr_exog_list,\n",
-" hist_exog_list=hist_exog_list,\n",
 " stat_exog_list=stat_exog_list,\n",
+" hist_exog_list=hist_exog_list,\n",
+" futr_exog_list=futr_exog_list,\n",
 " exclude_insample_y = exclude_insample_y,\n",
 " loss=loss,\n",
 " valid_loss=valid_loss,\n",
@@ -254,14 +255,15 @@
 " early_stop_patience_steps=early_stop_patience_steps,\n",
 " val_check_steps=val_check_steps,\n",
 " batch_size=batch_size,\n",
-" windows_batch_size=windows_batch_size,\n",
 " valid_batch_size=valid_batch_size,\n",
+" windows_batch_size=windows_batch_size,\n",
 " inference_windows_batch_size=inference_windows_batch_size,\n",
 " start_padding_enabled=start_padding_enabled,\n",
 " step_size=step_size,\n",
 " scaler_type=scaler_type,\n",
-" drop_last_loader=drop_last_loader,\n",
 " random_seed=random_seed,\n",
+" drop_last_loader=drop_last_loader,\n",
+" alias=alias,\n",
 " optimizer=optimizer,\n",
 " optimizer_kwargs=optimizer_kwargs,\n",
 " lr_scheduler=lr_scheduler,\n",
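The rewritten `input_size` docstring says a value of -1 defaults to three times the horizon. A sketch of that resolution rule (the helper name is hypothetical, not DeepAR's code; it only encodes the documented semantics):

```python
def resolve_input_size(input_size: int, h: int) -> int:
    """Resolve the effective context length: -1 means 3 * horizon."""
    if input_size == -1:
        # Documented default: use three forecast horizons of history.
        return 3 * h
    return input_size
```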

nbs/models.deepnpts.ipynb

Lines changed: 9 additions & 7 deletions
@@ -142,14 +142,14 @@
 " \n",
 " def __init__(self,\n",
 " h,\n",
-" input_size: int = -1,\n",
+" input_size: int,\n",
 " hidden_size: int = 32,\n",
 " batch_norm: bool = True,\n",
 " dropout: float = 0.1,\n",
 " n_layers: int = 2,\n",
-" futr_exog_list = None,\n",
-" hist_exog_list = None,\n",
 " stat_exog_list = None,\n",
+" hist_exog_list = None,\n",
+" futr_exog_list = None,\n",
 " exclude_insample_y = False,\n",
 " loss = MAE(),\n",
 " valid_loss = MAE(),\n",
@@ -167,6 +167,7 @@
 " scaler_type: str = 'standard',\n",
 " random_seed: int = 1,\n",
 " drop_last_loader = False,\n",
+" alias: Optional[str] = None,\n",
 " optimizer = None,\n",
 " optimizer_kwargs = None,\n",
 " lr_scheduler = None,\n",
@@ -186,9 +187,9 @@
 " # Inherit BaseWindows class\n",
 " super(DeepNPTS, self).__init__(h=h,\n",
 " input_size=input_size,\n",
-" futr_exog_list=futr_exog_list,\n",
-" hist_exog_list=hist_exog_list,\n",
 " stat_exog_list=stat_exog_list,\n",
+" hist_exog_list=hist_exog_list,\n",
+" futr_exog_list=futr_exog_list,\n",
 " exclude_insample_y = exclude_insample_y,\n",
 " loss=loss,\n",
 " valid_loss=valid_loss,\n",
@@ -198,14 +199,15 @@
 " early_stop_patience_steps=early_stop_patience_steps,\n",
 " val_check_steps=val_check_steps,\n",
 " batch_size=batch_size,\n",
-" windows_batch_size=windows_batch_size,\n",
 " valid_batch_size=valid_batch_size,\n",
+" windows_batch_size=windows_batch_size,\n",
 " inference_windows_batch_size=inference_windows_batch_size,\n",
 " start_padding_enabled=start_padding_enabled,\n",
 " step_size=step_size,\n",
 " scaler_type=scaler_type,\n",
-" drop_last_loader=drop_last_loader,\n",
 " random_seed=random_seed,\n",
+" drop_last_loader=drop_last_loader,\n",
+" alias=alias,\n",
 " optimizer=optimizer,\n",
 " optimizer_kwargs=optimizer_kwargs,\n",
 " lr_scheduler=lr_scheduler,\n",

nbs/models.dilated_rnn.ipynb

Lines changed: 11 additions & 4 deletions
@@ -376,8 +376,8 @@
 "\n",
 " **Parameters:**<br>\n",
 " `h`: int, forecast horizon.<br>\n",
-" `input_size`: int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.<br>\n",
-" `inference_input_size`: int, maximum sequence length for truncated inference. Default -1 uses all history.<br>\n",
+" `input_size`: int, maximum sequence length for truncated train backpropagation. Default -1 uses 3 * horizon <br>\n",
+" `inference_input_size`: int, maximum sequence length for truncated inference. Default None uses input_size history.<br>\n",
 " `cell_type`: str, type of RNN cell to use. Options: 'GRU', 'RNN', 'LSTM', 'ResLSTM', 'AttentiveLSTM'.<br>\n",
 " `dilations`: int list, dilations betweem layers.<br>\n",
 " `encoder_hidden_size`: int=200, units for the RNN's hidden state size.<br>\n",
@@ -387,6 +387,7 @@
 " `futr_exog_list`: str list, future exogenous columns.<br>\n",
 " `hist_exog_list`: str list, historic exogenous columns.<br>\n",
 " `stat_exog_list`: str list, static exogenous columns.<br>\n",
+" `exclude_insample_y`: bool=False, the model skips the autoregressive features y[t-input_size:t] if True.<br>\n",
 " `loss`: PyTorch module, instantiated train loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
 " `valid_loss`: PyTorch module=`loss`, instantiated valid loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
 " `max_steps`: int, maximum number of training steps.<br>\n",
@@ -396,6 +397,9 @@
 " `val_check_steps`: int, Number of training steps between every validation loss check.<br>\n",
 " `batch_size`: int=32, number of different series in each batch.<br>\n",
 " `valid_batch_size`: int=None, number of different series in each validation and test batch.<br>\n",
+" `windows_batch_size`: int=128, number of windows to sample in each training batch, default uses all.<br>\n",
+" `inference_windows_batch_size`: int=1024, number of windows to sample in each inference batch, -1 uses all.<br>\n",
+" `start_padding_enabled`: bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.<br> \n",
 " `step_size`: int=1, step size between each window of temporal data.<br>\n",
 " `scaler_type`: str='robust', type of scaler for temporal inputs normalization see [temporal scalers](https://nixtla.github.io/neuralforecast/common.scalers.html).<br>\n",
 " `random_seed`: int=1, random_seed for pytorch initializer and numpy generators.<br>\n",
@@ -417,8 +421,8 @@
 "\n",
 " def __init__(self,\n",
 " h: int,\n",
-" input_size: int,\n",
-" inference_input_size: int = -1,\n",
+" input_size: int = -1,\n",
+" inference_input_size: Optional[int] = None,\n",
 " cell_type: str = 'LSTM',\n",
 " dilations: List[List[int]] = [[1, 2], [4, 8]],\n",
 " encoder_hidden_size: int = 128,\n",
@@ -445,6 +449,7 @@
 " scaler_type: str = 'robust',\n",
 " random_seed: int = 1,\n",
 " drop_last_loader: bool = False,\n",
+" alias: Optional[str] = None,\n",
 " optimizer = None,\n",
 " optimizer_kwargs = None,\n",
 " lr_scheduler = None,\n",
@@ -454,6 +459,7 @@
 " super(DilatedRNN, self).__init__(\n",
 " h=h,\n",
 " input_size=input_size,\n",
+" inference_input_size=inference_input_size,\n",
 " futr_exog_list=futr_exog_list,\n",
 " hist_exog_list=hist_exog_list,\n",
 " stat_exog_list=stat_exog_list,\n",
@@ -474,6 +480,7 @@
 " scaler_type=scaler_type,\n",
 " random_seed=random_seed,\n",
 " drop_last_loader=drop_last_loader,\n",
+" alias=alias,\n",
 " optimizer=optimizer,\n",
 " optimizer_kwargs=optimizer_kwargs,\n",
 " lr_scheduler=lr_scheduler,\n",
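DilatedRNN's `inference_input_size` now defaults to `None`, documented as "uses input_size history". A sketch of that fallback (hypothetical helper, assuming only the documented semantics):

```python
from typing import Optional

def resolve_inference_input_size(inference_input_size: Optional[int],
                                 input_size: int) -> int:
    """None falls back to the training context length (input_size)."""
    if inference_input_size is None:
        return input_size
    return inference_input_size
```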

nbs/models.dlinear.ipynb

Lines changed: 7 additions & 3 deletions
@@ -143,13 +143,14 @@
 "\n",
 " *Parameters:*<br>\n",
 " `h`: int, forecast horizon.<br>\n",
-" `input_size`: int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.<br>\n",
-" `futr_exog_list`: str list, future exogenous columns.<br>\n",
-" `hist_exog_list`: str list, historic exogenous columns.<br>\n",
+" `input_size`: int, maximum sequence length for truncated train backpropagation. <br>\n",
 " `stat_exog_list`: str list, static exogenous columns.<br>\n",
+" `hist_exog_list`: str list, historic exogenous columns.<br>\n",
+" `futr_exog_list`: str list, future exogenous columns.<br>\n",
 " `exclude_insample_y`: bool=False, the model skips the autoregressive features y[t-input_size:t] if True.<br>\n",
 " `moving_avg_window`: int=25, window size for trend-seasonality decomposition. Should be uneven.<br>\n",
 " `loss`: PyTorch module, instantiated train loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
+" `valid_loss`: PyTorch module=`loss`, instantiated valid loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
 " `max_steps`: int=1000, maximum number of training steps.<br>\n",
 " `learning_rate`: float=1e-3, Learning rate between (0, 1).<br>\n",
 " `num_lr_decays`: int=-1, Number of learning rate decays, evenly distributed across max_steps.<br>\n",
@@ -160,6 +161,7 @@
 " `windows_batch_size`: int=1024, number of windows to sample in each training batch, default uses all.<br>\n",
 " `inference_windows_batch_size`: int=1024, number of windows to sample in each inference batch.<br>\n",
 " `start_padding_enabled`: bool=False, if True, the model will pad the time series with zeros at the beginning, by input size.<br>\n",
+" `step_size`: int=1, step size between each window of temporal data.<br>\n",
 " `scaler_type`: str='robust', type of scaler for temporal inputs normalization see [temporal scalers](https://nixtla.github.io/neuralforecast/common.scalers.html).<br>\n",
 " `random_seed`: int=1, random_seed for pytorch initializer and numpy generators.<br>\n",
 " `drop_last_loader`: bool=False, if True `TimeSeriesDataLoader` drops last non-full batch.<br>\n",
@@ -205,6 +207,7 @@
 " scaler_type: str = 'identity',\n",
 " random_seed: int = 1,\n",
 " drop_last_loader: bool = False,\n",
+" alias: Optional[str] = None,\n",
 " optimizer = None,\n",
 " optimizer_kwargs = None,\n",
 " lr_scheduler = None,\n",
@@ -232,6 +235,7 @@
 " step_size=step_size,\n",
 " scaler_type=scaler_type,\n",
 " drop_last_loader=drop_last_loader,\n",
+" alias=alias,\n",
 " random_seed=random_seed,\n",
 " optimizer=optimizer,\n",
 " optimizer_kwargs=optimizer_kwargs,\n",
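DLinear's docstring describes `moving_avg_window` as the window for trend-seasonality decomposition and notes it "should be uneven" (odd), so the average stays centered. A minimal sketch of such a decomposition with edge replication (illustrative only, not the model's PyTorch implementation):

```python
def decompose(y, window=25):
    """Split a series into trend (centered moving average) and remainder."""
    assert window % 2 == 1, "window should be uneven (odd) so it can be centered"
    pad = window // 2
    # Replicate edge values so every position has a full centered window.
    padded = [y[0]] * pad + list(y) + [y[-1]] * pad
    trend = [sum(padded[i:i + window]) / window for i in range(len(y))]
    seasonal = [yt - tt for yt, tt in zip(y, trend)]
    return trend, seasonal
```

On a constant series the trend recovers the series exactly and the seasonal component is zero, which is a quick sanity check for the centering.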
