Patch release focusing on:
- Bug fix for the persisting bug of passing `weights_only` in `load_from_checkpoint` for `lightning <2.6`.
- Bug fix for the non-writeable encoder issue caused by pandas copy-on-write behavior.
- [BUG] Fix non-writeable encoder issue (torch doesn't support the conversion of non-writeable numpy arrays) and update tests (#1989) @cngmid
- [BUG] Solve the persisting bug of passing `weights_only` in `load_from_checkpoint` (#2027) @phoeenniixx
- [MNT] Add CI step with pinned dependencies as of Nov 2025 (#2029) @phoeenniixx
@cngmid, @phoeenniixx
Release focusing on:
- python 3.14 support
- Solving the unpickling error in weight loading
- Deduplicating utilities with `scikit-base` and adding it as a core dependency
- Addition of new `predict` interface for Beta v2
- Improvements to model backends
- Refactor N-BEATS blocks to separate KAN logic by @khenm in #2012
- Efficient Attention Backend for TimeXer by @anasashb in #1997
- New `predict` interface for v2 models by @phoeenniixx in #1984
- Tuner import change due to a Lightning breaking change. Lightning v2.6 introduced a breaking change in its checkpoint loading behavior, which caused unpickling errors during weight loading in `pytorch-forecasting` (see #2000). To address this, `pytorch-forecasting` now provides its own `Tuner` wrapper that exposes the required `weights_only` argument when calling `lr_find()`.
- When using `pytorch-forecasting > 1.5.0` with `lightning > 2.5`, please use `pytorch_forecasting.tuning.Tuner` in place of `lightning.pytorch.tuner.Tuner`. See #2000 for details.
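The migration is a one-line import change; assuming the wrapper keeps lightning's `Tuner` call signature (the `trainer`, `model`, and `train_dataloader` names below are illustrative, not from the release notes):

```python
# Before (hits unpickling errors during weight loading under lightning >= 2.6, see #2000):
# from lightning.pytorch.tuner import Tuner

# After: the pytorch-forecasting wrapper, which exposes `weights_only`
from pytorch_forecasting.tuning import Tuner

tuner = Tuner(trainer)  # `trainer`: your lightning Trainer instance
res = tuner.lr_find(model, train_dataloaders=train_dataloader)
```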
- [MNT] Dependabot: Bump actions/upload-artifact from 4 to 5 (#1986) @dependabot[bot]
- [MNT] Dependabot: Bump actions/download-artifact from 5 to 6 (#1985) @dependabot[bot]
- [MNT] Fix typos (#1988) @szepeviktor
- [MNT] Dependabot: Bump actions/checkout from 5 to 6 (#1991) @dependabot[bot]
- [MNT] Add version bound for `lightning` (#2001) @phoeenniixx
- [MNT] Dependabot: Bump actions/upload-artifact from 5 to 6 (#2005) @dependabot[bot]
- [MNT] Dependabot: Bump actions/download-artifact from 6 to 7 (#2006) @dependabot[bot]
- [MNT] Dependabot: Update sphinx requirement from <8.2.4,>3.2 to >3.2,<9.1.1 (#2013) @dependabot[bot]
- [MNT] Dependabot: Update lightning requirement from <2.6.0,>=2.0.0 to >=2.0.0,<2.7.0 (#2002) @dependabot[bot]
- [MNT] Add python 3.14 support (#2015) @phoeenniixx
- [MNT] Update changelog generator script to return markdown files (#2016) @phoeenniixx
- [MNT] deduplicating utilities with `scikit-base` (#1929) @fkiraly
- [MNT] Update `ruff` linting target version to `python 3.10` (#2017) @phoeenniixx
- [ENH] Consistent 3D output for single-target point predictions in `TimeXer` v1 (#1936) @PranavBhatP
- [ENH] Efficient Attention Backend for TimeXer (#1997) @anasashb
- [ENH] Add `predict` to v2 models (#1984) @phoeenniixx
- [ENH] Refactor N-BEATS blocks to separate KAN logic (#2012) @khenm
- [BUG] Align TimeXer v2 endogenous/exogenous usage with tslib metadata (#2009) @ahmedkansulum
- [BUG] Solve the unpickling error in weight loading (#2000) @phoeenniixx
- [DOC] add `CODE_OF_CONDUCT.md` and `GOVERNANCE.md` (#2014) @phoeenniixx
@ahmedkansulum, @anasashb, @dependabot[bot], @fkiraly, @khenm, @phoeenniixx, @PranavBhatP, @szepeviktor, @agobbifbk
Release focusing on:
- python 3.9 end-of-life
- changes to testing framework.
- New estimators in `pytorch-forecasting` v1 and beta v2
- Kolmogorov Arnold Block for `NBeats` by @Sohaib-Ahmed21 in sktime#1751
- `xLSTMTime` implementation by @phoeenniixx in sktime#1709
- Implementing D2 data module, tests and `TimeXer` model from `tslib` for PTF v2 by @PranavBhatP in sktime#1836
- Add `DLinear` model from `tslib` for PTF v2 by @PranavBhatP in sktime#1874
- Add `Samformer` model for PTF v2 from DSIPTS by @PranavBhatP in sktime#1952
- `Tide` model in PTF v2 interface from `dsipts` by @phoeenniixx in sktime#1889
- [ENH] Test framework for `ptf-v2` by @phoeenniixx in sktime#1841
- [ENH] Implementing D2 data module, tests and `TimeXer` model from `tslib` for v2 by @PranavBhatP in sktime#1836
- [ENH] `DLinear` model from `tslib` by @PranavBhatP in sktime#1874
- [ENH] Enable `DeprecationWarning`, `PendingDeprecationWarning` and `FutureWarning` when running pytest by @fnhirwa in sktime#1912
- [ENH] Suppress `__array_wrap__` warning in `numpy 2` for `torch` and `pandas` by @fnhirwa in sktime#1911
- [ENH] Suppress PyTorch deprecation warning: UserWarning: `nn.init.constant` is now deprecated in favor of `nn.init.constant_` by @fnhirwa in sktime#1915
- [ENH] two-way linkage of model package classes and neural network classes by @fkiraly in sktime#1888
- [ENH] Add a copy of `BaseFixtureGenerator` to `pytorch-forecasting/tests/_base` as a true base class by @PranavBhatP in sktime#1919
- [ENH] Remove references to model from the `BaseFixtureGenerator` by @phoeenniixx in sktime#1923
- [ENH] Improve test framework for v1 models by @phoeenniixx in sktime#1908
- [ENH] `xLSTMTime` implementation by @phoeenniixx in sktime#1709
- [ENH] Improve test framework for v1 metrics by @PranavBhatP in sktime#1907
- [ENH] `Tide` model in `v2` interface by @phoeenniixx in sktime#1889
- [ENH] docstring test suite for functions by @fkiraly in sktime#1955
- [ENH] Add missing test for forward output of `TimeXer` as proposed in #1936 by @PranavBhatP in sktime#1951
- [ENH] Add `Samformer` model for PTF v2 from DSIPTS by @PranavBhatP in sktime#1952
- [ENH] Kolmogorov Arnold Block for NBeats by @Sohaib-Ahmed21 in sktime#1751
- [ENH] Standardize output format for `tslib` v2 models by @phoeenniixx in sktime#1965
- [ENH] Add `Metrics` support to `ptf-v2` by @phoeenniixx in sktime#1960
- [ENH] `check_estimator` utility for checking new estimators against unified API contract by @fkiraly in sktime#1954
- [ENH] Standardize testing of estimator outputs and skip tests for non-conformant estimators by @PranavBhatP in sktime#1971
- [BUG] Fix issue with `EncoderNormalizer(method='standard', center=False)` for scale value by @fnhirwa in sktime#1902
- [BUG] fixed memory leak in `TimeSeriesDataset` by using `@cached_property` and clean-up of index construction by @Vishnu-Rangiah in sktime#1905
- [BUG] Fix issue with `plot_prediction_actual_by_variable` unsupported operand type(s) for *: 'numpy.ndarray' and 'Tensor' by @fnhirwa in sktime#1903
- [BUG] Correctly set lagged variables to known when lag >= horizon by @hubkrieb in sktime#1910
- [BUG] Updated base_model.py to account for importing error by @Himanshu-Verma-ds in sktime#1488
- [BUG][DOC] Fix documentation: pass loss argument to BaseModel in custom models tutorial example by @PranavBhatP in sktime#1931
- [BUG] fix broken version inspection if package distribution has `None` name by @lohraspco in sktime#1926
- [BUG] fix sporadic `tkinter` failures in CI by @fkiraly in sktime#1937
- [BUG] Device inconsistency in `MQF2DistributionLoss` raising: RuntimeError: Expected all tensors to be on the same device by @fnhirwa in sktime#1916
- [BUG] fixed memory leak in BaseModel by detaching some tensors by @zju-ys in sktime#1924
- [BUG] Fix `TimeSeriesDataSet` wrong inferred `tensor` `dtype` when `time_idx` is included in features by @cngmid in sktime#1950
- [BUG] standardize output format of xLSTMTime estimator for point predictions by @sanskarmodi8 in sktime#1978
- [BUG] Standardize output format of NBeats and NBeatsKAN estimators by @sanskarmodi8 in sktime#1977
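The `@cached_property` fix (sktime#1905) follows the standard `functools` pattern: the value is computed on first access and stored on the instance, avoiding repeated index construction. A generic sketch (the `index` attribute and `Dataset` class here are illustrative, not the actual `TimeSeriesDataset` internals):

```python
from functools import cached_property

class Dataset:
    def __init__(self, n):
        self.n = n
        self.builds = 0  # counts how often the index is constructed

    @cached_property
    def index(self):
        self.builds += 1
        return list(range(self.n))  # stand-in for expensive index construction

ds = Dataset(5)
_ = ds.index
_ = ds.index  # second access hits the per-instance cache
print(ds.builds)  # 1
```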
- [DOC] Correct documentation for N-BEATS by @Pinaka07 in sktime#1914
- [DOC] 1.1.0 changelog - missing entries by @jdb78 in sktime#1512
- [DOC] fix minor typo in changelog by @fkiraly in sktime#1917
- [DOC] Missing parenthesis in docstring of MASE by @caph1993 in sktime#1944
- [MNT] remove import conditionals for `python 3.6` by @fkiraly in sktime#1928
- [MNT] Dependabot: bump actions/download-artifact from 4 to 5 by @dependabot[bot] in sktime#1939
- [MNT] Dependabot: Bump actions/checkout from 4 to 5 by @dependabot[bot] in sktime#1942
- [MNT] Check versions in wheels workflow by @szepeviktor in sktime#1948
- [MNT] Dependabot: Bump actions/setup-python from 5 to 6 by @dependabot[bot] in sktime#1963
- [MNT] Update CODEOWNERS with current core dev state by @fkiraly in sktime#1972
- [MNT] python 3.9 end-of-life by @phoeenniixx in sktime#1980
@agobbifbk, @caph1993, @cngmid, @fkiraly, @fnhirwa, @Himanshu-Verma-ds, @hubkrieb, @jdb78, @lohraspco, @phoeenniixx, @Pinaka07, @PranavBhatP, @sanskarmodi8, @Sohaib-Ahmed21, @szepeviktor, @Vishnu-Rangiah, @zju-ys
Feature and maintenance update.
- beta: experimental unified API for `pytorch-forecasting 2.0` release: https://github.com/sktime/pytorch-forecasting/blob/main/docs/source/tutorials/ptf_V2_example.ipynb. Feedback appreciated in issue 1736.
- `TimeXer` model from `thuml` by @PranavBhatP in sktime#1797
- [ENH] Add Type hints to `TimeSeriesDataSet` to align with pep 585 by @fnhirwa in sktime#1819
- [ENH] Allow multiple instances from multiple mock classes in `_safe_import` by @fnhirwa in sktime#1818
- [ENH] EXPERIMENTAL PR: D1 and D2 layer for v2 refactor by @phoeenniixx in sktime#1811
- [ENH] EXPERIMENTAL PR: make the `data_module` dataclass-like by @phoeenniixx in sktime#1832
- [ENH] EXPERIMENTAL: TFT model based on the new data pipeline by @phoeenniixx in sktime#1812
- [ENH] test suite for `pytorch-forecasting` forecasters by @fkiraly in sktime#1780
- [ENH] `TemporalFusionTransformer` - allow mixed precision training by @Marcrb2 in sktime#1518
- [ENH] move model base classes into `models.base` module - part 1 by @fkiraly in sktime#1773
- [ENH] move model base classes into `models.base` module - part 2 by @fkiraly in sktime#1774
- [ENH] move model base classes into `models.base` module - part 3 by @fkiraly in sktime#1776
- [ENH] tests for `TiDEModel` by @PranavBhatP in sktime#1843
- [ENH] refactor test metadata container to include data loader configs by @fkiraly in sktime#1861
- [ENH] `DecoderMLP` metadata container for v1 tests by @fkiraly in sktime#1859
- [ENH] `TimeXer` model from `thuml` by @PranavBhatP in sktime#1797
- [ENH] EXPERIMENTAL: Example notebook based on the new data pipeline by @phoeenniixx in sktime#1813
- [ENH] refactor test data scenario generation to `tests._data_scenarios` by @fkiraly in sktime#1877
- [BUG] fix absolute errorbar by @MartinoMensio in sktime#1579
- [BUG] EXPERIMENTAL PR: Solve the bug in `data_module` by @phoeenniixx in sktime#1834
- [BUG] fix incorrect concatenation dimension in `concat_sequences` by @cngmid in sktime#1827
- [BUG] Fix for the case when reduction is set to `none` by @fnhirwa in sktime#1872
- [BUG] enable silenced TFT v2 tests by @fkiraly in sktime#1878
- [DOC] fix `gradient_clip` value in tutorials to ensure reproducible outputs similar to the committed cell output by @gbilleyPeco in sktime#1750
- [DOC] Fix typos in getting started section of the documentation by @pietsjoh in sktime#1399
- [DOC] improved pull request template by @fkiraly in sktime#1866
- [DOC] add project badges to README: sponsoring and downloads by @fkiraly in sktime#1891
- [MNT] Isolate `cpflow` package, towards fixing readthedocs build by @fkiraly in sktime#1775
- [MNT] fix readthedocs build by @fkiraly in sktime#1777
- [MNT] move release to trusted publishers by @fkiraly in sktime#1800
- [MNT] standardize `dependabot.yml` by @fkiraly in sktime#1799
- [MNT] remove `tj-actions` by @fkiraly in sktime#1798
- [MNT] Dependabot: bump codecov/codecov-action from 1 to 5 by @dependabot in sktime#1803
- [MNT] disable automated merge and approve actions by @fkiraly in sktime#1804
- build(deps): update sphinx requirement from `<7.2.6,>3.2` to `>3.2,<8.2.4` by @dependabot in sktime#1787
- [MNT] Move config from `setup.cfg` to `pyproject.toml` by @Borda in sktime#1852
- [MNT] Move `pytest` configuration to `pyproject.toml` by @Borda in sktime#1851
- [MNT] Add 'UP' to extend-select for pyupgrade python syntax by @Borda in sktime#1856
- [MNT] Replace Black with Ruff formatting and update configuration by @Borda in sktime#1853
- [MNT] issue templates by @fkiraly in sktime#1867
- [MNT] Clearly define the MLP as a class/nn.model by @jobs-git in sktime#1864
@agobbifbk, @Borda, @cngmid, @fkiraly, @fnhirwa, @gbilleyPeco, @jobs-git, @Marcrb2, @MartinoMensio, @phoeenniixx, @pietsjoh, @PranavBhatP
Feature and maintenance update.
- `python 3.13` support
- `tide` model
- bugfixes for TFT
- [ENH] Tide model. by @Sohaib-Ahmed21 in sktime#1734
- [ENH] refactor `__init__` modules to no longer contain classes - preparatory commit by @fkiraly in sktime#1739
- [ENH] refactor `__init__` modules to no longer contain classes by @fkiraly in sktime#1738
- [ENH] extend package author attribution requirement in license to present by @fkiraly in sktime#1737
- [ENH] linting tide model by @fkiraly in sktime#1742
- [ENH] move tide model - part 1 by @fkiraly in sktime#1743
- [ENH] move tide model - part 2 by @fkiraly in sktime#1744
- [ENH] clean-up refactor of `TimeSeriesDataSet` by @fkiraly in sktime#1746
- [BUG] Bugfix when no exogenous variable is passed to TFT by @XinyuWuu in sktime#1667
- [BUG] Fix issue when training TFT model on mac M1 mps device. element 0 of tensors does not require grad and does not have a grad_fn by @fnhirwa in sktime#1725
- [DOC] Fix the spelling error of holding by @xiaokongkong in sktime#1719
- [DOC] Updated documentation on `TimeSeriesDataSet.predict_mode` by @madprogramer in sktime#1720
- [DOC] General PR to improve docs by @julian-fong in sktime#1705
- [DOC] Correct argument for optimizer `ranger` in `Temporal Fusion Transformer` tutorial by @fnhirwa in sktime#1724
- [DOC] Fixed typo "monotone_constaints" by @Luke-Chesley in sktime#1516
- [DOC] minor fixes in documentation by @fkiraly in sktime#1763
- [DOC] improve and add `tide` model to docs by @PranavBhatP in sktime#1762
- [MNT] update linting: limit line length to 88, add `isort` by @fkiraly in sktime#1740
- [MNT] update nbeats/sub_modules.py to remove overhead in tensor creation by @d-schmitt in sktime#1580
- [MNT] Temporary fix for lint errors to conform to the recent changes in linting rules see #1749 by @fnhirwa in sktime#1748
- [MNT] python 3.13 support by @fkiraly in sktime#1691
@d-schmitt, @fkiraly, @fnhirwa, @julian-fong, @Luke-Chesley, @madprogramer, @PranavBhatP, @Sohaib-Ahmed21, @xiaokongkong, @XinyuWuu
Maintenance update, minor feature additions and bugfixes.
- support for `numpy 2.X`
- end of life for `python 3.8`
- fixed documentation build
- bugfixes

- `pytorch-forecasting` is now compatible with `numpy 2.X` (core dependency)
- `optuna` (tuning soft dependency) bounds have been updated to `>=3.1.0,<5.0.0`
- [BUG] fix `AttributeError: 'ExperimentWriter' object has no attribute 'add_figure'` by @ewth in sktime#1694
- [DOC] typo fixes in changelog by @fkiraly in sktime#1660
- [DOC] update URLs to `sktime` org by @fkiraly in sktime#1674
- [MNT] handle `mps backend` for lower versions of pytorch and fix `mps` failure on `macOS-latest` runner by @fnhirwa in sktime#1648
- [MNT] updates the actions in the doc build CI by @fkiraly in sktime#1673
- [MNT] fixes to `readthedocs.yml` by @fkiraly in sktime#1676
- [MNT] updates references in CI and doc locations to `main` by @fkiraly in sktime#1677
- [MNT] `show_versions` utility by @fkiraly in sktime#1688
- [MNT] Relax `numpy` bound to `numpy<3.0.0` by @XinyuWuu in sktime#1624
- [MNT] fix `pre-commit` failures on `main` by @ewth in sktime#1696
- [MNT] Move linting to ruff by @airookie17 in sktime#1692, 1693
- [MNT] `ruff` linting - allow use of assert (S101) by @fkiraly in sktime#1701
- [MNT] `ruff` - fix list related linting failures C416 and C419 by @fkiraly in sktime#1702
- [MNT] Delete poetry.lock by @benHeid in sktime#1704
- [MNT] fix `black` doesn't have `extras` dependency by @fnhirwa in sktime#1697
- [MNT] Remove mutable objects from defaults by @eugenio-mercuriali in sktime#1699
- [MNT] remove docs build in ci for all pr by @yarnabrina in sktime#1712
- [MNT] EOL for python 3.8 by @fkiraly in sktime#1661
- [MNT] remove `poetry.lock` by @fkiraly in sktime#1651
- [MNT] update `pre-commit` requirement from `<4.0.0,>=3.2.0` to `>=3.2.0,<5.0.0` by @dependabot in https://github.com/sktime/pytorch-forecasting/pull/
- [MNT] update optuna requirement from `<4.0.0,>=3.1.0` to `>=3.1.0,<5.0.0` by @dependabot in sktime#1715
- [MNT] CODEOWNERS file by @fkiraly in sktime#1710
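The mutable-defaults cleanup (sktime#1699) addresses a general Python pitfall: a mutable default argument is created once at function definition time and shared across all calls. A minimal illustration, unrelated to any specific pytorch-forecasting API:

```python
def append_bad(item, items=[]):  # shared list: created once at definition time
    items.append(item)
    return items

def append_good(item, items=None):  # sentinel default: fresh list per call
    if items is None:
        items = []
    items.append(item)
    return items

append_bad(1)
print(append_bad(2))   # [1, 2] - state leaked from the first call
print(append_good(1))  # [1]
print(append_good(2))  # [2] - no leakage
```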
@airookie17, @benHeid, @eugenio-mercuriali, @ewth, @fkiraly, @fnhirwa, @XinyuWuu, @yarnabrina
Hotfix for accidental package name change in pyproject.toml.
The package name is now corrected to pytorch-forecasting.
Maintenance update widening compatibility ranges and consolidating dependencies:
- `TSMixer` model, see TSMixer: An All-MLP Architecture for Time Series Forecasting.
- support for python 3.11 and 3.12, added CI testing
- support for MacOS, added CI testing
- core dependencies have been minimized to `numpy`, `torch`, `lightning`, `scipy`, `pandas`, and `scikit-learn`.
- soft dependencies are available in soft dependency sets: `all_extras` for all soft dependencies, and `tuning` for `optuna` based optimization.
- the following are no longer core dependencies and have been changed to optional dependencies: `optuna`, `statsmodels`, `pytorch-optimize`, `matplotlib`. Environments relying on functionality requiring these dependencies need to be updated to install these explicitly.
- `optuna` bounds have been updated to `optuna >=3.1.0,<4.0.0`
- `optuna-integrate` is now an additional soft dependency, in case of `optuna >=3.3.0`
- from 1.2.0, the default optimizer will be changed from `"ranger"` to `"adam"` to avoid non-`torch` dependencies in defaults. `pytorch-optimize` optimizers can still be used. Users should set the optimizer explicitly to continue using `"ranger"`.
- from 1.1.0, the loggers do not log figures if soft dependency `matplotlib` is not present, but will raise no exceptions in this case. To log figures, ensure that `matplotlib` is installed.
@andre-marcos-perez, @avirsaha, @bendavidsteel, @benHeid, @bohdan-safoniuk, @Borda, @CahidArda, @fkiraly, @fnhirwa, @germanKoch, @jacktang, @jdb78, @jurgispods, @maartensukel, @MBelniak, @orangehe, @pavelzw, @sfalkena, @tmct, @XinyuWuu, @yarnabrina
- Upgraded to pytorch 2.0 and lightning 2.0. This brings a couple of changes, such as configuration of trainers. See the lightning upgrade guide. For PyTorch Forecasting, this particularly matters if you are developing own models: the class method `epoch_end` has been renamed to `on_epoch_end`, `model.summarize()` is replaced with `ModelSummary(model, max_depth=-1)`, and `Tuner(trainer)` is its own class, so `trainer.tuner` needs replacing. (#1280)
- Changed the `predict()` interface, returning named tuple - see tutorials.
- The predict method is now using the lightning predict functionality and allows writing results to disk (#1280).
- Fixed robust scaler when quantiles are 0.0, and 1.0, i.e. minimum and maximum (#1142)
- Removed pandoc from dependencies as issue with poetry install (#1126)
- Added metric attributes for torchmetric resulting in better multi-GPU performance (#1126)
- "robust" encoder method can be customized by setting "center", "lower" and "upper" quantiles (#1126)
- DeepVar network (#923)
- Enable quantile loss for N-HiTS (#926)
- MQF2 loss (multivariate quantile loss) (#949)
- Non-causal attention for TFT (#949)
- Tweedie loss (#949)
- ImplicitQuantileNetworkDistributionLoss (#995)
- Fix learning scale schedule (#912)
- Fix TFT list/tuple issue at interpretation (#924)
- Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)
- Fix Aggregation and CompositeMetric resets (#949)
- Dropping Python 3.6 support, adding 3.10 support (#479)
- Refactored dataloader sampling - moved samplers to pytorch_forecasting.data.samplers module (#479)
- Changed transformation format for Encoders to dict from tuple (#949)
- jdb78
- Fix with creating tensors on correct devices (#908)
- Fix with MultiLoss when calculating gradient (#908)
- jdb78
- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (#890)
- Allow using torchmetrics as loss metrics (#776)
- Enable fitting `EncoderNormalizer()` with limited data history using `max_length` argument (#782)
- More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
- Fix concatenation of attention (#902)
- Fix pip install via github (#798)
- jdb78
- christy
- lukemerrick
- Seon82
- Added support for running `lightning.trainer.test` (#759)
- Fix inattention mutation to `x_cont` (#732)
- Compatibility with pytorch-lightning 1.5 (#758)
- eavae
- danielgafni
- jdb78
- Use target name instead of target number for logging metrics (#588)
- Optimizer can be initialized by passing string, class or function (#602)
- Add support for multiple outputs in Baseline model (#603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (#619)
- Dropping support for Python 3.6 and starting support for Python 3.9 (#639)
- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)
- Added missing transformation of prediction for MLP (#602)
- Fixed logging hyperparameters (#688)
- Ensure MultiNormalizer fit state is detected (#681)
- Fix infinite loop in TimeDistributedEmbeddingBag (#672)
- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K
- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`. Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (#518)
- Rename parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (#518)
- Transparent handling of transformations. Forward methods should now call two new methods (#518):
  - `transform_output` to explicitly rescale the network outputs into the de-normalized space
  - `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction` is still required, which is the main network output.
Example:
```python
def forward(self, x):
    normalized_prediction = self.module(x)
    prediction = self.transform_output(
        prediction=normalized_prediction, target_scale=x["target_scale"]
    )
    return self.to_network_output(prediction=prediction)
```
- Fix quantile prediction for tensors on GPUs for distribution losses (#491)
- Fix hyperparameter update for RecurrentNetwork.from_dataset method (#497)
- Improved validation of input parameters of TimeSeriesDataSet (#518)
- Allow lists for multiple losses and normalizers (#405)
- Warn if normalization is with scale `< 1e-7` (#429)
- Allow usage of distribution losses in all settings (#434)
- Fix issue when predicting and data is on different devices (#402)
- Fix non-iterable output (#404)
- Fix problem with moving data to CPU for multiple targets (#434)
- jdb78
- domplexity
- Adding a filter functionality to the timeseries dataset (#329)
- Add simple models such as LSTM, GRU and a MLP on the decoder (#380)
- Allow usage of any torch optimizer such as SGD (#380)
- Moving predictions to CPU to avoid running out of memory (#329)
- Correct determination of `output_size` for multi-target forecasting with the TemporalFusionTransformer (#328)
- Tqdm autonotebook fix to work outside of Jupyter (#338)
- Fix issue with yaml serialization for TensorboardLogger (#379)
- jdb78
- JakeForsey
- vakker
- Make tuning trainer kwargs overwritable (#300)
- Allow adding categories to NaNEncoder (#303)
- Underlying data is copied if modified. Original data is not modified inplace (#263)
- Allow plotting of interpretation on passed figure for NBEATS (#280)
- Fix memory leak for plotting and logging interpretation (#311)
- Correct shape of `predict()` method output for multi-targets (#268)
- Remove cloudpickle to allow GPU trained models to be loaded on CPU devices from checkpoints (#314)
- jdb78
- kigawas
- snumumrik
- Added missing output transformation which was switched off by default (#260)
- Add "Release Notes" section to docs (#237)
- Enable usage of lag variables for any model (#252)
- Require PyTorch>=1.7 (#245)
- Fix issue for multi-target forecasting when decoder length varies in single batch (#249)
- Enable longer subsequences for min_prediction_idx that were previously wrongfully excluded (#250)
- jdb78
- Adding support for multiple targets in the TimeSeriesDataSet (#199) and amended tutorials.
- Temporal fusion transformer and DeepAR with support for multiple targets (#199)
- Check for non-finite values in TimeSeriesDataSet and better validate scaler argument (#220)
- LSTM and GRU implementations that can handle zero-length sequences (#235)
- Helpers for implementing auto-regressive models (#236)
- TimeSeriesDataSet's `y` of the dataloader is a tuple of (target(s), weight) - potentially breaking for model or metrics implementations. Most implementations will not be affected, as hooks in BaseModel and MultiHorizonMetric were modified. (#199)
- Fixed autocorrelation for pytorch 1.7 (#220)
- Ensure reproducibility by replacing python `set()` with `dict.fromkeys()` (mostly TimeSeriesDataSet) (#221)
- Ensures BetaDistributionLoss does not lead to infinite loss if actuals are 0 or 1 (#233)
- Fix for GroupNormalizer if scaling by group (#223)
- Fix for TimeSeriesDataSet when using `min_prediction_idx` (#226)
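The `set()` to `dict.fromkeys()` swap (#221) works because dict keys preserve insertion order (guaranteed since Python 3.7), while set iteration order depends on hashing and can vary between runs; deduplication therefore becomes deterministic. A minimal illustration:

```python
groups = ["b", "a", "b", "c", "a"]

# Order-preserving deduplication: keys keep first-seen order
unique_ordered = list(dict.fromkeys(groups))
print(unique_ordered)  # ['b', 'a', 'c']

# set() also deduplicates, but its iteration order is not insertion order
# (string hashing is randomized per process), so downstream indexing built
# from it may differ between runs
unique_unordered = set(groups)
print(sorted(unique_unordered) == sorted(unique_ordered))  # True: same elements
```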
- jdb78
- JustinNeumann
- reumar
- rustyconover
- Tutorial on how to implement a new architecture covering basic and advanced use cases (#188)
- Additional and improved documentation - particularly of implementation details (#188)
- Moved multiple private methods to public methods (particularly logging) (#188)
- Moved `get_mask` method from BaseModel into utils module (#188)
- Instead of using label to communicate if model is training or validating, using `self.training` attribute (#188)
- Using `sample((n,))` of pytorch distributions instead of deprecated `sample_n(n)` method (#188)
- Beta distribution loss for probabilistic models such as DeepAR (#160)
- BREAKING: Simplifying how to apply transforms (such as logit or log) before and after applying encoder. Some transformations are included by default, but a tuple of a forward and reverse transform function can be passed for arbitrary transformations. This requires using a `transformation` keyword in target normalizers instead of, e.g., `log_scale` (#185)
- Incorrect target position if `len(static_reals) > 0`, leading to leakage (#184)
- Fixing predicting completely unseen series (#172)
- jdb78
- JakeForsey
- Using GRU cells with DeepAR (#153)
- GPU fix for variable sequence length (#169)
- Fix incorrect syntax for warning when removing series (#167)
- Fix issue when using unknown group ids in validation or test dataset (#172)
- Run non-failing CI on PRs from forks (#166, #156)
- Improved model selection guidance and explanations on how TimeSeriesDataSet works (#148)
- Clarify how to use with conda (#168)
- jdb78
- JakeForsey
- DeepAR by Amazon (#115)
- First autoregressive model in PyTorch Forecasting
- Distribution loss: normal, negative binomial and log-normal distributions
- Currently missing: handling lag variables and tutorial (planned for 0.6.1)
- Improved documentation on TimeSeriesDataSet and how to implement a new network (#145)
- Internals of encoders and how they store center and scale (#115)
- Update to PyTorch 1.7 and PyTorch Lightning 1.0.5 which came with breaking changes for CUDA handling and with optimizers (PyTorch Forecasting Ranger version) (#143, #137, #115)
- jdb78
- JakeForsey
- Fix issue where hyperparameter verbosity controlled only part of output (#118)
- Fix occasional error when `.get_parameters()` from `TimeSeriesDataSet` failed (#117)
- Remove redundant double pass through LSTM for temporal fusion transformer (#125)
- Prevent installation of pytorch-lightning 1.0.4 as it breaks the code (#127)
- Prevent modification of model defaults in-place (#112)
- Hyperparameter tuning with optuna to tutorial
- Control over verbosity of hyper parameter tuning
- Interpretation error when different batches had different maximum decoder lengths
- Fix some typos (no changes to user API)
This release has only one purpose: Allow usage of PyTorch Lightning 1.0 - all tests have passed.
- Additional checks for `TimeSeriesDataSet` inputs - now flagging if series are lost due to high `min_encoder_length` and ensuring parameters are integers
- Enable classification - simply change the target in the `TimeSeriesDataSet` to a non-float variable, use the `CrossEntropy` metric to optimize and output as many classes as you want to predict
- Ensured PyTorch Lightning 0.10 compatibility
- Using `LearningRateMonitor` instead of `LearningRateLogger`
- Use `EarlyStopping` callback in trainer `callbacks` instead of `early_stopping` argument
- Update metric system `update()` and `compute()` methods
- Use `Tuner(trainer).lr_find()` instead of `trainer.lr_find()` in tutorials and examples
- Update poetry to 1.1.0
- Removed attention to current datapoint in TFT decoder to generalise better over various sequence lengths
- Allow resuming optuna hyperparameter tuning study
- Fixed inconsistent naming and calculation of `encoder_length` in TimeSeriesDataSet when added as feature
- jdb78
- Backcast loss for N-BEATS network for better regularisation
- logging_metrics as explicit arguments to models
- MASE (Mean absolute scaled error) metric for training and reporting
- Metrics can be composed, e.g. `0.3 * metric1 + 0.7 * metric2`
- Aggregation metric that is computed on mean prediction over all samples to reduce mean-bias
- Increased speed of parsing data with missing datapoints. About 2s for 1M data points. If `numba` is installed, 0.2s for 1M data points
- Time-synchronize samples in batches: ensure that all samples in each batch have the same time index in decoder
- Improved subsequence detection in TimeSeriesDataSet ensures that there exists a subsequence starting and ending on each point in time.
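Composable metrics like `0.3 * metric1 + 0.7 * metric2` are typically implemented with operator overloading: `*` and `+` return a composite object that evaluates its children and combines the weighted results. A toy sketch of the pattern (not the actual pytorch-forecasting metric classes):

```python
class Metric:
    """Minimal composable metric: subclasses implement __call__."""
    def __add__(self, other):
        return Composite([self, other], [1.0, 1.0])

    def __mul__(self, weight):
        return Composite([self], [weight])

    __rmul__ = __mul__  # supports `0.3 * metric`

class Composite(Metric):
    def __init__(self, metrics, weights):
        self.metrics, self.weights = metrics, weights

    def __call__(self, y_pred, y_true):
        return sum(w * m(y_pred, y_true) for m, w in zip(self.metrics, self.weights))

class MAE(Metric):
    def __call__(self, y_pred, y_true):
        return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

class Bias(Metric):
    def __call__(self, y_pred, y_true):
        return sum(p - t for p, t in zip(y_pred, y_true)) / len(y_true)

loss = 0.3 * MAE() + 0.7 * Bias()
print(loss([2.0, 4.0], [1.0, 5.0]))  # 0.3 (MAE = 1.0, Bias = 0.0)
```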
- Fix `min_encoder_length = 0` being ignored and processed as `min_encoder_length = max_encoder_length`
- jdb78
- dehoyosb
- More tests driving coverage to ~90%
- Performance tweaks for temporal fusion transformer
- Reformatting with `isort`
- Improve documentation - particularly expand on hyper parameter tuning
- Fix PoissonLoss quantiles calculation
- Fix N-Beats visualisations
- Calculating partial dependency for a variable
- Improved documentation - in particular added FAQ section and improved tutorial
- Data for examples and tutorials can now be downloaded. Cloning the repo is not a requirement anymore
- Added Ranger Optimizer from `pytorch_ranger` package and fixed its warnings (part of preparations for conda package release)
- Use GPU for tests if available as part of preparation for GPU tests in CI
- BREAKING: Fix typo "add_decoder_length" to "add_encoder_length" in TimeSeriesDataSet
- Fixing plotting predictions vs actuals by slicing variables
Fix bug where predictions were not correctly logged in case of decoder_length == 1.
- Add favicon to docs page
Update build system requirements to be parsed correctly when installing with pip install git+https://github.com/jdb78/pytorch-forecasting
- Add tests for MacOS
- Automatic releases
- Coverage reporting
This release improves robustness of the code.
- Fixing bugs across the code, in particular:
  - Ensuring that code works on GPUs
  - Adding tests for models, dataset and normalisers
  - Test using GitHub Actions (tests on GPU are still missing)
- Extend documentation by improving docstrings and adding two tutorials.
- Improving default arguments for TimeSeriesDataSet to avoid surprises
- Basic tests for data and model (mostly integration tests)
- Automatic target normalization
- Improved visualization and logging of temporal fusion transformer
- Model bugfixes and performance improvements for temporal fusion transformer
- Metrics are reduced to calculating loss. Target transformations are done by new target transformer