@@ -21,24 +21,39 @@ changelog <https://keepachangelog.com/en/1.1.0/>`_ format. This project follows
 .. Removed
 .. #######

-Unreleased
-----------
+Version 2025.9 - 2025-08-18
+---------------------------

 Added
 #####

-- Additional logs to the checkpoints, model and the output dirs at the end of training
-- When downloading checkpoints and models from Hugging Face, the files will be cached
-  locally and re-used.
+- Use the best model instead of the latest model for evaluation at the end of training.
+- Log the best epoch when loading checkpoints.
+- Allow changing the scheduler factor in PET.
+- Introduce checkpoint versioning and updating.
+- Add CI tests on GPU.
+- Log the number of model parameters before training starts.
+- Add additional logs to the checkpoints, model, and output directories at the end of
+  training.
+- Cache files locally and re-use them when downloading checkpoints and models from
+  Hugging Face.
 - ``extra_data`` is now a valid section in the ``options.yaml`` file, allowing users to
-  add custom data to the training set. The data is contained in the dataloader and can
-  be used in custom loss functions or models.
-- ``mtt eval`` can be used to evaluate models on a ``DiskDataset``.
+  add custom data to the training set. The data is included in the dataloader and can be
+  used in custom loss functions or models.
+- ``mtt eval`` can now evaluate models on a ``DiskDataset``.
+
+Changed
+#######
+
+- Updated to a new general composition model.
+- Updated to a new implementation of LLPR.

 Fixed
 #####

-- Log is shown when training with ``restart="auto"``
+- Fixed ``device`` and ``dtype`` not being set during LoRA fine-tuning in PET.
+- Log messages are now shown when training with ``restart="auto"``.
+- Fixed incorrect sub-section naming in the Wandb logger.

 Version 2025.8 - 2025-06-11
 ---------------------------