docs/source/release_notes.rst
Upcoming Release
================

The features in this section are not released yet, but they will be part of the next release! To use them already, install the main branch, e.g. ``pip install git+https://github.com/assume-framework/assume``.

0.6.0 - (18th March 2026)
=========================

**Improvements:**
- **Deterministic behavior with seed setting**: Simulations are now deterministic by default for improved reproducibility. This is controlled via a seed setting in ``config.yaml`` files and therefore only applies to scenarios loaded via ``load_scenario_folder``. Note that complete determinism is not guaranteed for all hardware and software configurations, especially with PyTorch-based learning strategies. It may also reduce reinforcement learning performance, because non-deterministic optimizations are disabled.
- ``seed`` not set in the top level of the config: sets the seed to a fixed default value (42) for deterministic behavior.
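For illustration, a scenario's ``config.yaml`` might pin the seed at the top level like this (the integer value is an example; omitting the key falls back to the default of 42):

```yaml
# config.yaml (top level): fixes the random seed for the scenario.
# Leaving the key out uses the fixed default seed of 42.
seed: 1234
```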
- **Delete environment.yaml**: The ``environment.yaml`` file has been removed from the repository, as it was fully redundant with ``pyproject.toml`` and complicated maintenance. As before, users can create their own environment using the provided pip installation instructions, which allows for more flexibility and easier updates.
- **Add validation for simulation setup**: Added checks to validate the simulation setup for common issues, such as missing bidding strategies or inconsistent market configurations. Warnings are issued to inform users of potential problems that could affect simulation results.
- **Added reward calculation for unit operators**: Unit operators now have the opportunity to calculate rewards based on the returned orderbooks for their own purposes. This enables learning strategies at the unit-operator level, i.e. portfolio learning strategies.
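As a hedged sketch of what such an operator-level reward could look like, the snippet below sums profit over the accepted volumes of an operator's units. The orderbook field names and the ``portfolio_reward`` helper are hypothetical, not the actual assume schema or API:

```python
# Hypothetical sketch: a unit operator aggregating one reward over the
# orderbook returned for its units (field names are illustrative only).
def portfolio_reward(orderbook: list[dict], marginal_costs: dict) -> float:
    """Sum (accepted price - marginal cost) * accepted volume over all own units."""
    reward = 0.0
    for order in orderbook:
        cost = marginal_costs[order["unit_id"]]
        reward += (order["accepted_price"] - cost) * order["accepted_volume"]
    return reward

book = [
    {"unit_id": "pp_1", "accepted_price": 50.0, "accepted_volume": 100.0},
    {"unit_id": "pp_2", "accepted_price": 50.0, "accepted_volume": 40.0},
]
reward = portfolio_reward(book, {"pp_1": 30.0, "pp_2": 45.0})  # → 2200.0
```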
- **Structured Validation Error**: Introduces the new ``ValidationError`` to represent a failing validation. Since it derives from the base ``ValidationError``, all existing error handling remains compatible, but users can now also catch this specific error type to handle validation errors separately if desired.
- **Add support for Pandas 3**
- **Add support for Python 3.14**
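As a hedged illustration of the compatibility claim (the class names below are placeholders, not the actual assume classes), a handler that catches the base error class still catches the more specific subclass unchanged:

```python
class BaseValidationError(Exception):
    """Stand-in for the pre-existing base error class."""

class StructuredValidationError(BaseValidationError):
    """Stand-in for the new, more specific validation error."""
    def __init__(self, field: str, message: str):
        super().__init__(f"{field}: {message}")
        self.field = field

def validate(config: dict) -> None:
    # Illustrative check only, not the actual assume validation logic.
    if "markets" not in config:
        raise StructuredValidationError("markets", "missing market configuration")

# Existing handlers that catch the base class keep working unchanged:
try:
    validate({})
except BaseValidationError as err:
    message = str(err)  # → "markets: missing market configuration"
```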
**Bug Fixes:**
- **Fix buffer and update order**: Fixed the order of buffer writing and policy updating in the learning role so that both use exactly the same order, which is necessary for updates to use the correct data. This bug may have compromised learning with very heterogeneous units since the last release.
- **Fix data loss in RL learning role**: Fixed data loss in the RL learning role by implementing an atomic swap with carry-over for incomplete timesteps in the cache.
- **Update notebooks to always install latest repo version from Google Colab**: This ensures that the latest version is always used.
0.5.6 - (23rd December 2025)
============================
**Bug Fixes:**
- **Changed action clamping**: Action clamping now uses extreme values defined by dicts: instead of the min and max of a forward pass through the NN, clamping is based on the activation function of the actor network. Previously, the output range was incorrectly assumed based only on the input, which failed when weights were negative due to Xavier initialization.
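A minimal sketch of activation-based clamping (the mapping and helper are illustrative, not the actual implementation): with a ``tanh`` output activation the valid action range is [-1, 1] by construction, so the bounds come from the activation function rather than from values observed in a forward pass.

```python
# Illustrative only: bounds derived from the actor's output activation,
# not from the min/max of a particular forward pass.
ACTIVATION_BOUNDS = {
    "tanh": (-1.0, 1.0),    # tanh output is always in [-1, 1]
    "sigmoid": (0.0, 1.0),  # sigmoid output is always in [0, 1]
}

def clamp_actions(actions: list[float], activation: str) -> list[float]:
    low, high = ACTIVATION_BOUNDS[activation]
    return [min(max(a, low), high) for a in actions]

# Exploration noise can push actions outside the valid range:
clamped = clamp_actions([-1.7, 0.2, 1.3], "tanh")  # → [-1.0, 0.2, 1.0]
```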
- **Adjusted reward scaling**: Reward scaling now considers current available power instead of the unit's ``max_power``, reducing reward distortion when availability limits capacity. Available power is now derived from ``offered_order_volume`` instead of ``unit.calculate_min_max_power``. Because dispatch is set before reward calculation, the previous method left available power at 0 whenever the unit was dispatched.
- **Update pytest dependency**: Tests now run with Pytest 9.
- **Add new docs feature**: Dependencies to build the docs can now be installed with ``pip install -e .[docs]``.
- **Fix tests on Windows**: One test was always failing on Windows; it has been fixed so that all tests succeed on all architectures.
- **Add single bid RL strategy:** Added a new reinforcement learning strategy that allows agents to submit bids based on one action value only that determines the price at which the full capacity is offered.
- **Bidding Strategy for Elastic Demand**: The new ``EnergyHeuristicElasticStrategy`` enables demand units to submit multiple bids that approximate a marginal utility curve, using either linear or isoelastic price elasticity models. Unlike other strategies, it does **not** rely on predefined volumes; bids are dynamically generated based on the unit's elasticity configuration. To use this strategy, set ``bidding_strategy`` to ``"demand_energy_heuristic_elastic"`` in the ``demand_units.csv`` file and specify the following parameters: ``elasticity`` (must be negative), ``elasticity_model`` (``"linear"`` or ``"isoelastic"``), ``num_bids``, and ``price`` (which acts as ``max_price``). The ``elasticity_model`` defines the shape of the demand curve, with ``"linear"`` producing a straight-line decrease and ``"isoelastic"`` generating a hyperbolic curve. ``num_bids`` determines how many bid steps are submitted, allowing control over the granularity of demand flexibility.
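A hypothetical sketch of how such bid steps could approximate a marginal utility curve (the formulas, anchoring, and the ``bid_prices`` helper are illustrative assumptions, not the actual ``EnergyHeuristicElasticStrategy`` implementation):

```python
def bid_prices(max_price: float, elasticity: float, num_bids: int,
               model: str = "linear") -> list[float]:
    """One price per bid step, decreasing with the cumulative demand share."""
    shares = [step / num_bids for step in range(1, num_bids + 1)]
    prices = []
    for share in shares:
        if model == "linear":
            # straight-line decrease from max_price (elasticity < 0)
            price = max_price * (1 + elasticity * share)
        else:
            # "isoelastic": hyperbolic curve, anchored here so that the
            # first bid step is offered at max_price
            price = max_price * (share / shares[0]) ** (1 / elasticity)
        prices.append(max(price, 0.0))
    return prices

linear_curve = bid_prices(100.0, -0.5, 4, "linear")   # → [87.5, 75.0, 62.5, 50.0]
iso_curve = bid_prices(100.0, -0.5, 4, "isoelastic")  # hyperbolic decay from 100.0
```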