
Commit e9f8376

release v0.6.0 (#764)
2 parents 092b828 + 98d7e45 commit e9f8376

File tree

2 files changed (+18, -5 lines)


MANIFEST.in

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+# SPDX-FileCopyrightText: ASSUME Developers
+#
+# SPDX-License-Identifier: AGPL-3.0-or-later
+
+global-exclude *.pyc __pycache__/
+
+# Exclude entire directories
+prune examples/
+prune tests/

docs/source/release_notes.rst

Lines changed: 9 additions & 5 deletions
@@ -12,6 +12,9 @@ Upcoming Release
 The features in this section are not released yet, but will be part of the next release! To use these features already, you have to install the main branch,
 e.g. ``pip install git+https://github.com/assume-framework/assume``
 
+0.6.0 - (18th March 2026)
+=========================
+
 **Improvements:**
 - **Deterministic behavior with seed setting**: Simulations are now deterministic by default for improved reproducibility. This is controlled via a seed setting in `config.yaml` files and therefore only applies to scenarios loaded via `load_scenario_folder`. Note that complete determinism is not guaranteed for all hardware and software configurations, especially with PyTorch-based learning strategies. It may also decrease reinforcement learning performance due to disabled non-deterministic optimizations.
 - ``seed`` not set at the top level of the config: sets the seed to a fixed default value (42) for deterministic behavior.
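To make the reproducibility claim concrete, here is a minimal sketch of what seeded determinism means in practice. The function name and the use of Python's ``random`` module are illustrative assumptions, not ASSUME's actual seeding code:

```python
import random

def run_simulation(seed: int = 42) -> list[float]:
    # Hypothetical stand-in for a scenario run: seeding a local RNG
    # makes every pseudo-random draw (and thus the result) repeatable.
    rng = random.Random(seed)
    return [round(rng.uniform(0.0, 100.0), 3) for _ in range(5)]

first = run_simulation(seed=42)
second = run_simulation(seed=42)
assert first == second  # same seed -> identical trajectory
```

Running the same scenario twice with the same seed yields identical draws; changing the seed (or leaving parts of the stack unseeded, as can happen with some GPU kernels) breaks this guarantee.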
@@ -20,13 +23,14 @@ Upcoming Release
 - **Delete environment.yaml**: The environment.yaml file has been removed from the repository to simplify maintenance, as it was completely redundant with the `pyproject.toml`. Users can, as before, create their own environment using the provided pip installation instructions, which allows for more flexibility and easier updates.
 - **Add validation for simulation setup**: Added checks to validate the simulation setup for common issues, such as missing bidding strategies or inconsistent market configurations. Warnings are issued to inform users of potential problems that could affect simulation results.
 - **Added reward calculation for unit operators**: Unit operators now have the opportunity to calculate rewards based on the returned orderbooks for their own purposes. This enables learning strategies at the unit operator level / portfolio learning strategies.
-- **Upgrade to Pandas 3**
-- **Structured Validation Error**: Introduces the new ValidationError to represent a failing validation. Since it derives from the base ValidationError, all existing error handling remains compatible, but users can now also catch this specific error type to handle validation errors separately if desired.
-- **Support for Python 3.14**
+- **Structured Validation Error**: Introduces the new ValidationError to represent a failing validation. Since it derives from the base ValidationError, all existing error handling remains compatible, but users can now also catch this specific error type to handle validation errors separately if desired.
+- **Add support for Pandas 3**
+- **Add support for Python 3.14**
 
 **Bug Fixes:**
 - **Fix buffer and update order**: Fixed the order of buffer writing and policy updating in the learning role to ensure that both follow the exact same order, which is necessary so that the correct data is used during updates. This bug will have compromised learning with very heterogeneous units after the last release.
 - **Fix data loss in RL learning role**: Fixed data loss in the RL learning role by implementing an atomic swap with carry-over for incomplete timesteps in the cache.
+- **Update notebooks to always install latest repo version from Google Colab**: This ensures that the latest version is always used.
 
 
 0.5.6 - (23rd December 2025)
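The compatibility claim in the Structured Validation Error note can be sketched as follows. The class names, the chosen base class, and the ``field`` attribute are hypothetical illustrations of the pattern (a structured subtype deriving from an existing error), not the framework's actual API:

```python
class ValidationError(ValueError):
    """Hypothetical base error that existing handlers already catch."""

class SetupValidationError(ValidationError):
    """Hypothetical structured subtype carrying extra context."""
    def __init__(self, message: str, field: str):
        super().__init__(message)
        self.field = field

def validate(config: dict) -> None:
    # Raise the structured subtype on a failing check.
    if "bidding_strategy" not in config:
        raise SetupValidationError("missing bidding strategy", field="bidding_strategy")

# Old-style handlers keep working, since the subtype is-a ValidationError...
try:
    validate({})
except ValidationError:
    caught_base = True

# ...while new code can target the subtype and inspect its extra context.
try:
    validate({})
except SetupValidationError as err:
    failing_field = err.field
```

Because exception matching walks the class hierarchy, every existing ``except ValidationError`` block continues to catch the new subtype unchanged.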
@@ -35,7 +39,7 @@ Upcoming Release
 **Bug Fixes:**
 
 - **Changed action clamping**: The action clamping was changed to extreme values defined by dicts. Instead of using the min and max of a forward pass in the NN, the clamping is now based on the activation function of the actor network. Previously, the output range was incorrectly assumed based only on the input, which failed when weights were negative due to Xavier initialization.
-- **Adjusted reward scaling**: Reward scaling now considers current available power instead of the units max_power, reducing reward distortion when availability limits capacity. Available power is now derived from offered_order_volume instead of unit.calculate_min_max_power. Because dispatch is set before reward calculation, the previous method left available power at 0 whenever the unit was dispatched.
+- **Adjusted reward scaling**: Reward scaling now considers current available power instead of the unit's max_power, reducing reward distortion when availability limits capacity. Available power is now derived from offered_order_volume instead of unit.calculate_min_max_power. Because dispatch is set before reward calculation, the previous method left available power at 0 whenever the unit was dispatched.
 - **Update pytest dependency**: Tests now run with Pytest 9
 - **Add new docs feature**: dependencies to build docs can now be installed with `pip install -e .[docs]`
 - **Fix tests on Windows**: One test was always failing on Windows, which is fixed so that all tests succeed on all archs
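The reward-scaling adjustment above can be illustrated with hypothetical numbers; the variable names mirror the note but none of this is the actual implementation:

```python
profit = 500.0           # hypothetical profit earned in one interval
max_power = 1000.0       # nameplate capacity (old scaling basis)
offered_volume = 200.0   # power actually offered (offered_order_volume)

# Old: dividing by nameplate capacity shrinks the reward whenever
# availability limits how much the unit could actually offer.
reward_old = profit / max_power       # 0.5
# New: scaling by the offered volume removes that distortion.
reward_new = profit / offered_volume  # 2.5
```

Under this reading, a unit that could only offer a fifth of its nameplate capacity is no longer penalized five-fold in its scaled reward.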
@@ -123,7 +127,7 @@ Upcoming Release
 - **Add single bid RL strategy:** Added a new reinforcement learning strategy that allows agents to submit bids based on one action value only that determines the price at which the full capacity is offered.
 - **Bidding Strategy for Elastic Demand**: The new `EnergyHeuristicElasticStrategy` enables demand units to submit multiple bids that approximate a marginal utility curve, using
 either linear or isoelastic price elasticity models. Unlike other strategies, it does **not** rely on predefined volumes—bids are dynamically generated based on the
-units elasticity configuration. To use this strategy, set `bidding_strategy` to `"demand_energy_heuristic_elastic"` in the `demand_units.csv` file and specify the following
+unit's elasticity configuration. To use this strategy, set `bidding_strategy` to `"demand_energy_heuristic_elastic"` in the `demand_units.csv` file and specify the following
 parameters: `elasticity` (must be negative), `elasticity_model` (`"linear"` or `"isoelastic"`), `num_bids`, and `price` (which acts as `max_price`). The `elasticity_model`
 defines the shape of the demand curve, with `"linear"` producing a straight-line decrease and `"isoelastic"` generating a hyperbolic curve. `num_bids` determines how many
 bid steps are submitted, allowing control over the granularity of demand flexibility.
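The difference between the two curve shapes can be sketched numerically. The price formulas below are illustrative assumptions; only the parameter names (``elasticity``, ``num_bids``, ``max_price``) come from the note above:

```python
def bid_prices(max_price: float, num_bids: int, elasticity: float, model: str) -> list[float]:
    # Sketch of descending price steps for a marginal-utility bid curve.
    if elasticity >= 0:
        raise ValueError("elasticity must be negative")
    steps = range(1, num_bids + 1)
    if model == "linear":
        # straight-line decrease from max_price toward zero
        return [max_price * (num_bids - i + 1) / num_bids for i in steps]
    if model == "isoelastic":
        # hyperbolic decrease: p_i = max_price * i**(1/elasticity)
        return [max_price * i ** (1 / elasticity) for i in steps]
    raise ValueError(f"unknown elasticity_model: {model!r}")

linear = bid_prices(100.0, 4, -1.0, "linear")      # equal 25-unit steps down
iso = bid_prices(100.0, 4, -1.0, "isoelastic")     # halves, then flattens out
```

With these assumed formulas, the linear model drops prices in equal increments, while the isoelastic model falls steeply at first and flattens, i.e. a hyperbolic shape, which matches the qualitative description in the note.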
