Commit e25da0f

improving notebooks on RL, more work to do [skip ci]
1 parent caf639e commit e25da0f

4 files changed (+322, -48 lines)

CHANGELOG.rst

Lines changed: 11 additions & 3 deletions
@@ -31,16 +31,24 @@ Change Log
 - [???] "asynch" multienv
 - [???] properly model interconnecting powerlines
 
-
+Work kind of in progress
+----------------------------------
 - TODO A number of max buses per sub
 - TODO in the runner, save multiple times the same scenario
 - TODO in the gym env, make the action_space and observation_space attribute
   filled automatically (see ray integration, it's boring to have to copy paste...)
 
+Next release
+---------------------------------
+- TODO Notebook for tf_agents
+- TODO Notebook for acme
+- TODO Notebook using "keras rl" (see https://keras.io/examples/rl/ppo_cartpole/)
+- TODO put the Grid2opEnvWrapper directly in grid2op as GymEnv
+- TODO example for MCTS https://github.com/bwfbowen/muax and https://github.com/google-deepmind/mctx
+
 [1.10.3] - 2024-xx-yy
 -------------------------
 - TODO Automatic "experimental_read_from_local_dir"
-- TODO Notebook for stable baselines
 
 - [BREAKING] `env.chronics_handler.set_max_iter(xxx)` is now a private function. Use
   `env.set_max_iter(xxx)` or even better `env.reset(options={"max step": xxx})`.
@@ -60,7 +68,7 @@ Change Log
 
 [1.10.2] - 2024-05-27
 -------------------------
-- [BREAKING] the `runner.run_one_episode` now returns an extra first argument:
+- [BREAKING] the `runner.run_one_episode` now returns an extra argument (first position):
   `chron_id, chron_name, cum_reward, timestep, max_ts = runner.run_one_episode()` which
   is consistent with `runner.run(...)` (previously it returned only
   `chron_name, cum_reward, timestep, max_ts = runner.run_one_episode()`)
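
For readers affected by the `set_max_iter` breaking change noted above, here is a minimal, hedged sketch (not part of the commit) of the new way to cap an episode's length; the environment name is only an example:

```python
import grid2op

# the environment name is only an example
env = grid2op.make("l2rpn_case14_sandbox")

# before: env.chronics_handler.set_max_iter(100)   (now a private function)
# after, per the changelog entry above: pass it at reset time
# (or, alternatively, env.set_max_iter(100))
obs = env.reset(options={"max step": 100})
```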

getting_started/11_IntegrationWithExistingRLFrameworks.ipynb

Lines changed: 7 additions & 6 deletions
@@ -29,7 +29,6 @@
 "\n",
 "Other RL frameworks are not covered here. If you already use them, let us know!\n",
 "- https://github.com/PaddlePaddle/PARL/blob/develop/README.md (used by the winner teams of Neurips competitions !) Work in progress.\n",
-"- https://github.com/wau/keras-rl2\n",
 "- https://github.com/deepmind/acme\n",
 "\n",
 "Note also that there is still the possibility to use past codes in the l2rpn-baselines repository: https://github.com/rte-france/l2rpn-baselines . This repository contains code snippets that can be reused to make really nice agents on the l2rpn competitions. You can try it out :-) \n",
@@ -85,11 +84,13 @@
 "- [Action space](#Action-space): basic usage of the action space, by removing redundant features (`gym_env.observation_space.ignore_attr`) or transforming a feature from a continuous space to a discrete space (`ContinuousToDiscreteConverter`)\n",
 "- [Observation space](#Observation-space): basic usage of the observation space, by removing redundant features (`keep_only_attr`) or scaling the data to a certain range (`ScalerAttrConverter`)\n",
 "- [Making the grid2op agent](#Making-the-grid2op-agent) explains how to make a grid2op agent once trained. Note that a more \"agent focused\" view is provided in the notebook [04_TrainingAnAgent](04_TrainingAnAgent.ipynb) !\n",
-"- [1) RLLIB](#1\\)-RLLIB): more advance usage for customizing the observation space (`gym_env.observation_space.reencode_space` and `gym_env.observation_space.add_key`) or modifying the type of gym attribute (`MultiToTupleConverter`) as well as an example of how to use RLLIB framework\n",
-"- [2)-Stable baselines](#2\\)-Stable-baselines): even more advanced usage for customizing the observation space by concatenating it to a single \"Box\" (instead of a dictionnary) thanks to `BoxGymObsSpace` and to use `BoxGymActSpace` if you are more focus on continuous actions and `MultiDiscreteActSpace` for discrete actions (**NB** in both case there will be loss of information as compared to regular grid2op actions! for example it will be harder to have a representation of the graph of the grid there)\n",
-"- [3) Tf Agents](#3\\)-Tf-Agents) explains how to convert the action space into a \"Discrete\" gym space thanks to `DiscreteActSpace`\n",
 "\n",
-"On each sections, we also explain concisely how to train the agent. Note that we did not spend any time on customizing the default agents and training scheme. It is then less than likely that these agents there"
+"To dive deeper and with proper \"hands on\" material, you can refer to one of the following notebooks that use real RL frameworks:\n",
+"\n",
+"1) RLLIB: see notebook [11_ray_integration](./11_ray_integration.ipynb) for more information about RLLIB\n",
+"2) Stable baselines: see notebook [11_stable_baselines3_integration](./11_stable_baselines3_integration.ipynb) for more information about stable-baselines3\n",
+"3) tf agents: coming soon\n",
+"4) acme: coming soon"
 ]
 },
 {
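
The overview above references several `grid2op.gym_compat` helpers. As a complement, here is a minimal, hedged sketch (not taken from the notebook) of how two of them can be wired together; the environment name and the attribute lists are illustrative assumptions:

```python
import grid2op
from grid2op.gym_compat import GymEnv, BoxGymObsSpace, DiscreteActSpace

# illustrative assumptions: environment name and the attributes kept
g2op_env = grid2op.make("l2rpn_case14_sandbox")
gym_env = GymEnv(g2op_env)

# keep only a few observation attributes, concatenated into a single Box
gym_env.observation_space = BoxGymObsSpace(g2op_env.observation_space,
                                           attr_to_keep=["rho", "gen_p", "load_p"])

# expose the actions as a flat Discrete space (here: bus reconfigurations only)
gym_env.action_space = DiscreteActSpace(g2op_env.action_space,
                                        attr_to_keep=["set_bus"])

obs, info = gym_env.reset()
print(gym_env.observation_space)
print(gym_env.action_space)
```
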
@@ -1316,7 +1317,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.13"
+"version": "3.8.10"
 }
 },
 "nbformat": 4,

getting_started/11_ray_integration.ipynb

Lines changed: 82 additions & 19 deletions
@@ -15,11 +15,17 @@
 "\n",
 "This notebook is more an \"example of what works\" rather than a deep dive tutorial.\n",
 "\n",
-"See stable-baselines3.readthedocs.io/ for a more detailed information.\n",
+"See https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments for more detailed information.\n",
 "\n",
-"This notebook is tested with grid2op 1.10.2 and stable baselines3 version 2.3.2 on an ubuntu 20.04 machine.\n",
+"See also https://docs.ray.io/en/latest/rllib/package_ref/doc/ray.rllib.algorithms.algorithm_config.AlgorithmConfig.html for other details\n",
 "\n",
+"This notebook is tested with grid2op 1.10.2 and ray 2.9 on an ubuntu 20.04 machine.\n",
 "\n",
+"- [0 Some tips to get started](#0-some-tips-to-get-started): a reminder of what you can do to make things work. Indeed, this notebook explains \"how to use grid2op with ray / rllib\" but not \"how to create a working agent able to operate a real powergrid in real time with ray / rllib\". We wish we could explain the latter...\n",
+"- [1 Create the \"Grid2opEnvWrapper\" class](#1-create-the-grid2openvwrapper-class): explains how to create the main grid2op env class that you can use as a \"gymnasium\" environment. \n",
+"- [2 Create an environment, and train a first policy](#2-create-an-environment-and-train-a-first-policy): shows how to create an environment from the class above (it is pretty easy)\n",
+"- [3 Evaluate the trained agent](#3-evaluate-the-trained-agent): shows how to evaluate the trained \"agent\"\n",
+"- [4 Some customizations](#4-some-customizations): explains how to perform some customization of your agent / environment / policy\n",
 "## 0 Some tips to get started\n",
 "\n",
 "<font color='red'> It is unlikely that \"simply\" using a RL algorithm on a grid2op environment will lead to good results for the vast majority of environments.</font>\n",
@@ -62,7 +68,7 @@
 "metadata": {},
 "source": [
 "\n",
-"## 1 Create the \"Grid2opEnv\" class\n",
+"## 1 Create the \"Grid2opEnvWrapper\" class\n",
 "\n",
 "In the next cell, we define a custom environment (that will internally use the `GymEnv` grid2op class). It is not strictly needed\n",
 "\n",
@@ -102,12 +108,14 @@
 "source": [
 "from gymnasium import Env\n",
 "from gymnasium.spaces import Discrete, MultiDiscrete, Box\n",
+"import json\n",
 "\n",
 "import ray\n",
 "from ray.rllib.algorithms.ppo import PPOConfig\n",
 "from ray.rllib.algorithms import ppo\n",
 "\n",
 "from typing import Dict, Literal, Any\n",
+"import copy\n",
 "\n",
 "import grid2op\n",
 "from grid2op.gym_compat import GymEnv, BoxGymObsSpace, DiscreteActSpace, BoxGymActSpace, MultiDiscreteActSpace\n",
@@ -201,9 +209,13 @@
 " else:\n",
 " raise NotImplementedError(f\"action type '{act_type}' is not currently supported.\")\n",
 " \n",
-" \n",
-" def reset(self, seed, options):\n",
+" def reset(self, seed=None, options=None):\n",
 " # use default _gym_env (from grid2op.gym_compat module)\n",
+" # NB: here you can also specify \"default options\" when you reset, for example:\n",
+" # - limiting the duration of the episode \"max step\"\n",
+" # - starting at different steps \"init ts\"\n",
+" # - study difficult scenario \"time serie id\"\n",
+" # - specify an initial state of your grid \"init state\"\n",
 " return self._gym_env.reset(seed=seed, options=options)\n",
 " \n",
 " def step(self, action):\n",
@@ -216,23 +228,23 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now we init ray, because we need to."
+"## 2 Create an environment, and train a first policy"
 ]
 },
 {
-"cell_type": "code",
-"execution_count": null,
+"cell_type": "markdown",
 "metadata": {},
-"outputs": [],
 "source": [
-"ray.init()"
+"Now we init ray, because we need to."
 ]
 },
 {
-"cell_type": "markdown",
+"cell_type": "code",
+"execution_count": null,
 "metadata": {},
+"outputs": [],
 "source": [
-"## 2 Make a default environment, and train a PPO agent for one iteration"
+"ray.init()"
 ]
 },
 {
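
Since the training cells of section 2 are not shown in this diff, here is a hedged sketch of what they could look like, mirroring the `PPOConfig` pattern used in the hunks below; `Grid2opEnvWrapper` is the class defined earlier in the notebook, and a recent `ray[rllib]` with the gymnasium API is assumed:

```python
# a minimal sketch, not part of the commit: build and train a first PPO policy
# on the Grid2opEnvWrapper defined above (ray[rllib] with the gymnasium API assumed)
config = (PPOConfig().training(gamma=0.9, lr=0.01)
          .environment(env=Grid2opEnvWrapper, env_config={})
          .resources(num_gpus=0)
          .env_runners(num_env_runners=1, num_envs_per_env_runner=1)
          .framework("tf2")
          )
rllib_algo = config.build()
for _ in range(2):
    # each call to train() runs one training iteration and returns a metrics dict
    print(rllib_algo.train())
```
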
@@ -279,7 +291,58 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 3 Train a PPO agent using 2 \"runners\" to make the rollouts\n",
+"## 3 Evaluate the trained agent\n",
+"\n",
+"This notebook is only a quick introduction to using ray / rllib with grid2op, so we don't recall everything that has been said previously.\n",
+"\n",
+"Please consult the section `0) Recommended initial steps` of the notebook [11_IntegrationWithExistingRLFrameworks](./11_IntegrationWithExistingRLFrameworks.ipynb) for more information.\n",
+"\n",
+"**TL;DR** grid2op offers the possibility to test your agent on scenarios / episodes different from the ones it has been trained on. We greatly encourage you to use this functionality.\n",
+"\n",
+"There are two main ways to evaluate your agent:\n",
+"\n",
+"- you stay in the \"gymnasium\" world (see [here](#31-staying-in-the-gymnasium-ecosystem)) and you evaluate your policy directly, just like you would with any other gymnasium compatible environment. Simple and easy, but without support for some grid2op features\n",
+"- you \"get back\" to the \"grid2op\" world (detailed [here](#32-using-the-grid2op-ecosystem)) by \"converting\" your NN policy into something that is able to output grid2op-like actions. This introduces yet another \"wrapper\", but you can benefit from all grid2op features, such as the `Runner` to save and inspect what your policy has done.\n",
+"\n",
+"<font color='red'> We show here just simple examples to \"get easily started\". For much better working agents, you can have a look at the l2rpn-baselines code. There you have classes that map the environment, the agents etc. to grid2op directly (you don't have to copy-paste any wrapper).</font> \n",
+"\n",
+"\n",
+"\n",
+"### 3.1 staying in the gymnasium ecosystem\n",
+"\n",
+"You can do pretty much what you want, but you have to do it yourself, or use any of the \"Wrappers\" available in gymnasium https://gymnasium.farama.org/main/api/wrappers/ (*eg* https://gymnasium.farama.org/main/api/wrappers/misc_wrappers/#gymnasium.wrappers.RecordEpisodeStatistics) or in your RL framework.\n",
+"\n",
+"For the sake of simplicity, we show how to do things \"manually\" even though we do not recommend doing it like that."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### 3.2 using the grid2op ecosystem"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## 4 some customizations\n",
+"\n",
+"### 4.1 Train a PPO agent using 2 \"runners\" to make the rollouts\n",
 "\n",
 "In this second example, we explain briefly how to train the model using 2 \"processes\". That is, the agent will interact with 2 environments at the same time during the \"rollout\" phases.\n",
 "\n",
@@ -296,7 +359,7 @@
 "\n",
 "# use multiple runners\n",
 "config2 = (PPOConfig().training(gamma=0.9, lr=0.01)\n",
-" .environment(env=Grid2opEnv, env_config={})\n",
+" .environment(env=Grid2opEnvWrapper, env_config={})\n",
 " .resources(num_gpus=0)\n",
 " .env_runners(num_env_runners=2, num_envs_per_env_runner=1, num_cpus_per_env_runner=1)\n",
 " .framework(\"tf2\")\n",
@@ -326,7 +389,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 4 Use non default parameters to make the grid2op environment\n",
+"### 4.2 Use non default parameters to make the grid2op environment\n",
 "\n",
 "In this third example, we will train a policy using the \"box\" action space, and on another environment (`l2rpn_idf_2023` instead of `l2rpn_case14_sandbox`)"
@@ -345,7 +408,7 @@
 " \"act_type\": \"box\",\n",
 " }\n",
 "config3 = (PPOConfig().training(gamma=0.9, lr=0.01)\n",
-" .environment(env=Grid2opEnv, env_config=env_config)\n",
+" .environment(env=Grid2opEnvWrapper, env_config=env_config)\n",
 " .resources(num_gpus=0)\n",
 " .env_runners(num_env_runners=2, num_envs_per_env_runner=1, num_cpus_per_env_runner=1)\n",
 " .framework(\"tf2\")\n",
@@ -392,7 +455,7 @@
 " \"act_type\": \"multi_discrete\",\n",
 " }\n",
 "config4 = (PPOConfig().training(gamma=0.9, lr=0.01)\n",
-" .environment(env=Grid2opEnv, env_config=env_config4)\n",
+" .environment(env=Grid2opEnvWrapper, env_config=env_config4)\n",
 " .resources(num_gpus=0)\n",
 " .env_runners(num_env_runners=2, num_envs_per_env_runner=1, num_cpus_per_env_runner=1)\n",
 " .framework(\"tf2\")\n",
@@ -422,7 +485,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 5 Customize the policy (number of layers, size of layers etc.)\n",
+"### 4.3 Customize the policy (number of layers, size of layers etc.)\n",
 "\n",
 "This notebook does not aim at covering all possibilities offered by ray / rllib. For that you need to refer to the ray / rllib documentation.\n",
 "\n",
@@ -439,7 +502,7 @@
 "\n",
 "# Use a \"Box\" action space (mainly to use redispatching, curtailment and storage units)\n",
 "config5 = (PPOConfig().training(gamma=0.9, lr=0.01)\n",
-" .environment(env=Grid2opEnv, env_config={})\n",
+" .environment(env=Grid2opEnvWrapper, env_config={})\n",
 " .resources(num_gpus=0)\n",
 " .env_runners(num_env_runners=2, num_envs_per_env_runner=1, num_cpus_per_env_runner=1)\n",
 " .framework(\"tf2\")\n",

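Finally, since the evaluation code cells added in section 3 are still empty in this commit, here is a hedged sketch of what the "manual" gymnasium-style evaluation of section 3.1 could look like; `rllib_algo` (a trained RLlib algorithm) and the `Grid2opEnvWrapper` class, including its no-argument constructor, are assumptions based on the cells above:

```python
# a minimal sketch, not part of the commit: manual evaluation in the gymnasium world,
# assuming `Grid2opEnvWrapper` (defined in the notebook) and a trained algo `rllib_algo`
gym_env = Grid2opEnvWrapper()

nb_episode_test = 2
ep_rewards = []
for ep_id in range(nb_episode_test):
    obs, info = gym_env.reset(seed=ep_id, options=None)
    done, total_reward = False, 0.0
    while not done:
        # greedy (non-exploring) action from the trained policy
        act = rllib_algo.compute_single_action(obs, explore=False)
        obs, reward, terminated, truncated, info = gym_env.step(act)
        total_reward += reward
        done = terminated or truncated
    ep_rewards.append(total_reward)
print(f"Average reward over {nb_episode_test} episodes: {sum(ep_rewards) / nb_episode_test:.2f}")
```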