# Example: PPO on Countdown dataset with experience replay

In this example, we follow the main settings in [`ppo_countdown`](../ppo_countdown/README.md),
and demonstrate the **experience replay** mechanisms in Trinity-RFT.


### Motivations

One motivation for experience replay is to improve learning efficiency by reusing rollout samples across multiple training steps, which is especially valuable when rollout (i.e., agent-environment interaction) is slow or expensive.
Moreover, experience replay offers a straightforward way to fill pipeline bubbles in the trainer (caused by discrepancies between the explorer's and the trainer's speeds) with useful computation, improving hardware utilization in the disaggregated architecture adopted by Trinity (and many other RL systems).

### Implementation and configuration

The priority queue buffer in Trinity offers seamless support for experience replay.
Whenever a batch of the highest-priority samples is retrieved from the buffer,
a **priority function** updates their priority scores and decides which ones should be put back into the buffer (after `reuse_cooldown_time` seconds have passed) for replay.
Users of Trinity can implement and register their own customized priority functions,
which can then be selected via the `priority_fn` field in the YAML config.
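
To make this retrieval-and-replay cycle concrete, here is a minimal Python sketch of how such a priority queue buffer could behave. It is not Trinity's actual implementation: the class `ReplayQueue`, the `get_batch` method, and the convention that a priority function takes a sample and returns `(new_priority, replay)` are all illustrative assumptions, and the cooldown delay is only noted in a comment.

```python
import heapq


class ReplayQueue:
    """Minimal sketch of a priority-queue replay buffer (not Trinity's implementation)."""

    def __init__(self, priority_fn):
        self._heap = []  # max-heap emulated with negated priorities
        self._priority_fn = priority_fn

    def put(self, sample, priority):
        # `id(sample)` breaks ties so sample dicts are never compared directly.
        heapq.heappush(self._heap, (-priority, id(sample), sample))

    def get_batch(self, batch_size):
        """Pop the highest-priority samples and decide which ones to replay."""
        popped = [heapq.heappop(self._heap)
                  for _ in range(min(batch_size, len(self._heap)))]
        samples = [entry[2] for entry in popped]
        for sample in samples:
            sample["use_count"] = sample.get("use_count", 0) + 1
            new_priority, replay = self._priority_fn(sample)
            if replay:
                # Trinity delays re-insertion by `reuse_cooldown_time` seconds;
                # that delay is omitted from this sketch.
                self.put(sample, new_priority)
        return samples
```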

We present an example config file in [`countdown.yaml`](./countdown.yaml),
where 1 GPU is allocated to the explorer and 6 GPUs to the trainer,
simulating a scenario where agent-environment interaction is slow and rollout data is scarce.
Important config parameters for experience replay include the following (illustrated in the snippet after this list):
* `buffer.trainer_input.experience_buffer.storage_type`: set to `queue`
* `buffer.trainer_input.experience_buffer.replay_buffer`
  * `enable`: set to `true` to enable the priority queue buffer
  * `reuse_cooldown_time`: delay (in seconds) before a retrieved sample is put back into the buffer; must be set explicitly
  * `priority_fn`: name of the priority function
  * `priority_fn_args`: additional arguments for the priority function
* `synchronizer.sync_style`: set to `dynamic_by_explorer`, which allows the trainer to run extra training steps as long as the priority queue buffer is non-empty
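
For orientation, the nesting of these fields in the YAML config might look like the sketch below. The structure mirrors the dotted paths listed above, but the concrete values (the 30-second cooldown and the priority-function arguments) are illustrative placeholders rather than values copied from [`countdown.yaml`](./countdown.yaml).

```yaml
buffer:
  trainer_input:
    experience_buffer:
      storage_type: queue
      replay_buffer:
        enable: true
        reuse_cooldown_time: 30      # illustrative value; must be set explicitly
        priority_fn: decay_limit_randomization
        priority_fn_args:            # illustrative arguments for this function
          decay: 0.1
          sigma: 0.0
          use_count_limit: 2
synchronizer:
  sync_style: dynamic_by_explorer
```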

The priority function used in this example is named `decay_limit_randomization`.
Its logic is as follows (a Python sketch appears after this list):
* The priority score is calculated as `model_version - decay * use_count`, i.e., fresher and less-used samples are prioritized;
* If `sigma` is non-zero, the priority score is further perturbed by random Gaussian noise with standard deviation `sigma`;
* A retrieved sample is put back into the buffer if and only if its use count has not exceeded `use_count_limit`.
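
The sketch below captures this logic in Python; the function signature, the sample field names (`model_version`, `use_count`), and the default argument values are illustrative assumptions rather than Trinity's actual code.

```python
import random


def decay_limit_randomization(sample: dict,
                              decay: float = 0.1,
                              sigma: float = 0.0,
                              use_count_limit: int = 2):
    """Sketch of the decay/limit/randomization logic described above.

    Returns the updated priority score and whether the sample should be
    put back into the buffer for replay. Field names and defaults are
    illustrative assumptions, not Trinity's actual interface.
    """
    # Fresher (higher model_version) and less-used samples get higher priority.
    priority = sample["model_version"] - decay * sample["use_count"]
    # Optional Gaussian perturbation to randomize the ordering.
    if sigma != 0.0:
        priority += random.gauss(0.0, sigma)
    # Replay only while the use count has not exceeded the limit.
    replay = sample["use_count"] <= use_count_limit
    return priority, replay
```

With the buffer sketch shown earlier, such a function could be plugged in directly, e.g. `ReplayQueue(functools.partial(decay_limit_randomization, decay=0.1))`, which roughly corresponds to supplying `priority_fn_args` in the config.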


### Experimental results

We run an experiment with this config and compare it against a baseline config that uses each rollout sample exactly once for training.
The first and second figures below, which use the rollout step and wall-clock time as the X-axis respectively, confirm the benefits brought by experience replay (with default hyperparameters).
This is partly because more training steps can be taken, as shown in the third figure (where the X-axis is the rollout step).


<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_explore_step.png" alt="score-vs-explore-step" width="600" />

<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_X_time.png" alt="score-vs-wall-clock-time" width="600" />

<img src="../../docs/sphinx_doc/assets/example_experience_replay/exp_replay_model_version.png" alt="model-version" width="600" />