Commit f59daa6

🐛 FIX: Cross-reference for training agents not working fixed (#1493)

1 parent: 9e20c9f
2 files changed: +3 -3 lines

docs/introduction/record_agent.md (1 addition, 1 deletion)

```diff
@@ -299,7 +299,7 @@ for episode_num in range(num_training_episodes):
 
 * [Training an agent](train_agent) - Learn how to build the agents you're recording
 * [Basic usage](basic_usage) - Understand Gymnasium fundamentals
-* [More training tutorials](../tutorials/training_agents) - Advanced training techniques
+* {doc}`More training tutorials </tutorials/training_agents/index>` - Advanced training techniques
 * [Custom environments](create_custom_env) - Create your own environments to record
 
 Recording agent behavior is an essential skill for RL practitioners. It helps you understand what your agent is actually learning, debug training issues, and communicate results effectively. Start with simple recording setups and gradually add more sophisticated analysis as your projects grow in complexity!
```

docs/introduction/train_agent.md (2 additions, 2 deletions)

```diff
@@ -35,7 +35,7 @@ For Blackjack:
 
 ---
 
-This page provides a short outline of how to train an agent for a Gymnasium environment. We'll use tabular Q-learning to solve Blackjack-v1. For complete tutorials with other environments and algorithms, see [training tutorials](../tutorials/training_agents). Please read [basic usage](basic_usage) before this page.
+This page provides a short outline of how to train an agent for a Gymnasium environment. We'll use tabular Q-learning to solve Blackjack-v1. For complete tutorials with other environments and algorithms, see {doc}`training tutorials </tutorials/training_agents/index>`. Please read [basic usage](basic_usage) before this page.
 
 ## About the Environment: Blackjack
 
@@ -438,7 +438,7 @@ For more information, see:
 
 * [Basic Usage](basic_usage) - Understanding Gymnasium fundamentals
 * [Custom Environments](create_custom_env) - Building your own RL problems
-* [Complete Training Tutorials](../tutorials/training_agents) - More algorithms and environments
+* {doc}`Complete Training Tutorials </tutorials/training_agents/index>` - More algorithms and environments
 * [Recording Agent Behavior](record_agent) - Saving videos and performance data
 
 The key insight from this tutorial is that RL agents learn through trial and error, gradually building up knowledge about what actions work best in different situations. Q-learning provides a systematic way to learn this knowledge, balancing exploration of new possibilities with exploitation of current knowledge.
```
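In each hunk, the fix swaps a relative Markdown link for a MyST `{doc}` cross-reference role. A minimal sketch of the difference, assuming a standard Sphinx build with the MyST parser (the surrounding list item is taken from the diff above):

```markdown
<!-- Relative Markdown link: resolved as a file path relative to the
     current source file, which can fail when the target is a directory
     whose page is actually an index document -->
* [More training tutorials](../tutorials/training_agents) - Advanced training techniques

<!-- MyST {doc} role: a leading slash makes Sphinx resolve the target
     against the documentation source root, independent of where the
     referencing file lives -->
* {doc}`More training tutorials </tutorials/training_agents/index>` - Advanced training techniques
```

With the `{doc}` role, Sphinx also reports a build warning if the target document does not exist, so broken cross-references surface at build time rather than as dead links.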
