
Commit 44ed927

Fix typos in documentation and JSON spec (#1530)
Description: This pull request corrects minor typos in two files:

- In packages/tasks/src/tasks/placeholder/spec/output.json, the word "outputed" was corrected to "outputted" in the description field.
- In packages/tasks/src/tasks/reinforcement-learning/about.md, the word "evalute" was corrected to "evaluate" in the explanation of the training cycle.
1 parent d5e865f commit 44ed927

2 files changed (+2, -2 lines)

packages/tasks/src/tasks/placeholder/spec/output.json

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
     "properties": {
       "meaningful_output_name": {
         "type": "string",
-        "description": "TODO: Describe what is outputed by the inference here"
+        "description": "TODO: Describe what is outputted by the inference here"
       }
     },
     "required": ["meaningfulOutputName"]

packages/tasks/src/tasks/reinforcement-learning/about.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ Observations and states are the information our agent gets from the environment.
 
 Inference in reinforcement learning differs from other modalities, in which there's a model and test data. In reinforcement learning, once you have trained an agent in an environment, you try to run the trained agent for additional steps to get the average reward.
 
-A typical training cycle consists of gathering experience from the environment, training the agent, and running the agent on a test environment to obtain average reward. Below there's a snippet on how you can interact with the environment using the `gymnasium` library, train an agent using `stable-baselines3`, evalute the agent on test environment and infer actions from the trained agent.
+A typical training cycle consists of gathering experience from the environment, training the agent, and running the agent on a test environment to obtain average reward. Below there's a snippet on how you can interact with the environment using the `gymnasium` library, train an agent using `stable-baselines3`, evaluate the agent on test environment and infer actions from the trained agent.
 
 ```python
 # Here we are running 20 episodes of CartPole-v1 environment, taking random actions
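The hunk cuts off at the first line of that snippet, so the full example from about.md is not visible here. As a rough sketch of the workflow the edited paragraph describes (interacting with CartPole-v1 via `gymnasium`, training with `stable-baselines3`, evaluating on a test environment, and predicting actions), something like the following would work; the choice of PPO, the timestep budget, and the episode counts are illustrative assumptions, not the actual contents of the file:

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Interact with the environment: run 20 episodes of CartPole-v1 with random actions.
env = gym.make("CartPole-v1")
for _ in range(20):
    observation, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # sample a random action
        observation, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated

# Train an agent. PPO and the 10_000-step budget are arbitrary choices for illustration.
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# Evaluate the trained agent on a separate test environment to get the average reward.
eval_env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")

# Infer an action from the trained agent for a single observation.
obs, info = eval_env.reset()
action, _ = model.predict(obs, deterministic=True)
print("predicted action:", action)
```

Here `evaluate_policy` averages episode rewards over the evaluation environment, which matches the "average reward" framing in the corrected paragraph.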

0 commit comments