
Commit 2034ac0

kevinzakka authored and copybara-github committed

Improve README and CONTRIBUTING.

PiperOrigin-RevId: 718055094
Change-Id: I94ce4c8495794d6f7313e9f2f0dc72c9cc19fbc2

Parent: 7dc538b

File tree: 2 files changed, +50 −34 lines


CONTRIBUTING.md

Lines changed: 30 additions & 5 deletions
@@ -12,6 +12,26 @@ You generally only need to submit a CLA once, so if you've already submitted one
 (even if it was for a different project), you probably don't need to do it
 again.
 
+## Adding New Tasks
+
+We welcome contributions of new tasks, particularly those that demonstrate:
+
+- Impressive capabilities in simulation
+- Successful transfer to real robots
+
+When submitting a new task, please ensure that:
+
+1. The task is well documented, with clear objectives and reward structure.
+2. The relevant RL hyperparameters are added to the config file so that the
+   result can be independently reproduced.
+3. The task works across at least 3 seeds.
+4. A video of the behavior is included.
+5. The new task passes all the tests.
+
+For an example of a well-structured task contribution, see @Andrew-Luo1's
+excellent [ALOHA Handover Task
+PR](https://github.com/google-deepmind/mujoco_playground/pull/29).
+
 ## Code reviews
 
 All submissions, including submissions by project members, require review. We
@@ -24,7 +44,6 @@ information on using pull requests.
 This project follows [Google's Open Source Community
 Guidelines](https://opensource.google/conduct/).
 
-
 ## Linting and Code Health
 
 Before submitting a PR, please run:
@@ -39,12 +58,18 @@ or you can run manually
 
 ```shell
 pyink .
-
 isort .
-
 pylint . --rcfile=pylintrc
-
 pytype .
 ```
 
-And resolve any issues that pop up.
+and resolve any issues that pop up.
+
+## Testing
+
+To run the tests, use the following command:
+
+```shell
+pytest
+```
+
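Checklist items 2 and 3 above (a reproducible config entry and stability across at least 3 seeds) can be sketched with a self-contained stub. Everything here is hypothetical: `NEW_TASK_CONFIG` and `StubEnv` are illustrative stand-ins, not the Playground's actual config schema or environment API, and a real check would train the actual task once per seed.

```python
import random

# Hypothetical hyperparameter entry for a new task (illustrative names only,
# not the Playground's actual config schema).
NEW_TASK_CONFIG = {
    "num_seeds": 3,          # checklist item 3: at least 3 seeds
    "episode_length": 100,
    "success_threshold": 30.0,
}


class StubEnv:
    """Toy stand-in for a task: its episode return depends only on the seed."""

    def __init__(self, seed: int, episode_length: int):
        self._rng = random.Random(seed)
        self._episode_length = episode_length

    def episode_return(self) -> float:
        # Stand-in for "train the task with this seed, then evaluate it".
        return sum(self._rng.uniform(0.0, 1.0) for _ in range(self._episode_length))


def works_across_seeds(config: dict = NEW_TASK_CONFIG) -> bool:
    """True if every seed's episode return clears the success threshold."""
    returns = [
        StubEnv(seed, config["episode_length"]).episode_return()
        for seed in range(config["num_seeds"])
    ]
    return all(r >= config["success_threshold"] for r in returns)
```

The same shape of check applies to a real submission: load the task, run every seed listed in its config, and verify that each one clears the task's success criterion.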

README.md

Lines changed: 20 additions & 29 deletions
@@ -40,38 +40,29 @@ For vision-based environments, please refer to the installation instructions in
 
 ## Getting started
 
-To try out MuJoCo Playground locally on a simple locomotion environment, you can run the following:
-
-```py
-import jax
-import jax.numpy as jp
-from mujoco_playground import registry
-
-env = registry.load('Go1JoystickFlatTerrain')
-state = jax.jit(env.reset)(jax.random.PRNGKey(0))
-print(state.obs)
-state = jax.jit(env.step)(state, jp.zeros(env.action_size))
-print(state.obs)
-```
-
-For detailed tutorials on using MuJoCo Playground, see:
-
-1. [Intro. to the Playground with DM Control Suite](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/dm_control_suite.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/dm_control_suite.ipynb)
-2. [Locomotion Environments](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/locomotion.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/locomotion.ipynb)
-3. [Manipulation Environments](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/manipulation.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/manipulation.ipynb)
-
-For tutorials on MuJoCo Playground with Madrona-MJX batch rendering, we offer two types of colabs. The first allows you to install Madrona-MJX directly in a GPU colab instance and run vision-based cartpole!
-
-1. [Training CartPole from Vision](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_1_t4.ipynb) on a Colab T4 Instance [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_1_t4.ipynb)
-
-Two additional colabs require local runtimes with Madrona-MJX installed locally (see [Madrona-MJX](https://github.com/shacklettbp/madrona_mjx?tab=readme-ov-file#installation) for installation instructions):
-
-1. [Training CartPole from Vision (Local Runtime)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_1.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_1.ipynb)
-2. [Robotic Manipulation from Vision (Local Runtime)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_2.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_2.ipynb)
+### Basic Tutorials
+| Colab | Description |
+|-------|-------------|
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/dm_control_suite.ipynb) | Introduction to the Playground with DM Control Suite |
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/locomotion.ipynb) | Locomotion Environments |
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/manipulation.ipynb) | Manipulation Environments |
+
+### Vision-Based Tutorials (GPU Colab)
+| Colab | Description |
+|-------|-------------|
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_1_t4.ipynb) | Training CartPole from Vision (T4 Instance) |
+
+### Local Runtime Tutorials
+*Requires local Madrona-MJX installation*
+
+| Colab | Description |
+|-------|-------------|
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_1.ipynb) | Training CartPole from Vision |
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/mujoco_playground/blob/main/learning/notebooks/training_vision_2.ipynb) | Robotic Manipulation from Vision |
 
 ## How can I contribute?
 
-Get started by installing the library and exploring its features! Found a bug? Report it in the issue tracker. Interested in contributing? If you’re a developer with robotics experience, we’d love your help—check out the [contribution guidelines](CONTRIBUTING.md) for more details.
+Get started by installing the library and exploring its features! Found a bug? Report it in the issue tracker. Interested in contributing? If you are a developer with robotics experience, we would love your help—check out the [contribution guidelines](CONTRIBUTING.md) for more details.
 
 ## Citation
 