Commit 2083a47

Author: The TensorFlow Datasets Authors

Automated documentation update.

PiperOrigin-RevId: 627819692

1 parent 3058096 · commit 2083a47

File tree

4 files changed: +263 −0 lines

- docs/catalog/_toc.yaml
- docs/catalog/overview.md
- docs/catalog/plex_robosuite.md
- docs/catalog/spoc.md

docs/catalog/_toc.yaml

Lines changed: 6 additions & 0 deletions

```diff
@@ -1062,11 +1062,17 @@ toc:
     title: nyu_franka_play_dataset_converted_externally_to_rlds
   - path: /datasets/catalog/nyu_rot_dataset_converted_externally_to_rlds
     title: nyu_rot_dataset_converted_externally_to_rlds
+  - path: /datasets/catalog/plex_robosuite
+    status: nightly
+    title: plex_robosuite
   - path: /datasets/catalog/robo_set
     status: nightly
     title: robo_set
   - path: /datasets/catalog/roboturk
     title: roboturk
+  - path: /datasets/catalog/spoc
+    status: nightly
+    title: spoc
   - path: /datasets/catalog/stanford_hydra_dataset_converted_externally_to_rlds
     title: stanford_hydra_dataset_converted_externally_to_rlds
   - path: /datasets/catalog/stanford_kuka_multimodal_dataset_converted_externally_to_rlds
```

docs/catalog/overview.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -670,9 +670,13 @@ for ex in tfds.load('cifar10', split='train'):
 *   [`nyu_door_opening_surprising_effectiveness`](nyu_door_opening_surprising_effectiveness.md)
 *   [`nyu_franka_play_dataset_converted_externally_to_rlds`](nyu_franka_play_dataset_converted_externally_to_rlds.md)
 *   [`nyu_rot_dataset_converted_externally_to_rlds`](nyu_rot_dataset_converted_externally_to_rlds.md)
+*   [`plex_robosuite`](plex_robosuite.md)
+    <span class="material-icons" title="Available only in the tfds-nightly package">nights_stay</span>
 *   [`robo_set`](robo_set.md)
     <span class="material-icons" title="Available only in the tfds-nightly package">nights_stay</span>
 *   [`roboturk`](roboturk.md)
+*   [`spoc`](spoc.md)
+    <span class="material-icons" title="Available only in the tfds-nightly package">nights_stay</span>
 *   [`stanford_hydra_dataset_converted_externally_to_rlds`](stanford_hydra_dataset_converted_externally_to_rlds.md)
 *   [`stanford_kuka_multimodal_dataset_converted_externally_to_rlds`](stanford_kuka_multimodal_dataset_converted_externally_to_rlds.md)
 *   [`stanford_mask_vit_converted_externally_to_rlds`](stanford_mask_vit_converted_externally_to_rlds.md)
```

docs/catalog/plex_robosuite.md (new file)

Lines changed: 111 additions & 0 deletions

<div itemscope itemtype="http://schema.org/Dataset">
  <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
    <meta itemprop="name" content="TensorFlow Datasets" />
  </div>
  <meta itemprop="name" content="plex_robosuite" />
  <meta itemprop="description" content="A dataset of high-quality demonstration trajectories for Robosuite&#x27;s Door, Stack, PickPlaceMilk, PickPlaceBread, PickPlaceCereal, and NutAssemblyRound tasks, 75 demonstrations each.&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load(&#x27;plex_robosuite&#x27;, split=&#x27;train&#x27;)&#10;for ex in ds.take(4):&#10;  print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;information on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
  <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/plex_robosuite" />
  <meta itemprop="sameAs" content="https://microsoft.github.io/PLEX/" />
  <meta itemprop="citation" content="https://doi.org/10.48550/arXiv.2303.08789" />
</div>

# `plex_robosuite`

Note: This dataset was added recently and is only available in our
`tfds-nightly` package
<span class="material-icons" title="Available only in the tfds-nightly package">nights_stay</span>.

*   **Description**:

A dataset of high-quality demonstration trajectories for Robosuite's Door,
Stack, PickPlaceMilk, PickPlaceBread, PickPlaceCereal, and NutAssemblyRound
tasks, 75 demonstrations each.

*   **Homepage**:
    [https://microsoft.github.io/PLEX/](https://microsoft.github.io/PLEX/)

*   **Source code**:
    [`tfds.robotics.rtx.PlexRobosuite`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

*   **Versions**:

    *   **`0.1.0`** (default): Initial release.

*   **Download size**: `Unknown size`

*   **Dataset size**: `Unknown size`

*   **Auto-cached**
    ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
    Unknown

*   **Splits**:

Split | Examples
:---- | -------:

*   **Feature structure**:

```python
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'episode_id': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=float64),
        'discount': Scalar(shape=(), dtype=float32),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_embedding': Tensor(shape=(512,), dtype=float32),
        'language_instruction': string,
        'observation': FeaturesDict({
            'image': Image(shape=(128, 128, 3), dtype=uint8),
            'state': Tensor(shape=(32,), dtype=float64),
            'wrist_image': Image(shape=(128, 128, 3), dtype=uint8),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```

*   **Feature documentation**:

Feature                       | Class        | Shape         | Dtype   | Description
:---------------------------- | :----------- | :------------ | :------ | :----------
                              | FeaturesDict |               |         |
episode_metadata              | FeaturesDict |               |         |
episode_metadata/episode_id   | Tensor       |               | string  |
steps                         | Dataset      |               |         |
steps/action                  | Tensor       | (7,)          | float64 |
steps/discount                | Scalar       |               | float32 |
steps/is_first                | Tensor       |               | bool    |
steps/is_last                 | Tensor       |               | bool    |
steps/is_terminal             | Tensor       |               | bool    |
steps/language_embedding      | Tensor       | (512,)        | float32 |
steps/language_instruction    | Tensor       |               | string  |
steps/observation             | FeaturesDict |               |         |
steps/observation/image       | Image        | (128, 128, 3) | uint8   |
steps/observation/state       | Tensor       | (32,)         | float64 |
steps/observation/wrist_image | Image        | (128, 128, 3) | uint8   |
steps/reward                  | Scalar       |               | float32 |

*   **Supervised keys** (See
    [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
    `None`

*   **Figure**
    ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
    Not supported.

*   **Examples**
    ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)):
    Missing.

*   **Citation**:

```
https://doi.org/10.48550/arXiv.2303.08789
```

docs/catalog/spoc.md (new file)

Lines changed: 142 additions & 0 deletions

<div itemscope itemtype="http://schema.org/Dataset">
  <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
    <meta itemprop="name" content="TensorFlow Datasets" />
  </div>
  <meta itemprop="name" content="spoc" />
  <meta itemprop="description" content="&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load(&#x27;spoc&#x27;, split=&#x27;train&#x27;)&#10;for ex in ds.take(4):&#10;  print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;information on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
  <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/spoc" />
  <meta itemprop="sameAs" content="https://spoc-robot.github.io/" />
  <meta itemprop="citation" content="@article{spoc2023,&#10;  author  = {Kiana Ehsani, Tanmay Gupta, Rose Hendrix, Jordi Salvador, Luca Weihs, Kuo-Hao Zeng, Kunal Pratap Singh, Yejin Kim, Winson Han, Alvaro Herrasti, Ranjay Krishna, Dustin Schwenk, Eli VanderBilt, Aniruddha Kembhavi},&#10;  title   = {Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World},&#10;  journal = {arXiv},&#10;  year    = {2023},&#10;  eprint  = {2312.02976},&#10;}" />
</div>

# `spoc`

Note: This dataset was added recently and is only available in our
`tfds-nightly` package
<span class="material-icons" title="Available only in the tfds-nightly package">nights_stay</span>.

*   **Description**:

*   **Homepage**: [https://spoc-robot.github.io/](https://spoc-robot.github.io/)

*   **Source code**:
    [`tfds.robotics.rtx.Spoc`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

*   **Versions**:

    *   **`0.1.0`** (default): Initial release.

*   **Download size**: `Unknown size`

*   **Dataset size**: `Unknown size`

*   **Auto-cached**
    ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
    Unknown

*   **Splits**:

Split | Examples
:---- | -------:

*   **Feature structure**:

```python
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': string,
        'task_target_split': string,
        'task_type': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(9,), dtype=float32),
        'discount': Scalar(shape=(), dtype=float32),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_instruction': string,
        'observation': FeaturesDict({
            'an_object_is_in_hand': Scalar(shape=(), dtype=bool),
            'house_index': Scalar(shape=(), dtype=int64),
            'hypothetical_task_success': Scalar(shape=(), dtype=bool),
            'image': Image(shape=(224, 384, 3), dtype=uint8),
            'image_manipulation': Image(shape=(224, 384, 3), dtype=uint8),
            'last_action_is_random': Scalar(shape=(), dtype=bool),
            'last_action_str': string,
            'last_action_success': Scalar(shape=(), dtype=bool),
            'last_agent_location': Tensor(shape=(6,), dtype=float32),
            'manip_object_bbox': Tensor(shape=(10,), dtype=float32),
            'minimum_l2_target_distance': Scalar(shape=(), dtype=float32),
            'minimum_visible_target_alignment': Scalar(shape=(), dtype=float32),
            'nav_object_bbox': Tensor(shape=(10,), dtype=float32),
            'relative_arm_location_metadata': Tensor(shape=(4,), dtype=float32),
            'room_current_seen': Scalar(shape=(), dtype=bool),
            'rooms_seen': Scalar(shape=(), dtype=int64),
            'visible_target_4m_count': Scalar(shape=(), dtype=int64),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```

*   **Feature documentation**:

Feature                                            | Class        | Shape         | Dtype   | Description
:------------------------------------------------- | :----------- | :------------ | :------ | :----------
                                                   | FeaturesDict |               |         |
episode_metadata                                   | FeaturesDict |               |         |
episode_metadata/file_path                         | Tensor       |               | string  |
episode_metadata/task_target_split                 | Tensor       |               | string  |
episode_metadata/task_type                         | Tensor       |               | string  |
steps                                              | Dataset      |               |         |
steps/action                                       | Tensor       | (9,)          | float32 |
steps/discount                                     | Scalar       |               | float32 |
steps/is_first                                     | Tensor       |               | bool    |
steps/is_last                                      | Tensor       |               | bool    |
steps/is_terminal                                  | Tensor       |               | bool    |
steps/language_instruction                         | Tensor       |               | string  |
steps/observation                                  | FeaturesDict |               |         |
steps/observation/an_object_is_in_hand             | Scalar       |               | bool    |
steps/observation/house_index                      | Scalar       |               | int64   |
steps/observation/hypothetical_task_success        | Scalar       |               | bool    |
steps/observation/image                            | Image        | (224, 384, 3) | uint8   |
steps/observation/image_manipulation               | Image        | (224, 384, 3) | uint8   |
steps/observation/last_action_is_random            | Scalar       |               | bool    |
steps/observation/last_action_str                  | Tensor       |               | string  |
steps/observation/last_action_success              | Scalar       |               | bool    |
steps/observation/last_agent_location              | Tensor       | (6,)          | float32 |
steps/observation/manip_object_bbox                | Tensor       | (10,)         | float32 |
steps/observation/minimum_l2_target_distance       | Scalar       |               | float32 |
steps/observation/minimum_visible_target_alignment | Scalar       |               | float32 |
steps/observation/nav_object_bbox                  | Tensor       | (10,)         | float32 |
steps/observation/relative_arm_location_metadata   | Tensor       | (4,)          | float32 |
steps/observation/room_current_seen                | Scalar       |               | bool    |
steps/observation/rooms_seen                       | Scalar       |               | int64   |
steps/observation/visible_target_4m_count          | Scalar       |               | int64   |
steps/reward                                       | Scalar       |               | float32 |

*   **Supervised keys** (See
    [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
    `None`

*   **Figure**
    ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
    Not supported.

*   **Examples**
    ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)):
    Missing.

*   **Citation**:

```
@article{spoc2023,
  author  = {Kiana Ehsani, Tanmay Gupta, Rose Hendrix, Jordi Salvador, Luca Weihs, Kuo-Hao Zeng, Kunal Pratap Singh, Yejin Kim, Winson Han, Alvaro Herrasti, Ranjay Krishna, Dustin Schwenk, Eli VanderBilt, Aniruddha Kembhavi},
  title   = {Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World},
  journal = {arXiv},
  year    = {2023},
  eprint  = {2312.02976},
}
```
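The feature-documentation tables name each leaf by its slash-joined path through the nested structure ("steps/observation/image" and so on). A small sketch of that mapping, using an abbreviated mock of the `spoc` feature tree (the string values stand in for the real feature specs):

```python
def flatten_paths(features, prefix=""):
    """Yield 'a/b/c'-style paths for every leaf of a nested dict."""
    for key, value in features.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten_paths(value, path)  # recurse into sub-dicts
        else:
            yield path

# Abbreviated mock of the spoc FeaturesDict above (leaf values are labels only).
spoc_features = {
    "episode_metadata": {"file_path": "string", "task_type": "string"},
    "steps": {
        "action": "Tensor(shape=(9,), float32)",
        "observation": {
            "image": "Image(shape=(224, 384, 3), uint8)",
            "rooms_seen": "Scalar(int64)",
        },
        "reward": "Scalar(float32)",
    },
}

paths = list(flatten_paths(spoc_features))
print(paths)  # includes e.g. 'steps/observation/image' and 'steps/reward'
```

This is the same convention TFDS uses when it renders the tables above, which is why sub-features of `observation` all appear under the `steps/observation/` prefix.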

0 commit comments