<div itemscope itemtype="http://schema.org/Dataset">
  <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
    <meta itemprop="name" content="TensorFlow Datasets" />
  </div>
  <meta itemprop="name" content="spoc" />
  <meta itemprop="description" content=" To use this dataset: ```python import tensorflow_datasets as tfds ds = tfds.load('spoc', split='train') for ex in ds.take(4): print(ex) ``` See [the guide](https://www.tensorflow.org/datasets/overview) for more information on [tensorflow_datasets](https://www.tensorflow.org/datasets). " />
  <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/spoc" />
  <meta itemprop="sameAs" content="https://spoc-robot.github.io/" />
  <meta itemprop="citation" content="@article{spoc2023, author = {Kiana Ehsani and Tanmay Gupta and Rose Hendrix and Jordi Salvador and Luca Weihs and Kuo-Hao Zeng and Kunal Pratap Singh and Yejin Kim and Winson Han and Alvaro Herrasti and Ranjay Krishna and Dustin Schwenk and Eli VanderBilt and Aniruddha Kembhavi}, title = {Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World}, journal = {arXiv}, year = {2023}, eprint = {2312.02976}, }" />
</div>

# `spoc`

Note: This dataset was added recently and is only available in our
`tfds-nightly` package
<span class="material-icons" title="Available only in the tfds-nightly package">nights_stay</span>.

*   **Description**:

*   **Homepage**: [https://spoc-robot.github.io/](https://spoc-robot.github.io/)

*   **Source code**:
    [`tfds.robotics.rtx.Spoc`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

*   **Versions**:

    *   **`0.1.0`** (default): Initial release.

*   **Download size**: `Unknown size`

*   **Dataset size**: `Unknown size`

*   **Auto-cached**
    ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
    Unknown

*   **Splits**:

Split | Examples
:---- | -------:

*   **Feature structure**:

```python
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': string,
        'task_target_split': string,
        'task_type': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(9,), dtype=float32),
        'discount': Scalar(shape=(), dtype=float32),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_instruction': string,
        'observation': FeaturesDict({
            'an_object_is_in_hand': Scalar(shape=(), dtype=bool),
            'house_index': Scalar(shape=(), dtype=int64),
            'hypothetical_task_success': Scalar(shape=(), dtype=bool),
            'image': Image(shape=(224, 384, 3), dtype=uint8),
            'image_manipulation': Image(shape=(224, 384, 3), dtype=uint8),
            'last_action_is_random': Scalar(shape=(), dtype=bool),
            'last_action_str': string,
            'last_action_success': Scalar(shape=(), dtype=bool),
            'last_agent_location': Tensor(shape=(6,), dtype=float32),
            'manip_object_bbox': Tensor(shape=(10,), dtype=float32),
            'minimum_l2_target_distance': Scalar(shape=(), dtype=float32),
            'minimum_visible_target_alignment': Scalar(shape=(), dtype=float32),
            'nav_object_bbox': Tensor(shape=(10,), dtype=float32),
            'relative_arm_location_metadata': Tensor(shape=(4,), dtype=float32),
            'room_current_seen': Scalar(shape=(), dtype=bool),
            'rooms_seen': Scalar(shape=(), dtype=int64),
            'visible_target_4m_count': Scalar(shape=(), dtype=int64),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```
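The nested structure above follows the RLDS episode layout: each example is one episode, and its `steps` field is itself a nested `tf.data.Dataset` of per-timestep dicts. A minimal sketch of the access pattern, using a NumPy placeholder episode so it runs without the download (with the real dataset you would instead iterate `tfds.load('spoc', split='train')`, which requires `tfds-nightly`; the placeholder values below are assumptions, only the shapes and dtypes come from the feature structure):

```python
import numpy as np

# Placeholder for one decoded episode; shapes and dtypes match the
# feature structure documented above. With TFDS, `steps` is a nested
# tf.data.Dataset rather than a list, but it is iterated the same way.
episode = {
    'episode_metadata': {
        'file_path': b'...',             # populated by the real dataset
        'task_type': b'...',
    },
    'steps': [
        {
            'action': np.zeros((9,), np.float32),
            'language_instruction': b'...',  # placeholder
            'observation': {
                'image': np.zeros((224, 384, 3), np.uint8),
                'image_manipulation': np.zeros((224, 384, 3), np.uint8),
                'relative_arm_location_metadata': np.zeros((4,), np.float32),
            },
            'reward': np.float32(0.0),
            'is_last': True,
        },
    ],
}

# Iterate timesteps within the episode, then index into the observation dict.
for step in episode['steps']:
    print(step['observation']['image'].shape)  # (224, 384, 3)
    print(step['action'].dtype)                # float32
```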

*   **Feature documentation**:

Feature                                            | Class        | Shape         | Dtype   | Description
:------------------------------------------------- | :----------- | :------------ | :------ | :----------
                                                   | FeaturesDict |               |         |
episode_metadata                                   | FeaturesDict |               |         |
episode_metadata/file_path                         | Tensor       |               | string  |
episode_metadata/task_target_split                 | Tensor       |               | string  |
episode_metadata/task_type                         | Tensor       |               | string  |
steps                                              | Dataset      |               |         |
steps/action                                       | Tensor       | (9,)          | float32 |
steps/discount                                     | Scalar       |               | float32 |
steps/is_first                                     | Tensor       |               | bool    |
steps/is_last                                      | Tensor       |               | bool    |
steps/is_terminal                                  | Tensor       |               | bool    |
steps/language_instruction                         | Tensor       |               | string  |
steps/observation                                  | FeaturesDict |               |         |
steps/observation/an_object_is_in_hand             | Scalar       |               | bool    |
steps/observation/house_index                      | Scalar       |               | int64   |
steps/observation/hypothetical_task_success        | Scalar       |               | bool    |
steps/observation/image                            | Image        | (224, 384, 3) | uint8   |
steps/observation/image_manipulation               | Image        | (224, 384, 3) | uint8   |
steps/observation/last_action_is_random            | Scalar       |               | bool    |
steps/observation/last_action_str                  | Tensor       |               | string  |
steps/observation/last_action_success              | Scalar       |               | bool    |
steps/observation/last_agent_location              | Tensor       | (6,)          | float32 |
steps/observation/manip_object_bbox                | Tensor       | (10,)         | float32 |
steps/observation/minimum_l2_target_distance       | Scalar       |               | float32 |
steps/observation/minimum_visible_target_alignment | Scalar       |               | float32 |
steps/observation/nav_object_bbox                  | Tensor       | (10,)         | float32 |
steps/observation/relative_arm_location_metadata   | Tensor       | (4,)          | float32 |
steps/observation/room_current_seen                | Scalar       |               | bool    |
steps/observation/rooms_seen                       | Scalar       |               | int64   |
steps/observation/visible_target_4m_count          | Scalar       |               | int64   |
steps/reward                                       | Scalar       |               | float32 |

*   **Supervised keys** (See
    [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
    `None`

*   **Figure**
    ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
    Not supported.

*   **Examples**
    ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)):
    Missing.

*   **Citation**:

```
@article{spoc2023,
  author  = {Kiana Ehsani and Tanmay Gupta and Rose Hendrix and Jordi Salvador and Luca Weihs and Kuo-Hao Zeng and Kunal Pratap Singh and Yejin Kim and Winson Han and Alvaro Herrasti and Ranjay Krishna and Dustin Schwenk and Eli VanderBilt and Aniruddha Kembhavi},
  title   = {Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World},
  journal = {arXiv},
  year    = {2023},
  eprint  = {2312.02976},
}
```