The modality configuration defines how your robot's data should be loaded, processed, and interpreted by the model. This configuration bridges your dataset's physical structure (defined in meta/modality.json) and the model's data processing pipeline.
Each embodiment requires a Python configuration file that specifies:
- Which observations to use (video cameras, proprioceptive states)
- How to sample data temporally (current frame, historical frames, future action horizons)
- How actions should be interpreted and transformed
- Which language annotations to use
A modality configuration is a Python dictionary containing four top-level keys: "video", "state", "action", and "language". Each key maps to a ModalityConfig object.
Here's the SO-100 example:
```python
from gr00t.configs.data.embodiment_configs import register_modality_config
from gr00t.data.types import ModalityConfig, ActionConfig, ActionRepresentation, ActionType, ActionFormat

so100_config = {
    "video": ModalityConfig(...),
    "state": ModalityConfig(...),
    "action": ModalityConfig(...),
    "language": ModalityConfig(...),
}

register_modality_config(so100_config)
```
Each ModalityConfig specifies two required fields and several optional ones:
1. delta_indices (list[int])
Defines which temporal offsets to sample relative to the current timestep. This enables:
- Historical context: use negative indices (e.g., `[-2, -1, 0]`) to include past observations
- Current observation: use `[0]` for the current timestep
- Future actions: use positive indices (e.g., `list(range(0, 16))`) for action prediction horizons
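To make the offset semantics concrete, here is a rough sketch of how a sampler might turn `delta_indices` into absolute frame indices. This is a hypothetical helper that simply clamps at the episode bounds; gr00t's actual loader may pad or clamp differently.

```python
def sample_indices(t: int, delta_indices: list[int], episode_len: int) -> list[int]:
    """Map temporal offsets to absolute frame indices, clamped to the episode.

    Illustrative sketch only -- not gr00t's actual sampling code.
    """
    return [min(max(t + d, 0), episode_len - 1) for d in delta_indices]

# Historical context at t=1: offsets before the episode start clamp to frame 0
print(sample_indices(1, [-2, -1, 0], episode_len=100))  # [0, 0, 1]
# 16-step action horizon at t=90: offsets past the end clamp to the last frame
print(sample_indices(90, list(range(0, 16)), episode_len=100))
```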
Examples:
```python
# Single current frame for video
delta_indices=[0]

# Last 3 frames for video (temporal stacking)
delta_indices=[-2, -1, 0]

# 16-step action prediction horizon
delta_indices=list(range(0, 16))
```
Note: If you modify `delta_indices` for the action modality (e.g., changing the action horizon from 16 to 8), you must regenerate the dataset statistics by re-running `python gr00t/data/stats.py <dataset_path> <embodiment_tag>`. The normalization statistics (especially `meta/relative_stats.json`) are computed from the original `delta_indices` length, and a mismatch will cause errors during training.
Example: What happens if you change delta_indices without regenerating stats?
Suppose your action config originally uses a 16-step horizon:
```python
"action": ModalityConfig(
    delta_indices=list(range(0, 16)),  # 16 steps
    ...
)
```
Running `python gr00t/data/stats.py` generates `meta/relative_stats.json` with per-step statistics of shape (16, D), where D is the action dimension.
If you later change the horizon to 8 steps:
```python
"action": ModalityConfig(
    delta_indices=list(range(0, 8)),  # 8 steps
    ...
)
```
The training data will now have shape (8, D), but the normalization parameters from `relative_stats.json` still have shape (16, D). This dimension mismatch causes an IndexError during normalization:

```
IndexError: boolean index did not match indexed array along dimension 0;
dimension is 8 but corresponding boolean dimension is 16
```
Fix: Re-run `python gr00t/data/stats.py <dataset_path> <embodiment_tag>` after changing `delta_indices` to regenerate matching statistics.
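The error above is a plain NumPy boolean-indexing shape mismatch. A minimal standalone repro with toy arrays (not the actual gr00t normalization code):

```python
import numpy as np

actions = np.zeros((8, 6))            # 8-step horizon, 6-D actions
stats_mask = np.ones(16, dtype=bool)  # mask derived from 16-step statistics

try:
    actions[stats_mask]  # boolean index length must match dimension 0 (8 != 16)
except IndexError as e:
    print(e)  # "...dimension is 8 but corresponding boolean dimension is 16"
```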
2. modality_keys (list[str])
Specifies which keys to load from your dataset. These keys must match the keys defined in your meta/modality.json file.
For the SO-100 example:
- Video keys: must match keys in `meta/modality.json` under `"video"` (e.g., `"front"`, `"wrist"`)
- State keys: must match keys in `meta/modality.json` under `"state"` (e.g., `"single_arm"`, `"gripper"`)
- Action keys: must match keys in `meta/modality.json` under `"action"` (e.g., `"single_arm"`, `"gripper"`)
- Language keys: must match keys in `meta/modality.json` under `"annotation"` (e.g., `"annotation.human.action.task_description"`)
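To catch key mismatches before training, you can sanity-check your configuration against the dataset with a few lines of Python. This is a hypothetical helper, not part of gr00t:

```python
import json

def check_modality_keys(modality_json_path: str, modality: str, keys: list[str]) -> list[str]:
    """Return the keys that are missing from meta/modality.json for a given modality."""
    with open(modality_json_path) as f:
        meta = json.load(f)
    available = set(meta.get(modality, {}))
    return [k for k in keys if k not in available]

# e.g. check_modality_keys("meta/modality.json", "state", ["single_arm", "gripper"])
# returns [] when every key is present
```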
3. sin_cos_embedding_keys (list[str] | None)
Specifies which state keys should be encoded with sine/cosine features. This is best suited to dimensions measured in radians (e.g., joint angles). If not specified, min-max normalization is used. Note that sin/cos encoding doubles the number of dimensions for those keys and is only recommended for proprioceptive states.
```python
"state": ModalityConfig(
    delta_indices=[0],
    modality_keys=["single_arm", "gripper"],
    sin_cos_embedding_keys=["single_arm"],  # Apply sin/cos to joint angles
)
```
4. mean_std_embedding_keys (list[str] | None)
Specifies which keys should use mean/standard deviation normalization instead of min-max normalization.
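To make the three options concrete, here is an illustrative sketch of what each transform does to a state vector. The actual implementations live in gr00t's processing pipeline and may differ in detail:

```python
import numpy as np

def min_max_normalize(x, lo, hi):
    # Default behavior: scale each dimension to [-1, 1] using dataset min/max
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def mean_std_normalize(x, mean, std):
    # mean_std_embedding_keys: standardize each dimension instead
    return (x - mean) / std

def sin_cos_encode(x):
    # sin_cos_embedding_keys: radians -> (sin, cos) pairs, doubling dimensionality
    return np.concatenate([np.sin(x), np.cos(x)])

angles = np.array([0.0, np.pi / 2])
print(sin_cos_encode(angles).shape)  # (4,) -- twice the input dimension
```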
5. action_configs (list[ActionConfig] | None)
Required for the "action" modality. Defines how each action modality should be interpreted and transformed. The list must have the same length as modality_keys, and each element corresponds to the action modality for the corresponding modality_key. See more details in the Action Modality section.
The `"video"` modality defines which camera views to use:
```python
"video": ModalityConfig(
    delta_indices=[0],  # Current frame only
    modality_keys=[
        "front",  # Must match a key in meta/modality.json under "video"
    ],
)
```
For multiple cameras:
```python
"video": ModalityConfig(
    delta_indices=[0],
    modality_keys=["front", "wrist"],
)
```
The `"state"` modality defines proprioceptive observations (joint positions, gripper states, etc.):
```python
"state": ModalityConfig(
    delta_indices=[0],  # Current state
    modality_keys=[
        "single_arm",  # Must match keys in meta/modality.json under "state"
        "gripper",
    ],
)
```
The `"action"` modality defines the action space and prediction horizon:
```python
"action": ModalityConfig(
    delta_indices=list(range(0, 16)),  # Predict 16 steps into the future
    modality_keys=[
        "single_arm",  # Must match keys in meta/modality.json under "action"
        "gripper",
    ],
    action_configs=[
        # One ActionConfig per modality_key
        # single_arm
        ActionConfig(
            rep=ActionRepresentation.RELATIVE,  # relative control of the single arm
            type=ActionType.NON_EEF,
            format=ActionFormat.DEFAULT,
        ),
        # gripper
        ActionConfig(
            rep=ActionRepresentation.ABSOLUTE,  # absolute control of the gripper
            type=ActionType.NON_EEF,
            format=ActionFormat.DEFAULT,
        ),
    ],
)
```
Each ActionConfig has three required fields and one optional field:
1. rep (ActionRepresentation)
Defines how actions should be interpreted:
- `RELATIVE`: actions are deltas from the current state (introduced in the UMI paper)
- `ABSOLUTE`: actions are target positions
Relative actions generally produce smoother trajectories, but can drift over time. If you use relative actions, make sure the states and actions stored in the dataset are absolute; the absolute-to-relative conversion is handled by the processor.
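Conceptually, the conversion subtracts the reference state at the current timestep from each absolute action in the horizon. A simplified sketch (the real processor also handles rotations, normalization, and the `state_key` lookup):

```python
import numpy as np

def to_relative(abs_actions: np.ndarray, current_state: np.ndarray) -> np.ndarray:
    """abs_actions: (horizon, D) absolute targets; current_state: (D,) reference."""
    return abs_actions - current_state  # each step becomes a delta from the current state

abs_actions = np.array([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]])  # 3-step horizon
current = np.array([1.0, 2.0])
print(to_relative(abs_actions, current))
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]
```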
2. type (ActionType)
Specifies the control space:
- `EEF`: end-effector/Cartesian-space control (expects a 9-dimensional vector: x, y, z position + 6D rotation)
- `NON_EEF`: joint-space control and other non-EEF control spaces (joint angles, positions, gripper positions, etc.)
3. format (ActionFormat)
Defines the action representation format:
- `DEFAULT`: standard format (e.g., joint angles, gripper positions)
- `XYZ_ROT6D`: 3D position + 6D rotation representation for end-effector control
- `XYZ_ROTVEC`: 3D position + rotation vector for end-effector control
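For background, the 6D rotation representation keeps two columns of a rotation matrix and recovers the third via Gram-Schmidt, which avoids the discontinuities of Euler angles and quaternions. A sketch of the idea (the column convention here is an assumption; check gr00t's transforms for the exact layout):

```python
import numpy as np

def mat_to_rot6d(R: np.ndarray) -> np.ndarray:
    """Drop the last column of a 3x3 rotation matrix -> 6 numbers."""
    return R[:, :2].reshape(-1, order="F")  # column-major: column 0 then column 1

def rot6d_to_mat(d6: np.ndarray) -> np.ndarray:
    """Gram-Schmidt the two stored columns back into an orthonormal frame."""
    a, b = d6[:3], d6[3:]
    c0 = a / np.linalg.norm(a)
    b = b - np.dot(c0, b) * c0       # remove the component along c0
    c1 = b / np.linalg.norm(b)
    c2 = np.cross(c0, c1)            # third column from the cross product
    return np.stack([c0, c1, c2], axis=1)

print(mat_to_rot6d(np.eye(3)))  # [1. 0. 0. 0. 1. 0.]
```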
4. state_key (str | None)
Optional. Specifies the corresponding reference state key for computing relative actions when rep=RELATIVE. If not provided, the system will use the action key as the reference state key.
Example with state_key:
```python
"joint_pos_action_left": ActionConfig(
    rep=ActionRepresentation.RELATIVE,
    type=ActionType.NON_EEF,
    format=ActionFormat.DEFAULT,
    state_key="joint_pos_obs_left",  # Use this state to compute the relative action
)
```
The `"language"` modality defines which language annotations to use:
```python
"language": ModalityConfig(
    delta_indices=[0],
    modality_keys=["annotation.human.action.task_description"],  # Must match annotation keys in meta/modality.json
)
```
Here's the complete SO-100 configuration with explanations:
```python
so100_config = {
    "video": ModalityConfig(
        delta_indices=[0],
        modality_keys=["front", "wrist"],
    ),
    "state": ModalityConfig(
        delta_indices=[0],
        modality_keys=[
            "single_arm",
            "gripper",
        ],
    ),
    "action": ModalityConfig(
        delta_indices=list(range(0, 16)),
        modality_keys=[
            "single_arm",
            "gripper",
        ],
        action_configs=[
            ActionConfig(
                rep=ActionRepresentation.RELATIVE,
                type=ActionType.NON_EEF,
                format=ActionFormat.DEFAULT,
            ),
            ActionConfig(
                rep=ActionRepresentation.ABSOLUTE,
                type=ActionType.NON_EEF,
                format=ActionFormat.DEFAULT,
            ),
        ],
    ),
    "language": ModalityConfig(
        delta_indices=[0],
        modality_keys=["annotation.human.task_description"],
    ),
}
```
The modality configuration's `modality_keys` must reference keys that exist in your dataset's `meta/modality.json`:
Example meta/modality.json:
```json
{
    "state": {
        "single_arm": {"start": 0, "end": 5},
        "gripper": {"start": 5, "end": 6}
    },
    "action": {
        "single_arm": {"start": 0, "end": 5},
        "gripper": {"start": 5, "end": 6}
    },
    "video": {
        "front": {"original_key": "observation.images.front"},
        "wrist": {"original_key": "observation.images.wrist"}
    },
    "annotation": {
        "human.task_description": {
            "original_key": "task_index"
        }
    }
}
```
The system will:
- Use `modality_keys` to look up the corresponding entries in `meta/modality.json`
- Extract the correct slices from the concatenated state/action arrays
- Apply the specified transformations (normalization, action representation conversion)
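As an illustration, here is how the start/end entries above slice a concatenated 6-D state vector (a simplified sketch of the lookup the pipeline performs):

```python
import numpy as np

modality_meta = {  # mirrors the "state" section of meta/modality.json above
    "single_arm": {"start": 0, "end": 5},
    "gripper": {"start": 5, "end": 6},
}

state = np.arange(6.0)  # concatenated 6-D state vector
slices = {k: state[v["start"]:v["end"]] for k, v in modality_meta.items()}
print(slices["single_arm"])  # [0. 1. 2. 3. 4.]
print(slices["gripper"])     # [5.]
```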
After defining your configuration, register it so it's available to the training and inference pipelines:
```python
from gr00t.configs.data.embodiment_configs import register_modality_config

your_modality_config = {
    ...
}

register_modality_config(your_modality_config)
```
Save your configuration to a Python file and pass the path to the `modality_config_path` argument when running the finetuning script.