
[Question]: Isaacsim simulation error [10:05:34.533117] s2 infer error: 'InternVLAN1Model' object has no attribute 'embed_tokens' #232

@ssssuxin

Description


Question

1. First I hit `cannot import name 'Qwen2_5_VLConfig' from 'transformers'`, so I updated transformers.

2. Then I got:

   ImportError: cannot import name 'apply_chunking_to_forward' from 'transformers.modeling_utils' (/workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/transformers/modeling_utils.py)

3. GPT then suggested I monkey-patch the missing function with the following:
```python
import torch

def apply_chunking_to_forward(
    forward_fn, chunk_size: int, chunk_dim: int, *input_tensors
):
    assert len(input_tensors) > 0, f"{input_tensors} must contain at least one tensor"

    # chunk_size == 0 means no chunking: call the forward function directly
    if chunk_size == 0:
        return forward_fn(*input_tensors)

    tensor_shape = input_tensors[0].shape[chunk_dim]
    output_chunks = []
    for start_idx in range(0, tensor_shape, chunk_size):
        end_idx = min(start_idx + chunk_size, tensor_shape)

        # Slice each input tensor along chunk_dim
        input_tensors_chunk = []
        for input_tensor in input_tensors:
            if input_tensor is not None:
                slices = [slice(None)] * len(input_tensor.shape)
                slices[chunk_dim] = slice(start_idx, end_idx)
                input_tensors_chunk.append(input_tensor[tuple(slices)])
            else:
                input_tensors_chunk.append(None)

        # Call the forward function on this chunk and keep its output
        output_chunks.append(forward_fn(*input_tensors_chunk))

    # Stitch the per-chunk outputs back together along the chunk dimension
    return torch.cat(output_chunks, dim=chunk_dim)
```
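
To convince myself the slice-and-concatenate logic is right independently of transformers, I checked it with a small numpy stand-in (numpy arrays in place of torch tensors; `chunked_forward` is just an illustrative name, not the library function):

```python
import numpy as np

def chunked_forward(forward_fn, chunk_size, chunk_dim, *inputs):
    """Apply forward_fn to slices of size chunk_size along chunk_dim,
    then concatenate the per-chunk outputs back together."""
    length = inputs[0].shape[chunk_dim]
    outputs = []
    for start in range(0, length, chunk_size):
        end = min(start + chunk_size, length)
        sl = [slice(None)] * inputs[0].ndim
        sl[chunk_dim] = slice(start, end)
        outputs.append(forward_fn(*(t[tuple(sl)] for t in inputs)))
    return np.concatenate(outputs, axis=chunk_dim)

x = np.arange(10.0).reshape(5, 2)
full = x * 2                                      # un-chunked reference
chunked = chunked_forward(lambda t: t * 2, 2, 0, x)  # rows processed 2 at a time
assert np.array_equal(chunked, full)
```

The chunked result matches the un-chunked one, so at least the slicing bookkeeping in the patch behaves as expected for an elementwise forward function.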
    
4. Following https://internrobotics.github.io/user_guide/internnav/quick_start/evaluation.html, I ran `python scripts/eval/eval.py --config scripts/eval/configs/h1_internvla_n1_async_cfg.py` and got the output below.

Here is my config:

```python
# from scripts.eval.configs.agent import *
from internnav.configs.agent import AgentCfg
from internnav.configs.evaluator import (
    EnvCfg,
    EvalCfg,
    EvalDatasetCfg,
    SceneCfg,
    TaskCfg,
)

eval_cfg = EvalCfg(
    agent=AgentCfg(
        server_port=8023,
        model_name='internvla_n1',
        ckpt_path='',
        model_settings={
            'env_num': 1,
            'sim_num': 1,
            # 'model_path': "checkpoints/InternVLA-N1",
            'model_path': "/workspace/Model/InternRobotics/InternVLA-N1-DualVLN",
            'camera_intrinsic': [[585.0, 0.0, 320.0], [0.0, 585.0, 240.0], [0.0, 0.0, 1.0]],
            'width': 640,
            'height': 480,
            'hfov': 79,
            'resize_w': 384,
            'resize_h': 384,
            'max_new_tokens': 1024,
            'num_frames': 32,
            'num_history': 8,
            'num_future_steps': 4,
            'device': 'cuda:0',
            'predict_step_nums': 32,
            'continuous_traj': True,
            'infer_mode': 'partial_async',  # Either "sync" or "partial_async"; "partial_async" is better for this model.
            # debug
            'vis_debug': True,  # If vis_debug=True, visualization results are saved.
            'vis_debug_path': './logs/test/vis_debug',
        },
    ),
    env=EnvCfg(
        env_type='internutopia',
        env_settings={
            'use_fabric': False,  # Set use_fabric=False due to the render delay.
            'headless': True,
        },
    ),
    task=TaskCfg(
        task_name='test',
        task_settings={
            'env_num': 1,
            'use_distributed': False,  # If the other task_settings are used, set use_distributed=False.
            'proc_num': 1,
            'max_step': 1000,  # Flash mode: default 1000; discrete mode: set 50000.
        },
        scene=SceneCfg(
            scene_type='mp3d',
            scene_data_dir='data/scene_data/mp3d_pe',
        ),
        robot_name='h1',
        robot_flash=True,  # If robot_flash is True, flash mode is used (world_pose set directly); otherwise physical mode.
        robot_usd_path='data/Embodiments/vln-pe/h1/h1_internvla.usd',
        camera_resolution=[640, 480],  # (W, H)
        camera_prim_path='torso_link/h1_1_25_down_30',
        one_step_stand_still=True,  # For dual-system, keep this param True.
    ),
    dataset=EvalDatasetCfg(
        dataset_type="mp3d",
        dataset_settings={
            'base_data_dir': 'data/vln_pe/raw_data/r2r',
            'split_data_types': ['val_unseen'],  # 'val_seen'
            'filter_stairs': True,  # False for the IROS challenge; True for the results in the paper.
            # 'selected_scans': ['zsNo4HB9uLZ'],
            # 'selected_scans': ['8194nk5LbLH', 'pLe4wQe7qrG'],
        },
    ),
    eval_type='vln_distributed',
    eval_settings={
        'save_to_json': True,
        'vis_output': True,
        'use_agent_server': False,  # If use_agent_server=True, start the agent server first.
    },
)
```
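
As a side note while reading the config: `camera_intrinsic` and `hfov` can be cross-checked under a simple pinhole-camera model (my assumption; I don't know how the simulator actually derives intrinsics, or which of the two settings the pipeline uses):

```python
import math

# Values taken from the config above
width = 640
hfov_deg = 79
fx_config = 585.0  # camera_intrinsic[0][0]

# Pinhole model (an assumption): half the image width subtends half the HFOV,
# so fx = (width / 2) / tan(hfov / 2)
fx_from_hfov = (width / 2) / math.tan(math.radians(hfov_deg / 2))
print(f"fx implied by hfov={hfov_deg}: {fx_from_hfov:.1f}; fx in camera_intrinsic: {fx_config}")
```

On these numbers the hfov-implied focal length comes out around 388 versus the 585.0 in the intrinsic matrix; whether that mismatch matters, or one of the two settings is simply ignored, is speculation on my part.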

```
2026-01-05 10:05:12 [9,359ms] [Error] [carb.graphics-vulkan.plugin] Could not get NGX parameters block because NGX isn't enabled.
2026-01-05 10:05:12 [9,359ms] [Error] [carb.graphics-vulkan.plugin] Failed to create NGX context.
2026-01-05 10:05:12 [9,359ms] [Warning] [carb.scenerenderer-rtx.plugin] Failed to create NGX context.
2026-01-05 10:05:12 [9,676ms] [Warning] [omni.usd-abi.plugin] No setting was found for '/rtx-defaults-transient/meshlights/forceDisable'
2026-01-05 10:05:12 [9,824ms] [Warning] [omni.usd-abi.plugin] No setting was found for '/rtx-defaults/post/dlss/execMode'
[16.691s] Simulation App Startup Complete
[2026-01-05 10:05:19,379][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:488] -: SimulationApp init done
[2026-01-05 10:05:19,379][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:467] -: simulator params: physics dt=0.005, rendering dt=0.005, use_fabric=False
2026-01-05 10:05:19 [16,378ms] [Warning] [omni.isaac.core.scenes.scene] omni.isaac.core.scenes.scene has been deprecated in favor of isaacsim.core.api.scenes.scene. Please update your code accordingly.
2026-01-05 10:05:19 [16,378ms] [Warning] [omni.isaac.core.scenes.scene_registry] omni.isaac.core.scenes.scene_registry has been deprecated in favor of isaacsim.core.api.scenes.scene_registry. Please update your code accordingly.
[2026-01-05 10:05:19,406][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:47] -: rendering interval: 5
2026-01-05 10:05:22 [19,138ms] [Error] [omni.kit.app._impl] [py stderr]: /workspace/CODE_11_20/InternNav/InternNav/internnav/model/basemodel/LongCLIP/model/longclip.py:6: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. from pkg_resources import packaging
[2026-01-05 10:05:22,349][WARNING] /workspace/CODE_11_20/InternNav/InternNav/internnav/model/encoder/depth_anything/depth_anything_v2/dinov2_layers/attention.py[line:25] -: xFormers not available
2026-01-05 10:05:22 [19,321ms] [Warning] [dinov2] xFormers not available
[2026-01-05 10:05:22,349][WARNING] /workspace/CODE_11_20/InternNav/InternNav/internnav/model/encoder/depth_anything/depth_anything_v2/dinov2_layers/block.py[line:32] -: xFormers not available
2026-01-05 10:05:22 [19,321ms] [Warning] [dinov2] xFormers not available
2026-01-05 10:05:22 [19,325ms] [Error] [omni.kit.app._impl] [py stderr]: `torch_dtype` is deprecated! Use `dtype` instead!
Loading checkpoint shards: 100%|████████████████████████████████████| 4/4 [00:03<00:00, 1.02it/s]
```
2026-01-05 10:05:26 [23,544ms] [Error] [omni.kit.app._impl] [py stderr]: Some weights of the model checkpoint at /workspace/Model/InternRobotics/InternVLA-N1-DualVLN were not used when initializing InternVLAN1ForCausalLM: ['model.language_model.action_decoder.bias', 'model.language_model.action_decoder.weight', 'model.language_model.action_encoder.bias', 'model.language_model.action_encoder.weight', 'model.language_model.cond_projector.0.bias', 'model.language_model.cond_projector.0.weight', 'model.language_model.cond_projector.2.bias', 'model.language_model.cond_projector.2.weight', 'model.language_model.latent_queries', 'model.language_model.memory_encoder.encoder.layers.0.linear1.bias', 'model.language_model.memory_encoder.encoder.layers.0.linear1.weight', 'model.language_model.memory_encoder.encoder.layers.0.linear2.bias', 'model.language_model.memory_encoder.encoder.layers.0.linear2.weight', 'model.language_model.memory_encoder.encoder.layers.0.norm1.bias', 'model.language_model.memory_encoder.encoder.layers.0.norm1.weight', 'model.language_model.memory_encoder.encoder.layers.0.norm2.bias', 'model.language_model.memory_encoder.encoder.layers.0.norm2.weight', 'model.language_model.memory_encoder.encoder.layers.0.self_attn.in_proj_bias', 'model.language_model.memory_encoder.encoder.layers.0.self_attn.in_proj_weight', 'model.language_model.memory_encoder.encoder.layers.0.self_attn.out_proj.bias', 'model.language_model.memory_encoder.encoder.layers.0.self_attn.out_proj.weight', 'model.language_model.memory_encoder.encoder.layers.1.linear1.bias', 'model.language_model.memory_encoder.encoder.layers.1.linear1.weight', 'model.language_model.memory_encoder.encoder.layers.1.linear2.bias', 'model.language_model.memory_encoder.encoder.layers.1.linear2.weight', 'model.language_model.memory_encoder.encoder.layers.1.norm1.bias', 'model.language_model.memory_encoder.encoder.layers.1.norm1.weight', 'model.language_model.memory_encoder.encoder.layers.1.norm2.bias', 
'model.language_model.memory_encoder.encoder.layers.1.norm2.weight', 'model.language_model.memory_encoder.encoder.layers.1.self_attn.in_proj_bias', 'model.language_model.memory_encoder.encoder.layers.1.self_attn.in_proj_weight', 'model.language_model.memory_encoder.encoder.layers.1.self_attn.out_proj.bias', 'model.language_model.memory_encoder.encoder.layers.1.self_attn.out_proj.weight', 'model.language_model.memory_encoder.encoder.layers.2.linear1.bias', 'model.language_model.memory_encoder.encoder.layers.2.linear1.weight', 'model.language_model.memory_encoder.encoder.layers.2.linear2.bias', 'model.language_model.memory_encoder.encoder.layers.2.linear2.weight', 'model.language_model.memory_encoder.encoder.layers.2.norm1.bias', 'model.language_model.memory_encoder.encoder.layers.2.norm1.weight', 'model.language_model.memory_encoder.encoder.layers.2.norm2.bias', 'model.language_model.memory_encoder.encoder.layers.2.norm2.weight', 'model.language_model.memory_encoder.encoder.layers.2.self_attn.in_proj_bias', 'model.language_model.memory_encoder.encoder.layers.2.self_attn.in_proj_weight', 'model.language_model.memory_encoder.encoder.layers.2.self_attn.out_proj.bias', 'model.language_model.memory_encoder.encoder.layers.2.self_attn.out_proj.weight', 'model.language_model.memory_encoder.memory_pos', 'model.language_model.rgb_model.blocks.0.attn.proj.bias', 'model.language_model.rgb_model.blocks.0.attn.proj.weight', 'model.language_model.rgb_model.blocks.0.attn.qkv.bias', 'model.language_model.rgb_model.blocks.0.attn.qkv.weight', 'model.language_model.rgb_model.blocks.0.ls1.gamma', 'model.language_model.rgb_model.blocks.0.ls2.gamma', 'model.language_model.rgb_model.blocks.0.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.0.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.0.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.0.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.0.norm1.bias', 'model.language_model.rgb_model.blocks.0.norm1.weight', 
'model.language_model.rgb_model.blocks.0.norm2.bias', 'model.language_model.rgb_model.blocks.0.norm2.weight', 'model.language_model.rgb_model.blocks.1.attn.proj.bias', 'model.language_model.rgb_model.blocks.1.attn.proj.weight', 'model.language_model.rgb_model.blocks.1.attn.qkv.bias', 'model.language_model.rgb_model.blocks.1.attn.qkv.weight', 'model.language_model.rgb_model.blocks.1.ls1.gamma', 'model.language_model.rgb_model.blocks.1.ls2.gamma', 'model.language_model.rgb_model.blocks.1.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.1.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.1.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.1.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.1.norm1.bias', 'model.language_model.rgb_model.blocks.1.norm1.weight', 'model.language_model.rgb_model.blocks.1.norm2.bias', 'model.language_model.rgb_model.blocks.1.norm2.weight', 'model.language_model.rgb_model.blocks.10.attn.proj.bias', 'model.language_model.rgb_model.blocks.10.attn.proj.weight', 'model.language_model.rgb_model.blocks.10.attn.qkv.bias', 'model.language_model.rgb_model.blocks.10.attn.qkv.weight', 'model.language_model.rgb_model.blocks.10.ls1.gamma', 'model.language_model.rgb_model.blocks.10.ls2.gamma', 'model.language_model.rgb_model.blocks.10.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.10.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.10.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.10.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.10.norm1.bias', 'model.language_model.rgb_model.blocks.10.norm1.weight', 'model.language_model.rgb_model.blocks.10.norm2.bias', 'model.language_model.rgb_model.blocks.10.norm2.weight', 'model.language_model.rgb_model.blocks.11.attn.proj.bias', 'model.language_model.rgb_model.blocks.11.attn.proj.weight', 'model.language_model.rgb_model.blocks.11.attn.qkv.bias', 'model.language_model.rgb_model.blocks.11.attn.qkv.weight', 'model.language_model.rgb_model.blocks.11.ls1.gamma', 
'model.language_model.rgb_model.blocks.11.ls2.gamma', 'model.language_model.rgb_model.blocks.11.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.11.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.11.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.11.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.11.norm1.bias', 'model.language_model.rgb_model.blocks.11.norm1.weight', 'model.language_model.rgb_model.blocks.11.norm2.bias', 'model.language_model.rgb_model.blocks.11.norm2.weight', 'model.language_model.rgb_model.blocks.2.attn.proj.bias', 'model.language_model.rgb_model.blocks.2.attn.proj.weight', 'model.language_model.rgb_model.blocks.2.attn.qkv.bias', 'model.language_model.rgb_model.blocks.2.attn.qkv.weight', 'model.language_model.rgb_model.blocks.2.ls1.gamma', 'model.language_model.rgb_model.blocks.2.ls2.gamma', 'model.language_model.rgb_model.blocks.2.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.2.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.2.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.2.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.2.norm1.bias', 'model.language_model.rgb_model.blocks.2.norm1.weight', 'model.language_model.rgb_model.blocks.2.norm2.bias', 'model.language_model.rgb_model.blocks.2.norm2.weight', 'model.language_model.rgb_model.blocks.3.attn.proj.bias', 'model.language_model.rgb_model.blocks.3.attn.proj.weight', 'model.language_model.rgb_model.blocks.3.attn.qkv.bias', 'model.language_model.rgb_model.blocks.3.attn.qkv.weight', 'model.language_model.rgb_model.blocks.3.ls1.gamma', 'model.language_model.rgb_model.blocks.3.ls2.gamma', 'model.language_model.rgb_model.blocks.3.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.3.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.3.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.3.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.3.norm1.bias', 'model.language_model.rgb_model.blocks.3.norm1.weight', 
'model.language_model.rgb_model.blocks.3.norm2.bias', 'model.language_model.rgb_model.blocks.3.norm2.weight', 'model.language_model.rgb_model.blocks.4.attn.proj.bias', 'model.language_model.rgb_model.blocks.4.attn.proj.weight', 'model.language_model.rgb_model.blocks.4.attn.qkv.bias', 'model.language_model.rgb_model.blocks.4.attn.qkv.weight', 'model.language_model.rgb_model.blocks.4.ls1.gamma', 'model.language_model.rgb_model.blocks.4.ls2.gamma', 'model.language_model.rgb_model.blocks.4.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.4.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.4.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.4.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.4.norm1.bias', 'model.language_model.rgb_model.blocks.4.norm1.weight', 'model.language_model.rgb_model.blocks.4.norm2.bias', 'model.language_model.rgb_model.blocks.4.norm2.weight', 'model.language_model.rgb_model.blocks.5.attn.proj.bias', 'model.language_model.rgb_model.blocks.5.attn.proj.weight', 'model.language_model.rgb_model.blocks.5.attn.qkv.bias', 'model.language_model.rgb_model.blocks.5.attn.qkv.weight', 'model.language_model.rgb_model.blocks.5.ls1.gamma', 'model.language_model.rgb_model.blocks.5.ls2.gamma', 'model.language_model.rgb_model.blocks.5.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.5.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.5.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.5.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.5.norm1.bias', 'model.language_model.rgb_model.blocks.5.norm1.weight', 'model.language_model.rgb_model.blocks.5.norm2.bias', 'model.language_model.rgb_model.blocks.5.norm2.weight', 'model.language_model.rgb_model.blocks.6.attn.proj.bias', 'model.language_model.rgb_model.blocks.6.attn.proj.weight', 'model.language_model.rgb_model.blocks.6.attn.qkv.bias', 'model.language_model.rgb_model.blocks.6.attn.qkv.weight', 'model.language_model.rgb_model.blocks.6.ls1.gamma', 
'model.language_model.rgb_model.blocks.6.ls2.gamma', 'model.language_model.rgb_model.blocks.6.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.6.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.6.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.6.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.6.norm1.bias', 'model.language_model.rgb_model.blocks.6.norm1.weight', 'model.language_model.rgb_model.blocks.6.norm2.bias', 'model.language_model.rgb_model.blocks.6.norm2.weight', 'model.language_model.rgb_model.blocks.7.attn.proj.bias', 'model.language_model.rgb_model.blocks.7.attn.proj.weight', 'model.language_model.rgb_model.blocks.7.attn.qkv.bias', 'model.language_model.rgb_model.blocks.7.attn.qkv.weight', 'model.language_model.rgb_model.blocks.7.ls1.gamma', 'model.language_model.rgb_model.blocks.7.ls2.gamma', 'model.language_model.rgb_model.blocks.7.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.7.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.7.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.7.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.7.norm1.bias', 'model.language_model.rgb_model.blocks.7.norm1.weight', 'model.language_model.rgb_model.blocks.7.norm2.bias', 'model.language_model.rgb_model.blocks.7.norm2.weight', 'model.language_model.rgb_model.blocks.8.attn.proj.bias', 'model.language_model.rgb_model.blocks.8.attn.proj.weight', 'model.language_model.rgb_model.blocks.8.attn.qkv.bias', 'model.language_model.rgb_model.blocks.8.attn.qkv.weight', 'model.language_model.rgb_model.blocks.8.ls1.gamma', 'model.language_model.rgb_model.blocks.8.ls2.gamma', 'model.language_model.rgb_model.blocks.8.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.8.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.8.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.8.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.8.norm1.bias', 'model.language_model.rgb_model.blocks.8.norm1.weight', 
'model.language_model.rgb_model.blocks.8.norm2.bias', 'model.language_model.rgb_model.blocks.8.norm2.weight', 'model.language_model.rgb_model.blocks.9.attn.proj.bias', 'model.language_model.rgb_model.blocks.9.attn.proj.weight', 'model.language_model.rgb_model.blocks.9.attn.qkv.bias', 'model.language_model.rgb_model.blocks.9.attn.qkv.weight', 'model.language_model.rgb_model.blocks.9.ls1.gamma', 'model.language_model.rgb_model.blocks.9.ls2.gamma', 'model.language_model.rgb_model.blocks.9.mlp.fc1.bias', 'model.language_model.rgb_model.blocks.9.mlp.fc1.weight', 'model.language_model.rgb_model.blocks.9.mlp.fc2.bias', 'model.language_model.rgb_model.blocks.9.mlp.fc2.weight', 'model.language_model.rgb_model.blocks.9.norm1.bias', 'model.language_model.rgb_model.blocks.9.norm1.weight', 'model.language_model.rgb_model.blocks.9.norm2.bias', 'model.language_model.rgb_model.blocks.9.norm2.weight', 'model.language_model.rgb_model.cls_token', 'model.language_model.rgb_model.mask_token', 'model.language_model.rgb_model.norm.bias', 'model.language_model.rgb_model.norm.weight', 'model.language_model.rgb_model.patch_embed.proj.bias', 'model.language_model.rgb_model.patch_embed.proj.weight', 'model.language_model.rgb_model.pos_embed', 'model.language_model.rgb_resampler.decoder.layers.0.linear1.bias', 'model.language_model.rgb_resampler.decoder.layers.0.linear1.weight', 'model.language_model.rgb_resampler.decoder.layers.0.linear2.bias', 'model.language_model.rgb_resampler.decoder.layers.0.linear2.weight', 'model.language_model.rgb_resampler.decoder.layers.0.multihead_attn.in_proj_bias', 'model.language_model.rgb_resampler.decoder.layers.0.multihead_attn.in_proj_weight', 'model.language_model.rgb_resampler.decoder.layers.0.multihead_attn.out_proj.bias', 'model.language_model.rgb_resampler.decoder.layers.0.multihead_attn.out_proj.weight', 'model.language_model.rgb_resampler.decoder.layers.0.norm1.bias', 'model.language_model.rgb_resampler.decoder.layers.0.norm1.weight', 
'model.language_model.rgb_resampler.decoder.layers.0.norm2.bias', 'model.language_model.rgb_resampler.decoder.layers.0.norm2.weight', 'model.language_model.rgb_resampler.decoder.layers.0.norm3.bias', 'model.language_model.rgb_resampler.decoder.layers.0.norm3.weight', 'model.language_model.rgb_resampler.decoder.layers.0.self_attn.in_proj_bias', 'model.language_model.rgb_resampler.decoder.layers.0.self_attn.in_proj_weight', 'model.language_model.rgb_resampler.decoder.layers.0.self_attn.out_proj.bias', 'model.language_model.rgb_resampler.decoder.layers.0.self_attn.out_proj.weight', 'model.language_model.rgb_resampler.decoder.layers.1.linear1.bias', 'model.language_model.rgb_resampler.decoder.layers.1.linear1.weight', 'model.language_model.rgb_resampler.decoder.layers.1.linear2.bias', 'model.language_model.rgb_resampler.decoder.layers.1.linear2.weight', 'model.language_model.rgb_resampler.decoder.layers.1.multihead_attn.in_proj_bias', 'model.language_model.rgb_resampler.decoder.layers.1.multihead_attn.in_proj_weight', 'model.language_model.rgb_resampler.decoder.layers.1.multihead_attn.out_proj.bias', 'model.language_model.rgb_resampler.decoder.layers.1.multihead_attn.out_proj.weight', 'model.language_model.rgb_resampler.decoder.layers.1.norm1.bias', 'model.language_model.rgb_resampler.decoder.layers.1.norm1.weight', 'model.language_model.rgb_resampler.decoder.layers.1.norm2.bias', 'model.language_model.rgb_resampler.decoder.layers.1.norm2.weight', 'model.language_model.rgb_resampler.decoder.layers.1.norm3.bias', 'model.language_model.rgb_resampler.decoder.layers.1.norm3.weight', 'model.language_model.rgb_resampler.decoder.layers.1.self_attn.in_proj_bias', 'model.language_model.rgb_resampler.decoder.layers.1.self_attn.in_proj_weight', 'model.language_model.rgb_resampler.decoder.layers.1.self_attn.out_proj.bias', 'model.language_model.rgb_resampler.decoder.layers.1.self_attn.out_proj.weight', 'model.language_model.rgb_resampler.decoder.layers.2.linear1.bias', 
'model.language_model.rgb_resampler.decoder.layers.2.linear1.weight', 'model.language_model.rgb_resampler.decoder.layers.2.linear2.bias', 'model.language_model.rgb_resampler.decoder.layers.2.linear2.weight', 'model.language_model.rgb_resampler.decoder.layers.2.multihead_attn.in_proj_bias', 'model.language_model.rgb_resampler.decoder.layers.2.multihead_attn.in_proj_weight', 'model.language_model.rgb_resampler.decoder.layers.2.multihead_attn.out_proj.bias', 'model.language_model.rgb_resampler.decoder.layers.2.multihead_attn.out_proj.weight', 'model.language_model.rgb_resampler.decoder.layers.2.norm1.bias', 'model.language_model.rgb_resampler.decoder.layers.2.norm1.weight', 'model.language_model.rgb_resampler.decoder.layers.2.norm2.bias', 'model.language_model.rgb_resampler.decoder.layers.2.norm2.weight', 'model.language_model.rgb_resampler.decoder.layers.2.norm3.bias', 'model.language_model.rgb_resampler.decoder.layers.2.norm3.weight', 'model.language_model.rgb_resampler.decoder.layers.2.self_attn.in_proj_bias', 'model.language_model.rgb_resampler.decoder.layers.2.self_attn.in_proj_weight', 'model.language_model.rgb_resampler.decoder.layers.2.self_attn.out_proj.bias', 'model.language_model.rgb_resampler.decoder.layers.2.self_attn.out_proj.weight', 'model.language_model.rgb_resampler.query_pos', 'model.language_model.rgb_resampler.query_tokens', 'model.language_model.rgb_resampler.visual_proj.bias', 'model.language_model.rgb_resampler.visual_proj.weight', 'model.language_model.traj_dit.model.caption_projection.linear_1.bias', 'model.language_model.traj_dit.model.caption_projection.linear_1.weight', 'model.language_model.traj_dit.model.caption_projection.linear_2.bias', 'model.language_model.traj_dit.model.caption_projection.linear_2.weight', 'model.language_model.traj_dit.model.layers.0.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.0.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.0.attn1.norm_q.bias', 
'model.language_model.traj_dit.model.layers.0.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.0.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.0.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.0.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.0.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.0.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.0.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.0.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.0.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.0.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.0.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.0.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.0.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.0.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.0.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.0.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.0.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.0.gate', 'model.language_model.traj_dit.model.layers.0.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.0.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.0.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.0.norm1_context.weight', 'model.language_model.traj_dit.model.layers.0.norm2.weight', 'model.language_model.traj_dit.model.layers.1.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.1.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.1.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.1.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.1.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.1.attn1.to_q.weight', 
'model.language_model.traj_dit.model.layers.1.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.1.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.1.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.1.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.1.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.1.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.1.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.1.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.1.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.1.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.1.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.1.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.1.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.1.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.1.gate', 'model.language_model.traj_dit.model.layers.1.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.1.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.1.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.1.norm1_context.weight', 'model.language_model.traj_dit.model.layers.1.norm2.weight', 'model.language_model.traj_dit.model.layers.10.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.10.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.10.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.10.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.10.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.10.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.10.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.10.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.10.attn2.norm_k.weight', 
'model.language_model.traj_dit.model.layers.10.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.10.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.10.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.10.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.10.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.10.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.10.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.10.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.10.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.10.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.10.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.10.gate', 'model.language_model.traj_dit.model.layers.10.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.10.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.10.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.10.norm1_context.weight', 'model.language_model.traj_dit.model.layers.10.norm2.weight', 'model.language_model.traj_dit.model.layers.11.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.11.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.11.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.11.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.11.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.11.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.11.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.11.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.11.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.11.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.11.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.11.attn2.to_k.weight', 
'model.language_model.traj_dit.model.layers.11.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.11.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.11.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.11.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.11.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.11.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.11.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.11.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.11.gate', 'model.language_model.traj_dit.model.layers.11.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.11.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.11.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.11.norm1_context.weight', 'model.language_model.traj_dit.model.layers.11.norm2.weight', 'model.language_model.traj_dit.model.layers.2.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.2.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.2.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.2.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.2.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.2.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.2.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.2.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.2.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.2.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.2.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.2.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.2.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.2.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.2.attn2.to_v.weight', 
'model.language_model.traj_dit.model.layers.2.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.2.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.2.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.2.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.2.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.2.gate', 'model.language_model.traj_dit.model.layers.2.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.2.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.2.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.2.norm1_context.weight', 'model.language_model.traj_dit.model.layers.2.norm2.weight', 'model.language_model.traj_dit.model.layers.3.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.3.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.3.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.3.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.3.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.3.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.3.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.3.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.3.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.3.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.3.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.3.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.3.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.3.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.3.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.3.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.3.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.3.feed_forward.linear_3.weight', 
'model.language_model.traj_dit.model.layers.3.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.3.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.3.gate', 'model.language_model.traj_dit.model.layers.3.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.3.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.3.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.3.norm1_context.weight', 'model.language_model.traj_dit.model.layers.3.norm2.weight', 'model.language_model.traj_dit.model.layers.4.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.4.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.4.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.4.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.4.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.4.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.4.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.4.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.4.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.4.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.4.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.4.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.4.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.4.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.4.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.4.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.4.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.4.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.4.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.4.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.4.gate', 'model.language_model.traj_dit.model.layers.4.norm1.linear.bias', 
'model.language_model.traj_dit.model.layers.4.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.4.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.4.norm1_context.weight', 'model.language_model.traj_dit.model.layers.4.norm2.weight', 'model.language_model.traj_dit.model.layers.5.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.5.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.5.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.5.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.5.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.5.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.5.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.5.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.5.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.5.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.5.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.5.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.5.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.5.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.5.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.5.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.5.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.5.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.5.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.5.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.5.gate', 'model.language_model.traj_dit.model.layers.5.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.5.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.5.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.5.norm1_context.weight', 
'model.language_model.traj_dit.model.layers.5.norm2.weight', 'model.language_model.traj_dit.model.layers.6.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.6.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.6.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.6.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.6.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.6.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.6.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.6.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.6.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.6.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.6.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.6.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.6.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.6.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.6.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.6.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.6.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.6.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.6.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.6.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.6.gate', 'model.language_model.traj_dit.model.layers.6.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.6.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.6.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.6.norm1_context.weight', 'model.language_model.traj_dit.model.layers.6.norm2.weight', 'model.language_model.traj_dit.model.layers.7.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.7.attn1.norm_k.weight', 
'model.language_model.traj_dit.model.layers.7.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.7.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.7.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.7.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.7.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.7.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.7.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.7.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.7.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.7.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.7.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.7.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.7.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.7.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.7.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.7.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.7.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.7.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.7.gate', 'model.language_model.traj_dit.model.layers.7.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.7.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.7.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.7.norm1_context.weight', 'model.language_model.traj_dit.model.layers.7.norm2.weight', 'model.language_model.traj_dit.model.layers.8.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.8.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.8.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.8.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.8.attn1.to_k.weight', 
'model.language_model.traj_dit.model.layers.8.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.8.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.8.attn2.norm_k.bias', 'model.language_model.traj_dit.model.layers.8.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.8.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.8.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.8.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.8.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.8.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.8.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.8.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.8.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.8.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.8.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.8.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.8.gate', 'model.language_model.traj_dit.model.layers.8.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.8.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.8.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.8.norm1_context.weight', 'model.language_model.traj_dit.model.layers.8.norm2.weight', 'model.language_model.traj_dit.model.layers.9.attn1.norm_k.bias', 'model.language_model.traj_dit.model.layers.9.attn1.norm_k.weight', 'model.language_model.traj_dit.model.layers.9.attn1.norm_q.bias', 'model.language_model.traj_dit.model.layers.9.attn1.norm_q.weight', 'model.language_model.traj_dit.model.layers.9.attn1.to_k.weight', 'model.language_model.traj_dit.model.layers.9.attn1.to_q.weight', 'model.language_model.traj_dit.model.layers.9.attn1.to_v.weight', 'model.language_model.traj_dit.model.layers.9.attn2.norm_k.bias', 
'model.language_model.traj_dit.model.layers.9.attn2.norm_k.weight', 'model.language_model.traj_dit.model.layers.9.attn2.norm_q.bias', 'model.language_model.traj_dit.model.layers.9.attn2.norm_q.weight', 'model.language_model.traj_dit.model.layers.9.attn2.to_k.weight', 'model.language_model.traj_dit.model.layers.9.attn2.to_out.0.weight', 'model.language_model.traj_dit.model.layers.9.attn2.to_q.weight', 'model.language_model.traj_dit.model.layers.9.attn2.to_v.weight', 'model.language_model.traj_dit.model.layers.9.feed_forward.linear_1.weight', 'model.language_model.traj_dit.model.layers.9.feed_forward.linear_2.weight', 'model.language_model.traj_dit.model.layers.9.feed_forward.linear_3.weight', 'model.language_model.traj_dit.model.layers.9.ffn_norm1.weight', 'model.language_model.traj_dit.model.layers.9.ffn_norm2.weight', 'model.language_model.traj_dit.model.layers.9.gate', 'model.language_model.traj_dit.model.layers.9.norm1.linear.bias', 'model.language_model.traj_dit.model.layers.9.norm1.linear.weight', 'model.language_model.traj_dit.model.layers.9.norm1.norm.weight', 'model.language_model.traj_dit.model.layers.9.norm1_context.weight', 'model.language_model.traj_dit.model.layers.9.norm2.weight', 'model.language_model.traj_dit.model.norm_out.linear_1.bias', 'model.language_model.traj_dit.model.norm_out.linear_1.weight', 'model.language_model.traj_dit.model.norm_out.linear_2.bias', 'model.language_model.traj_dit.model.norm_out.linear_2.weight', 'model.language_model.traj_dit.model.patch_embedder.proj.bias', 'model.language_model.traj_dit.model.patch_embedder.proj.weight', 'model.language_model.traj_dit.model.time_caption_embed.caption_embedder.0.bias', 'model.language_model.traj_dit.model.time_caption_embed.caption_embedder.0.weight', 'model.language_model.traj_dit.model.time_caption_embed.caption_embedder.1.bias', 'model.language_model.traj_dit.model.time_caption_embed.caption_embedder.1.weight', 
'model.language_model.traj_dit.model.time_caption_embed.timestep_embedder.linear_1.bias', 'model.language_model.traj_dit.model.time_caption_embed.timestep_embedder.linear_1.weight', 'model.language_model.traj_dit.model.time_caption_embed.timestep_embedder.linear_2.bias', 'model.language_model.traj_dit.model.time_caption_embed.timestep_embedder.linear_2.weight']

  • This IS expected if you are initializing InternVLAN1ForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing InternVLAN1ForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    2026-01-05 10:05:26 [23,968ms] [Error] [omni.kit.app._impl] [py stderr]: The image processor of type Qwen2VLImageProcessor is now loaded as a fast processor by default, even if the model checkpoint was saved with a slow processor. This is a breaking change and may produce slightly different outputs. To continue using the slow processor, instantiate this class with use_fast=False. Note that this behavior will be extended to all models in a future release.
    [2026-01-05 10:05:27,790][INFO] /workspace/CODE_11_20/InternNav/InternNav/internnav/evaluator/vln_distributed_evaluator.py[line:59] -: start eval dataset: test, total_path: 1347
    [2026-01-05 10:05:27,791][INFO] [TIME] Env Init time: 27.02s
    [2026-01-05 10:05:27,791][INFO] /workspace/CODE_11_20/InternNav/InternNav/internnav/evaluator/vln_distributed_evaluator.py[line:70] -: [TIME] Env Init time: 27.02s
    [10:05:27.791703] --- VlnMultiEvaluator start ---
    [2026-01-05 10:05:27,791][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:274] -: ===================== init reset =====================
    2026-01-05 10:05:27 [24,917ms] [Warning] [omni.isaac.core.utils.torch.maths] omni.isaac.core.utils.torch.maths has been deprecated in favor of isaacsim.core.utils.torch.maths. Please update your code accordingly.
    2026-01-05 10:05:27 [24,917ms] [Warning] [omni.isaac.core.utils.torch.rotations] omni.isaac.core.utils.torch.rotations has been deprecated in favor of isaacsim.core.utils.torch.rotations. Please update your code accordingly.
    2026-01-05 10:05:27 [24,917ms] [Warning] [omni.isaac.core.utils.torch.tensor] omni.isaac.core.utils.torch.tensor has been deprecated in favor of isaacsim.core.utils.torch.tensor. Please update your code accordingly.
    2026-01-05 10:05:27 [24,917ms] [Warning] [omni.isaac.core.utils.torch.transformations] omni.isaac.core.utils.torch.transformations has been deprecated in favor of isaacsim.core.utils.torch.transformations. Please update your code accordingly.
    [10:05:27.945905] Setting seed: 0
    [10:05:27.946476] Setting seed: 0
    [2026-01-05 10:05:27,947][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/task/task.py[line:65] -: env 0 at [0.0, 0.0, 0]
    [2026-01-05 10:05:27,947][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/robots/h1.py[line:21] -: h1 h1_0: position : [17.2064991 2.06001997 1.22162801]
    [2026-01-05 10:05:27,948][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/robots/h1.py[line:22] -: h1 h1_0: orientation : [0.96592583 0. 0. 0.25881905]
    [2026-01-05 10:05:27,948][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/robots/h1.py[line:26] -: h1 h1_0: usd_path : data/Embodiments/vln-pe/h1/h1_internvla.usd
    [2026-01-05 10:05:27,948][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/robots/h1.py[line:27] -: h1 h1_0: config.prim_path : /World/env_0/robots/h1
    2026-01-05 10:05:27 [24,920ms] [Warning] [omni.isaac.core.utils.stage] omni.isaac.core.utils.stage has been deprecated in favor of isaacsim.core.utils.stage. Please update your code accordingly.
    [2026-01-05 10:05:28,079][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/pelvis
    2026-01-05 10:05:28 [25,051ms] [Warning] [omni.isaac.core.prims.base_sensor] omni.isaac.core.prims.base_sensor has been deprecated in favor of isaacsim.core.api.sensors.base_sensor. Please update your code accordingly.
    2026-01-05 10:05:28 [25,051ms] [Warning] [omni.isaac.core.prims.geometry_prim] omni.isaac.core.prims.geometry_prim.GeometryPrim has been deprecated in favor of isaacsim.core.prims.SingleGeometryPrim. Please update your code accordingly.
    2026-01-05 10:05:28 [25,052ms] [Warning] [omni.isaac.core.prims.geometry_prim_view] omni.isaac.core.prims.geometry_prim_view.GeometryPrimView has been deprecated in favor of isaacsim.core.prims.GeometryPrim Please update your code accordingly.
    2026-01-05 10:05:28 [25,052ms] [Warning] [omni.isaac.core.prims.rigid_contact_view] omni.isaac.core.prims.rigid_contact_view has been deprecated in favor of isaacsim.core.api.sensors.rigid_contact_view. Please update your code accordingly.
    2026-01-05 10:05:28 [25,052ms] [Warning] [omni.isaac.core.prims.rigid_prim] omni.isaac.core.prims.rigid_prim.RigidPrim has been deprecated in favor of isaacsim.core.prims.SingleRigidPrim. Please update your code accordingly.
    2026-01-05 10:05:28 [25,052ms] [Warning] [omni.isaac.core.prims.rigid_prim_view] omni.isaac.core.prims.rigid_prim_view.RigidPrimView has been deprecated in favor of isaacsim.core.prims.RigidPrim. Please update your code accordingly.
    2026-01-05 10:05:28 [25,052ms] [Warning] [omni.isaac.core.prims.soft.cloth_prim] omni.isaac.core.prims.soft.cloth_prim.ClothPrim has been deprecated in favor of isaacsim.core.prims.SingleClothPrim. Please update your code accordingly.
    2026-01-05 10:05:28 [25,053ms] [Warning] [omni.isaac.core.prims.soft.particle_system] omni.isaac.core.prims.soft.particle_system.ParticleSystem has been deprecated in favor of isaacsim.core.prims.SingleParticleSystem. Please update your code accordingly.
    2026-01-05 10:05:28 [25,053ms] [Warning] [omni.isaac.core.prims.soft.particle_system_view] omni.isaac.core.prims.soft.particle_system_view.ParticleSystemView has been deprecated in favor of isaacsim.core.prims.ParticleSystem. Please update your code accordingly.
    2026-01-05 10:05:28 [25,053ms] [Warning] [omni.isaac.core.prims.xform_prim] omni.isaac.core.prims.xform_prim.XFormPrim has been deprecated in favor of isaacsim.core.prims.SingleXFormPrim. Please update your code accordingly.
    2026-01-05 10:05:28 [25,053ms] [Warning] [omni.isaac.core.prims.xform_prim_view] omni.isaac.core.prims.xform_prim_view.XFormPrimView has been deprecated in favor of isaacsim.core.prims.XFormPrim. Please update your code accordingly.
    [2026-01-05 10:05:28,090][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_hip_yaw_link
    [2026-01-05 10:05:28,097][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_hip_roll_link
    [2026-01-05 10:05:28,108][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_hip_pitch_link
    [2026-01-05 10:05:28,115][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_knee_link
    [2026-01-05 10:05:28,122][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_ankle_link
    [2026-01-05 10:05:28,129][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_hip_yaw_link
    [2026-01-05 10:05:28,136][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_hip_roll_link
    [2026-01-05 10:05:28,146][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_hip_pitch_link
    [2026-01-05 10:05:28,153][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_knee_link
    [2026-01-05 10:05:28,161][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_ankle_link
    [2026-01-05 10:05:28,168][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/torso_link
    [2026-01-05 10:05:28,176][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/imu_link
    [2026-01-05 10:05:28,183][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_shoulder_pitch_link
    [2026-01-05 10:05:28,191][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_shoulder_roll_link
    [2026-01-05 10:05:28,199][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_shoulder_yaw_link
    [2026-01-05 10:05:28,207][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/left_elbow_link
    [2026-01-05 10:05:28,214][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/logo_link
    [2026-01-05 10:05:28,219][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_shoulder_pitch_link
    [2026-01-05 10:05:28,225][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_shoulder_roll_link
    [2026-01-05 10:05:28,231][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_shoulder_yaw_link
    [2026-01-05 10:05:28,238][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:81] -: [create_rigid_bodies] found rigid body at path: /World/env_0/robots/h1/right_elbow_link
    2026-01-05 10:05:28 [25,260ms] [Error] [omni.kit.app._impl] [py stderr]: /workspace/CODE_11_20/isaacsim4_5/exts/omni.isaac.ml_archive/pip_prebundle/torch/functional.py:534: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3595.)
    return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
    [2026-01-05 10:05:28,289][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/controller.py[line:138] -: [create_controllers] vln_move_by_speed loaded
    [2026-01-05 10:05:28,307][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/controller.py[line:138] -: [create_controllers] stand_still loaded
    [2026-01-05 10:05:28,324][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/controller.py[line:138] -: [create_controllers] move_by_discrete loaded
    [2026-01-05 10:05:28,324][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/controller.py[line:138] -: [create_controllers] move_by_flash loaded
    [2026-01-05 10:05:28,325][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/sensor/sensor.py[line:104] -: [create_sensors] pano_camera_0 loaded
    [2026-01-05 10:05:28,325][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/sensor/sensor.py[line:104] -: [create_sensors] topdown_camera_500 loaded
    [2026-01-05 10:05:28,325][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/sensor/sensor.py[line:104] -: [create_sensors] tp_pointcloud loaded
    [2026-01-05 10:05:28,325][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/robot/robot.py[line:241] -: [create_robots] h1_0 loaded
    2026-01-05 10:05:28 [25,363ms] [Warning] [omni.physx.plugin] The rigid body at /World/env_0/robots/h1/imu_link has a possibly invalid inertia tensor of {1.0, 1.0, 1.0} and a negative mass, small sphere approximated inertia was used. Either specify correct values in the mass properties, or add collider(s) to any shape(s) that you wish to automatically compute mass properties for.
    2026-01-05 10:05:28 [25,363ms] [Warning] [omni.physx.plugin] The rigid body at /World/env_0/robots/h1/logo_link has a possibly invalid inertia tensor of {1.0, 1.0, 1.0} and a negative mass, small sphere approximated inertia was used. Either specify correct values in the mass properties, or add collider(s) to any shape(s) that you wish to automatically compute mass properties for.
    2026-01-05 10:05:28 [25,363ms] [Warning] [omni.physx.plugin] Detected an articulation at /World/env_0/robots/h1 with more than 4 velocity iterations being added to a TGS scene.The related behavior changed recently, please consult the changelog. This warning will only print once.
    [2026-01-05 10:05:28,580][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/sensors/rep_camera.py[line:58] -: ================ create camera ===============
    [2026-01-05 10:05:28,580][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/sensors/rep_camera.py[line:59] -: camera_prim_path: /World/env_0/robots/h1/logo_link/Camera_pointcloud
    [2026-01-05 10:05:28,580][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/sensors/rep_camera.py[line:60] -: name : tp_pointcloud
    [2026-01-05 10:05:28,580][DEBUG] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia_extension/sensors/rep_camera.py[line:61] -: resolution : (64, 64)
    [2026-01-05 10:05:28,637][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:435] -: ===================== episodes ========================
    [2026-01-05 10:05:28,638][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:437] -: Next episode: 0 at 0
    [2026-01-05 10:05:28,638][INFO] /workspace/miniforge3/envs/isa_4_5_internnav/lib/python3.10/site-packages/internutopia/core/runner.py[line:438] -: ======================================================
    [2026-01-05 10:05:28,638][INFO] /workspace/CODE_11_20/InternNav/InternNav/internnav/utils/progress_log_multi_util.py[line:107] -: start sampling trajectory_id: 7192_1809
    [2026-01-05 10:05:28,638][INFO] start new episode!
    [2026-01-05 10:05:28,638][INFO] /workspace/CODE_11_20/InternNav/InternNav/internnav/evaluator/vln_distributed_evaluator.py[line:281] -: start new episode!
    2026-01-05 10:05:28 [25,610ms] [Warning] [omni.isaac.core.utils.numpy.maths] omni.isaac.core.utils.numpy.maths has been deprecated in favor of isaacsim.core.utils.numpy.maths. Please update your code accordingly.
    2026-01-05 10:05:28 [25,610ms] [Warning] [omni.isaac.core.utils.numpy.rotations] omni.isaac.core.utils.numpy.rotations has been deprecated in favor of isaacsim.core.utils.numpy.rotations. Please update your code accordingly.
    2026-01-05 10:05:28 [25,611ms] [Warning] [omni.isaac.core.utils.numpy.tensor] omni.isaac.core.utils.numpy.tensor has been deprecated in favor of isaacsim.core.utils.numpy.tensor. Please update your code accordingly.
    2026-01-05 10:05:28 [25,611ms] [Warning] [omni.isaac.core.utils.numpy.transformations] omni.isaac.core.utils.numpy.transformations has been deprecated in favor of isaacsim.core.utils.numpy.transformations. Please update your code accordingly.
    2026-01-05 10:05:31 [28,729ms] [Warning] [omni.syntheticdata.plugin] OgnSdPostRenderVarToHost : rendervar copy from texture directly to host buffer is counter-performant. Please use copy from texture to device buffer first.
    2026-01-05 10:05:33 [30,821ms] [Warning] [omni.isaac.core.utils.rotations] omni.isaac.core.utils.rotations has been deprecated in favor of isaacsim.core.utils.rotations. Please update your code accordingly.
    [10:05:33.850136] ======== Infer S2 at step 0========
    2026-01-05 10:05:34 [31,371ms] [Error] [omni.kit.app._impl] [py stderr]: The following generation flags are not valid and may be ignored: ['temperature', 'top_p', 'top_k']. Set TRANSFORMERS_VERBOSITY=info for more details.
    [10:05:34.533117] s2 infer error: 'InternVLAN1Model' object has no attribute 'embed_tokens'
    [10:05:34.562479] s2 infer error: 'InternVLAN1Model' object has no attribute 'embed_tokens'`
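A possible workaround for the final `'InternVLAN1Model' object has no attribute 'embed_tokens'` error, sketched under an assumption: after the transformers upgrade, some model classes stopped exposing a public `embed_tokens` attribute and only provide the embedding through `get_input_embeddings()`. If `InternVLAN1Model` follows that pattern (unverified — the helper name below is mine, not from InternNav), aliasing the attribute before S2 inference may unblock it:

```python
def patch_embed_tokens(model):
    """Alias `embed_tokens` onto a model object that only exposes
    get_input_embeddings() (a standard transformers PreTrainedModel method).

    This is a compatibility shim sketch, not an InternNav API: apply it to
    the loaded model before running inference, e.g.
        patch_embed_tokens(agent.model)  # `agent.model` is illustrative
    """
    if not hasattr(model, "embed_tokens"):
        # get_input_embeddings() returns the token-embedding module,
        # which is what older code reached via `model.embed_tokens`.
        model.embed_tokens = model.get_input_embeddings()
    return model
```

The cleaner fix, if it works for the rest of the stack, is to pin transformers to the version the InternNav release was tested against instead of monkey-patching, since the earlier `Qwen2_5_VLConfig` and `apply_chunking_to_forward` import errors point to the same version mismatch.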
