# Evaluation

This document describes how to evaluate models in **InternNav**.

## InternVLA-N1 (Dual System)

Model weights of InternVLA-N1 (Dual System) can be downloaded from [InternVLA-N1-DualVLN](https://huggingface.co/InternRobotics/InternVLA-N1-DualVLN) and [InternVLA-N1-w-NavDP](https://huggingface.co/InternRobotics/InternVLA-N1-w-NavDP).

---

### Evaluation on Isaac Sim

Before evaluation, download the robot assets from [InternUTopiaAssets](https://huggingface.co/datasets/InternRobotics/Embodiments) and move them to the `data/` directory.
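One possible way to fetch the assets is with the Hugging Face CLI (a sketch, not the repo's official procedure; the local directory name `data/Embodiments` is an assumption, and any tool that mirrors the dataset repo works equally well):

```shell
# Sketch: fetch the robot assets dataset repo and place it under data/.
# `huggingface-cli` is installed with `pip install -U huggingface_hub`.
fetch_assets() {
  local repo="InternRobotics/Embodiments"
  local dest="data/Embodiments"        # target path is an assumption
  mkdir -p "$(dirname "$dest")"
  huggingface-cli download "$repo" --repo-type dataset --local-dir "$dest"
}
# fetch_assets   # uncomment to start the (large) download
```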

[UPDATE] We now support running the local model and Isaac Sim in one process. To evaluate on a single GPU:

```bash
# (command elided in this excerpt)
```

The simulation can be visualized by setting `vis_output=True` in `eval_cfg`.

Model weights of InternVLA-N1 (System2) can be downloaded from [InternVLA-N1-System2](https://huggingface.co/InternRobotics/InternVLA-N1-System2).

Currently we only support evaluating a single System 2 on Habitat:

```python
# (beginning of the snippet elided in this excerpt)
            "mode": "system2",  # inference mode: dual_system or system2
            "model_path": "checkpoints/<s2_checkpoint>",  # path to model checkpoint
        }
    )
)
```
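The snippet above is truncated in this excerpt; only the `mode` and `model_path` fields are visible. As an illustration of how such a config might be sanity-checked before launching evaluation, here is a minimal sketch (the helper name and validation logic are our own, not InternNav's API):

```python
# Hypothetical helper -- not part of InternNav; shown only to illustrate
# the two inference modes the config accepts.
VALID_MODES = {"dual_system", "system2"}

def make_eval_cfg(mode: str, model_path: str) -> dict:
    """Build a minimal evaluation config and validate the inference mode."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}, got {mode!r}")
    return {"mode": mode, "model_path": model_path}

cfg = make_eval_cfg("system2", "checkpoints/<s2_checkpoint>")
print(cfg["mode"])  # system2
```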

For multi-GPU inference, we currently only support running on SLURM:

```bash
./scripts/eval/bash/eval_system2.sh
```

## VN Systems (System 1)

We support the evaluation of diverse System-1 baselines separately in [NavDP](https://github.com/InternRobotics/NavDP/tree/navdp_benchmark) to make them easy to use and deploy.

To install the environment, we provide a quick start below:

Currently, we only support the training of small VLN models (CMA, RDP, Seq2Seq) in this repo. For the training of LLM-based VLN models (Navid, StreamVLN, etc.), please refer to [StreamVLN](https://github.com/OpenRobotLab/StreamVLN) for training details.

### 2. Joint Training for InternVLA-N1 (Dual System)

After completing the training of **InternVLA-N1 (System2)**, joint training is supported with a pixel-goal navigation System 1, using either the **NavDP** or **NextDiT** architecture.

- **InternVLA-N1 (Dual System) w/ NavDP**: preserves **NavDP**'s model design and uses **RGB-D** input.
- **InternVLA-N1 (Dual System) DualVLN**: uses only **RGB** input, resulting in a smaller model footprint.
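The distinction between the two variants can be summarized as data (an illustrative sketch, not a repo API; the pairing of DualVLN with **NextDiT** is inferred from the paragraph above and should be checked against the model cards):

```python
# Illustrative summary of the two dual-system variants described above.
VARIANTS = {
    "w/ NavDP": {"system1": "NavDP", "inputs": ("rgb", "depth")},
    "DualVLN": {"system1": "NextDiT", "inputs": ("rgb",)},  # pairing inferred
}

# DualVLN drops the depth stream, hence the smaller model footprint.
assert "depth" not in VARIANTS["DualVLN"]["inputs"]
```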