---
## 🔔 News
🎊 We plan to integrate [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything), [Yolo-World](https://github.com/AILab-CVC/YOLO-World), and [OWLv2](https://huggingface.co/docs/transformers/en/model_doc/owlv2) into this mapping pipeline for **online open-world semantic mapping**.
Additionally, we will support [Habitat-Lab](https://github.com/facebookresearch/habitat-lab) to advance developments in embodied AI. Stay tuned for our upcoming feature releases and don’t forget to give us a star!
🔥 **27/12/2024** Released experiment configs and launch files.
🔥 **26/11/2024** Released main algorithms!
🤗 **15/10/2024** Presented at [iros2024-abudhabi](https://iros2024-abudhabi.org/)
🚀 **30/06/2024** Accepted by IROS 2024!
📜 **26/03/2024** arXiv version available: [paper](https://arxiv.org/abs/2403.16880)
## 🎈 Getting Started
### Step 1: Dataset and Pre-processing
**SemanticKITTI**: Prepare the dataset in the desired format, following [dataset](panoptic_mapping_utils/src/kitti_dataset/README.md).
**flat**: Download the dataset from the [ASL Datasets](https://projects.asl.ethz.ch/datasets/doku.php?id=panoptic_mapping).
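Before launching, it can help to sanity-check that a prepared sequence folder is where the pipeline expects it. The snippet below is a minimal sketch, not part of the project: the folder names (`velodyne`, `image_2`, `labels`) assume the standard SemanticKITTI odometry layout, and the example path is only an illustration — adjust both to match the output of the preprocessing step above.

```python
# Sketch: sanity-check a SemanticKITTI sequence folder before launching.
# Folder names assume the standard odometry layout; the preprocessing step
# in the linked README may add or rename folders (assumption).
from pathlib import Path


def check_sequence(seq_dir, required=("velodyne", "image_2", "labels")):
    """Return the list of required sub-folders missing from seq_dir."""
    root = Path(seq_dir)
    return [d for d in required if not (root / d).is_dir()]


if __name__ == "__main__":
    # Example path only — point this at your own dataset location.
    missing = check_sequence("/dataset/KITTI/dataset/sequences/07")
    if missing:
        print("missing folders:", ", ".join(missing))
    else:
        print("sequence layout OK")
```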
### Step 3: Modify config and launch file to run an experiment

The launch files are stored at [panoptic_mapping_ros/launch/iros_exp](panoptic_mapping_ros/launch/iros_exp).

- Update `base_path` and `config` in the launch file. For example, modify `<arg name="base_path" default="/dataset/KITTI/dataset/sequences/07"/>` and `<arg name="config" default="iros_exp/kitti/iros_exp_kitti_detectron_07"/>`.
- Note: For long sequences in SemanticKITTI, adjust `max_frames` in the launch file to limit the number of frames based on your device's memory.
- Update the following in the config file:
  - `save_map_path_when_finished`
  - `label_info_print_path`
  - `save_mesh_folder_path`
  - `submap_info_path`
  - `Tr`
  - `P2`
  - `labels: file_name`
  - `visualization: colormap_print_path`
- Note: The `Tr` and `P2` parameters should be modified based on the extrinsics calculated in the [data preprocess step](panoptic_mapping_utils/src/kitti_dataset/README.md#-📌-data-process).
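In the standard KITTI odometry release, `Tr` (LiDAR-to-camera extrinsics) and `P2` (left color camera projection) appear as lines in each sequence's `calib.txt`. As a convenience, a small parser like the one below can extract them for pasting into the config — this is a sketch assuming the usual `KEY: v1 v2 ...` line format; the file path in the usage comment is an illustration only.

```python
# Sketch: read calibration entries such as `Tr` and `P2` from a KITTI
# odometry calib.txt. Assumes the standard "KEY: v1 v2 ..." line format
# of the KITTI release (assumption — verify against your files).
def read_calib(path):
    """Return a dict mapping each calib key to its list of float values."""
    calib = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, values = line.split(":", 1)
            calib[key.strip()] = [float(v) for v in values.split()]
    return calib


# Usage (path is an illustration — adjust to your dataset location):
# calib = read_calib("/dataset/KITTI/dataset/sequences/07/calib.txt")
# print(calib["Tr"], calib["P2"])
```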