Commit ce92eb5

Merge pull request #90 from NVIDIA-ISAAC-ROS/release-dp3.1
Isaac ROS 0.31.0 (DP3.1)
2 parents 2898053 + dd72084


44 files changed: +1306 −488 lines

.gitattributes

Lines changed: 1 addition & 4 deletions
@@ -22,7 +22,4 @@
# ROS Bags
**/resources/**/*.db3 filter=lfs diff=lfs merge=lfs -text
**/resources/**/*.yaml filter=lfs diff=lfs merge=lfs -text
-isaac_ros_visual_slam/test/test_cases/rosbags/** filter=lfs diff=lfs merge=lfs -text
-
-# Elbrus Library
-isaac_ros_visual_slam/elbrus/** filter=lfs diff=lfs merge=lfs -text
+isaac_ros_visual_slam/test/test_cases/rosbags/** filter=lfs diff=lfs merge=lfs -text
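
As a hedged aside (not part of the commit): after pulling this change into a local checkout, the effect of the updated patterns can be confirmed with standard Git attribute queries. The file names below are made-up examples.

```bash
# Rosbag test data is still routed through Git LFS by the remaining pattern.
git check-attr filter -- isaac_ros_visual_slam/test/test_cases/rosbags/example.db3

# The removed Elbrus library pattern no longer applies the LFS filter to this path.
git check-attr filter -- isaac_ros_visual_slam/elbrus/example.so
```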

README.md

Lines changed: 49 additions & 43 deletions
Large diffs are not rendered by default.
Lines changed: 8 additions & 8 deletions
@@ -1,8 +1,8 @@
-# Elbrus SLAM
+# cuVSLAM

-All SLAM-related operations work in parallel to visual odometry in a separate thread. Input images get copied into GPU and then Elbrus starts tracking.
+All SLAM-related operations work in parallel to visual odometry in a separate thread. Input images get copied into GPU and then cuVSLAM starts tracking.

-Elbrus uses the following:
+cuVSLAM uses the following:

* 2D features on the input images
* Landmarks: The patch of pixels in the source images with coordinates of the feature associated with the position of the camera from where that feature is visible.
@@ -12,13 +12,13 @@ Landmarks and PoseGraph are stored in a data structure that doesn't increase its

Imagine a robot moving around and returning to the same place. Since odometry always has some accumulated drift, we see a deviation in trajectory from the ground truth. SLAM can be used to solve this problem.

-During the robot's motion, SLAM stores all landmarks and it continuously verifies if each landmark has been seen before. When Elbrus recognizes several landmarks, it determines their position from the data structure.
+During the robot's motion, SLAM stores all landmarks and it continuously verifies if each landmark has been seen before. When cuVSLAM recognizes several landmarks, it determines their position from the data structure.

-At this moment, a connection is added to the PoseGraph which makes a loop from the free trail of the poses; this event is termed 'loop closure'. Immediately after that, Elbrus performs a graph optimization that leads to the correction of the current odometric pose and all previous poses in the graph.
+At this moment, a connection is added to the PoseGraph which makes a loop from the free trail of the poses; this event is termed 'loop closure'. Immediately after that, cuVSLAM performs a graph optimization that leads to the correction of the current odometric pose and all previous poses in the graph.

-The procedure for adding landmarks is designed such that if we do not see a landmark in the place where it was expected, then such landmarks are marked for eventual deletion. This allows you to use Elbrus over a changing terrain.
+The procedure for adding landmarks is designed such that if we do not see a landmark in the place where it was expected, then such landmarks are marked for eventual deletion. This allows you to use cuVSLAM over a changing terrain.

-Along with visual data, Elbrus can use Inertial Measurement Unit (IMU) measurements. It automatically switches to IMU when VO is unable to estimate a pose – for example, when there is dark lighting or long solid surfaces in front of a camera. Using an IMU usually leads to a significant performance improvement in cases of poor visual conditions.
+Along with visual data, cuVSLAM can use Inertial Measurement Unit (IMU) measurements. It automatically switches to IMU when VO is unable to estimate a pose – for example, when there is dark lighting or long solid surfaces in front of a camera. Using an IMU usually leads to a significant performance improvement in cases of poor visual conditions.

In case of severe degradation of image input (lights being turned off, dramatic motion blur on a bump while driving, and other possible scenarios), below mentioned motion estimation algorithms will ensure acceptable quality for pose tracking:

@@ -41,6 +41,6 @@ Naturally, we would like to save the stored landmarks and pose graph in a map. W

## Loading and Localization in the Map

-Once the map has been saved to the disk, it can be used later to localize the robot. To load the map into the memory, we have made a ROS 2 action called `LoadMapAndLocalize`. It requires a map file path and a prior pose, which is an initial guess of where the robot is in the map. Given the prior pose and current set of camera frames, Elbrus tries to find the pose of the landmarks in the requested map that matches the current set. If the localization is successful, Elbrus will load the map in the memory. Otherwise, it will continue building a new map.
+Once the map has been saved to the disk, it can be used later to localize the robot. To load the map into the memory, we have made a ROS 2 action called `LoadMapAndLocalize`. It requires a map file path and a prior pose, which is an initial guess of where the robot is in the map. Given the prior pose and current set of camera frames, cuVSLAM tries to find the pose of the landmarks in the requested map that matches the current set. If the localization is successful, cuVSLAM will load the map in the memory. Otherwise, it will continue building a new map.

Both `SaveMap` and `LoadMapAndLocalize` can take some time to complete. Hence, they are designed to be asynchronous to avoid interfering with odometry calculations.
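
A minimal command-line sketch of exercising these map actions, assuming the node runs under the `/visual_slam` namespace used by the `SaveMap` call elsewhere in this commit; verify the exact action and interface names with `ros2 action list` and `ros2 interface show` before relying on them:

```bash
# Discover the map-related actions the running visual_slam node exposes.
ros2 action list | grep visual_slam

# Inspect the goal and result fields before sending goals.
ros2 interface show isaac_ros_visual_slam_interfaces/action/SaveMap
ros2 interface show isaac_ros_visual_slam_interfaces/action/LoadMapAndLocalize

# Save the current landmarks and pose graph (the target must be a new, empty directory).
ros2 action send_goal /visual_slam/save_map \
  isaac_ros_visual_slam_interfaces/action/SaveMap "{map_url: /path/to/save/the/map}"
```

Because both actions run asynchronously inside the node, odometry keeps publishing while a save or load is in progress; `--feedback` can be added to the `send_goal` call to watch progress.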

docs/tutorial-isaac-sim.md

Lines changed: 1 addition & 0 deletions
@@ -71,6 +71,7 @@ Last validated with [Isaac Sim 2022.2.1](https://docs.omniverse.nvidia.com/app_i

As soon as you start the visual SLAM node, it starts storing the landmarks and the pose graph. You can save them in a map and store the map onto a disk. Make a call to the `SaveMap` ROS 2 Action with the following command:

+> Note: `/path/to/save/the/map` must be a new, empty directory every time you call this action, as the action will overwrite the existing contents.
```bash
ros2 action send_goal /visual_slam/save_map isaac_ros_visual_slam_interfaces/action/SaveMap "{map_url: /path/to/save/the/map}"
```
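
One way to satisfy the note above is to generate a fresh directory name for every save. This is an illustrative sketch, and the map location is arbitrary:

```bash
# Use a timestamped path so each SaveMap call targets a new, empty directory.
# Create it first with `mkdir -p "$MAP_URL"` if your setup expects the directory to exist.
MAP_URL=/workspaces/isaac_ros-dev/maps/$(date +%Y%m%d_%H%M%S)
ros2 action send_goal /visual_slam/save_map \
  isaac_ros_visual_slam_interfaces/action/SaveMap "{map_url: $MAP_URL}"
```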

docs/tutorial-realsense.md

Lines changed: 7 additions & 5 deletions
@@ -6,7 +6,7 @@

This tutorial walks you through setting up [Isaac ROS Visual SLAM](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam) with a [Realsense camera](https://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html).

-> **Note**: The [launch file](../isaac_ros_visual_slam/launch/isaac_ros_visual_slam_realsense.launch.py) provided in this tutorial is designed for a RealSense camera with integrated IMU. If you want to run this tutorial with a RealSense camera without an IMU (like RealSense D435), then change `enable_imu` param in the launch file to `False`.
+> **Note**: The [launch file](../isaac_ros_visual_slam/launch/isaac_ros_visual_slam_realsense.launch.py) provided in this tutorial is designed for a RealSense camera with integrated IMU. If you want to run this tutorial with a RealSense camera without an IMU (like RealSense D435), then change `enable_imu_fusion` param in the launch file to `False`.
<!-- Split blockquote -->
> **Note**: This tutorial requires a compatible RealSense camera from the list available [here](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/realsense-setup.md#camera-compatibility).

@@ -16,7 +16,9 @@ This tutorial walks you through setting up [Isaac ROS Visual SLAM](https://githu

2. Complete the [Quickstart section](../README.md#quickstart) in the main README.

-3. \[Terminal 1\] Run `realsense-camera` node and `visual_slam` node
+3. Follow this [page](https://github.com/ethz-asl/kalibr/wiki/IMU-Noise-Model#how-to-obtain-the-parameters-for-your-imu) to obtain [IMU Noise Model](https://github.com/ethz-asl/kalibr/wiki/IMU-Noise-Model) params. Params can be obtained either through datasheet for the IMU or from a ROS package like [this](https://github.com/CruxDevStuff/allan_ros2).
+
+4. \[Terminal 1\] Run `realsense-camera` node and `visual_slam` node

Make sure you have your RealSense camera attached to the system, and then start the Isaac ROS container.

@@ -31,21 +33,21 @@ This tutorial walks you through setting up [Isaac ROS Visual SLAM](https://githu
> ./scripts/run_dev.sh ${ISAAC_ROS_WS}
> ```

-4. \[Terminal 1\] Inside the container, build and source the workspace:
+5. \[Terminal 1\] Inside the container, build and source the workspace:

```bash
cd /workspaces/isaac_ros-dev && \
colcon build --symlink-install && \
source install/setup.bash
```

-5. \[Terminal 1\] Run the launch file, which launches the example and wait for 5 seconds:
+6. \[Terminal 1\] Run the launch file, which launches the example and wait for 5 seconds:

```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_realsense.launch.py
```

-6. \[Terminal 2\] Attach a second terminal to check the operation.
+7. \[Terminal 2\] Attach a second terminal to check the operation.

Attach another terminal to the running container for issuing other ROS2 commands.
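
A quick way to confirm that the IMU fusion configuration took effect, assuming the node runs under the `/visual_slam` name implied by the action calls elsewhere in this commit (exact parameter names may vary between releases):

```bash
# With the launch file from the step above running, list the IMU-related parameters the node exposes.
ros2 param list /visual_slam | grep -i imu

# Then inspect any parameter reported by the previous command, for example:
# ros2 param get /visual_slam enable_imu_fusion
```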

docs/validating-cuvslam-setup.md

Lines changed: 227 additions & 0 deletions
@@ -0,0 +1,227 @@
# Validating Isaac ROS Visual SLAM on a Robot

## Overview

This document outlines a procedure that can be used to validate the successful installation and configuration of Isaac ROS Visual SLAM on real hardware.

## Table of Contents
- [Validating Isaac ROS Visual SLAM on a Robot](#validating-isaac-ros-visual-slam-on-a-robot)
- [Overview](#overview)
- [Table of Contents](#table-of-contents)
- [Prerequisites](#prerequisites)
- [Prepare Physical Environment](#prepare-physical-environment)
- [Download and Configure Packages](#download-and-configure-packages)
- [Build ROS Workspace](#build-ros-workspace)
- [Select and Start ROS Visualization Tool](#select-and-start-ros-visualization-tool)
- [Run Isaac ROS Visual SLAM and Monitor Results](#run-isaac-ros-visual-slam-and-monitor-results)
- [Test 1: Visual Odometry](#test-1-visual-odometry)
- [Test 2: Loop Closure](#test-2-loop-closure)
- [Next Steps](#next-steps)
- [Troubleshooting](#troubleshooting)
- [Integrating Isaac ROS Visual SLAM](#integrating-isaac-ros-visual-slam)

## Prerequisites

- [ ] NVIDIA Jetson
- [ ] Standalone computer with ROS data visualization tool (RViz, Foxglove, etc.)
- [ ] Compatible stereo camera
- [ ] Enough space to navigate (~40m loop)
- [ ] Tape measure

## Prepare Physical Environment

1. Examine the designated testing area and identify a single, non-intersecting loop whose path length is approximately 40 meters. This will be the path along which the camera is moved and tracked using Isaac ROS Visual SLAM.
2. Define a starting pose along the identified loop. Label this pose with masking tape or a different marker.

One possible approach is to mark an L-shape on the floor, aligning the camera's position and orientation off of the corner of the shape.
3. Using a measuring tool, measure out a secondary pose along the loop that is exactly 1 meter along the forward axis from the starting pose. Label this pose in the same manner.

<div align="center"><img src="../resources/validation_tape.jpg" width="600px"/></div>

## Download and Configure Packages

1. On each of the Jetson and standalone computer, set up your development environment by following the instructions [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md).

> Note: `${ISAAC_ROS_WS}` is defined to point to either `/ssd/workspaces/isaac_ros-dev/` or `~/workspaces/isaac_ros-dev/`.

2. Set up your stereo camera and connect it to the Jetson.

- For HAWK cameras, complete the [HAWK setup tutorial](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/hawk-setup.md).
- For RealSense cameras, complete the [RealSense setup tutorial](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/realsense-setup.md).

3. On both devices, clone this repository and its dependencies under `${ISAAC_ROS_WS}/src`.

```bash
cd ${ISAAC_ROS_WS}/src && \
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam && \
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common && \
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros
```

## Build ROS Workspace

1. On the Jetson, launch the Docker container in a new terminal:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh ${ISAAC_ROS_WS}
```

2. Inside the container, build and source the workspace:

```bash
cd /workspaces/isaac_ros-dev && \
colcon build --symlink-install && \
source install/setup.bash
```

## Select and Start ROS Visualization Tool

1. Ensure that the standalone computer and Jetson device are on the same network.
2. On the standalone computer, start RViz from the provided configuration file:

Start the Docker container for RViz on the standalone computer:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh ${ISAAC_ROS_WS}
```

> **(Optional)** Configure the `ROS_DOMAIN_ID` environment variable on both devices to eliminate any cross-talk from other ROS nodes on the same network.
>
> In each terminal on each device:
> ```bash
> export ROS_DOMAIN_ID=5 # Any number between 1-101; use the same number for each terminal
> ```
> **Note**: This environment variable must be set in each terminal, on both the Jetson and the standalone computer.

From this terminal, run Rviz2 to display the output:

```bash
source /workspaces/isaac_ros-dev/install/setup.bash && \
rviz2 -d src/isaac_ros_visual_slam/isaac_ros_visual_slam/rviz/default.cfg.rviz
```

3. On the standalone computer, start logging the estimated transform:

Attach another terminal to the running container for `tf2_echo`:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh ${ISAAC_ROS_WS}
```

From this terminal, run `tf2_echo` to display the transform between the `/map` frame and the appropriate camera frame:

- HAWK: `/camera`
- RealSense: `/camera_link`

```bash
source /workspaces/isaac_ros-dev/install/setup.bash && \
ros2 run tf2_ros tf2_echo map camera_link
```

## Run Isaac ROS Visual SLAM and Monitor Results

### Test 1: Visual Odometry

The purpose of this test is to validate that the system is able to provide accurate measurements of the robot's local motion.

1. Initialize the camera at the marked starting pose.
2. On the Jetson, run the Isaac ROS Visual SLAM launch file appropriate for your camera:

- HAWK:
```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_hawk.launch.py
```
- RealSense:
```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_realsense.launch.py
```

At this point, the `/map`, `/odom`, and camera frames will all be initialized to the starting pose.
3. Precisely move the camera straight forward to the marked secondary pose.

4. On the standalone computer and within the `tf2_echo` terminal, note the latest transform reported between `/map` and the appropriate camera frame.

<div align="center"><img src="../resources/validation_test1_tf2.png" width="600px"/></div>

1. Validate Orientation:

The orientation error between the measured rotation unit quaternion and the expected rotation unit quaternion (`[0 0 0 1]`) should be less than 15 degrees. This error can be calculated using the process explained [here](https://gitlab.tudelft.nl/-/snippets/190):

$$q_{measured} = w_m + x_m i + y_m j + z_m k$$
$$q_{expected} = w_e + x_e i + y_e j + z_e k$$
$$2 \arccos (w_m w_e + x_m x_e + y_m y_e + z_m z_e) \leq 15 \degree $$

2. Validate Translation:

The magnitude of the measured translation should be within 5 centimeters of the expected translation (1 meter):

$$ 0.95 \text{m} \leq \|t_{measured}\| \leq 1.05 \text{m}$$

If all real-world measurements correspond to the reported estimates within the expected tolerance, the Visual Odometry test is a success.
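
As a worked illustration of the orientation criterion above (the quaternion values here are invented, not from the source): if `tf2_echo` reports a rotation of `[x y z w] = [0.010, 0.020, 0.000, 0.9997]`, the dot product with the expected identity quaternion `[0 0 0 1]` reduces to the measured `w`, so the error angle is:

```bash
# 2 * acos(dot(q_measured, q_expected)); with q_expected = [0 0 0 1] the dot product is just w_measured.
python3 -c "import math; w = 0.9997; print(2 * math.degrees(math.acos(w)))"
# Prints roughly 2.8 degrees, comfortably below the 15-degree threshold.
```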

### Test 2: Loop Closure

The purpose of this test is to validate that the system is able to appropriately realign the estimated pose path to previously-seen features.

1. Initialize the camera at the marked starting pose.
2. On the Jetson, run the Isaac ROS Visual SLAM launch file appropriate for your camera:

- HAWK:
```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_hawk.launch.py
```
- RealSense:
```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_realsense.launch.py
```

At this point, the `/map`, `/odom`, and camera frames will all be initialized to the starting pose.

3. Precisely move the camera along the predetermined 40m path.

> **Note**: To maximize the accuracy of this test, ensure that dynamic obstacles are removed from the robot's field of view. If a nontrivial dynamic obstacle is introduced while the robot is performing the loop, please restart this test.

4. As the camera nears the completion of its first lap, place the camera at rest, taking care to precisely align it with the floor marker indicating the starting pose.
5. On the standalone computer and within the RViz window, ensure that a nontrivial number of visual features now appear red. Additionally, ensure that the camera frame appears nearly coincident with the `/map` frame.

> **Note**: It is acceptable for the `/odom` frame to jump around significantly with respect to the `/map` frame. For the purposes of this test, only the transform between the `/map` and camera frames is relevant. For additional information about the design intent behind these frames, please see [REP-0105](https://www.ros.org/reps/rep-0105.html).

<div align="center"><img src="../resources/validation_test2_rviz.png" width="600px"/></div>

6. On the standalone computer and within the `tf2_echo` terminal, note the latest transform reported between `/map` and the appropriate camera frame.
<div align="center"><img src="../resources/validation_test2_tf2.png" width="600px"/></div>

1. Validate Orientation:

The orientation error between the measured rotation unit quaternion and the expected rotation unit quaternion (`[0 0 0 1]`, since the path is a closed loop) should be less than 15 degrees. This error can be calculated using the process explained [here](https://gitlab.tudelft.nl/-/snippets/190):

$$q_{measured} = w_m + x_m i + y_m j + z_m k$$
$$q_{expected} = w_e + x_e i + y_e j + z_e k$$
$$2 \arccos (w_m w_e + x_m x_e + y_m y_e + z_m z_e) \leq 15 \degree $$

2. Validate Translation:

The magnitude of the measured translation should be within 2 meters (5% of the 40m path length) of the expected translation (0 meters, since the path is a closed loop):

$$ 0 \text{m} \leq \|t_{measured}\| \leq 2 \text{m}$$

If all real-world measurements correspond to the reported estimates within the expected tolerance, the Loop Closure test is a success.
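
The translation criterion can be checked the same way (illustrative numbers only): if `tf2_echo` reports a final `map`-to-camera translation of `[0.35, -0.20, 0.05]` meters at the end of the lap, its magnitude is:

```bash
# Euclidean norm of the reported translation after completing the 40 m loop.
python3 -c "import math; x, y, z = 0.35, -0.20, 0.05; print(math.sqrt(x*x + y*y + z*z))"
# Prints about 0.41 m, within the 2 m tolerance.
```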

## Next Steps

### Troubleshooting
If any step of the above tests did not succeed, then there may be a configuration error with your system. Please follow these [troubleshooting steps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam#troubleshooting).

If the prepared troubleshooting steps are unable to resolve the test failure, please contact your NVIDIA support representative and file an issue report with all relevant details.

### Integrating Isaac ROS Visual SLAM

While the Isaac ROS Visual SLAM node exposes a [multitude of interfaces](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam#isaac_ros_visual_slam), the core way to consume the node's output is through the reported transform between the `/map` and camera frames.

Based on your robot's precise specifications, you can set up a static transform publisher or URDF-based publisher that connects the camera frame to a relevant planning frame on the robot (typically named `/base_link`). Then, your navigation stack can use the transform between `/base_link` and `/map` to plan movements within the local space.
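
A minimal sketch of such a static publisher, using placeholder extrinsics and `camera_link` as a stand-in for whichever camera frame your robot uses (the flag-style arguments assume a recent `tf2_ros`; older distributions use positional arguments):

```bash
# Publish a fixed base_link -> camera_link transform; replace the offsets and
# rotation with the measured mounting pose of the camera on your robot.
ros2 run tf2_ros static_transform_publisher \
  --x 0.10 --y 0.0 --z 0.25 --roll 0.0 --pitch 0.0 --yaw 0.0 \
  --frame-id base_link --child-frame-id camera_link
```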

Consider further augmenting your robot's navigation stack with other Isaac ROS packages, such as [Isaac ROS Nvblox](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox) for 3D reconstruction.
