# Local Localisation

## Introduction

Local robot localisation refers to estimating a robot’s position and orientation **relative to its initial reference frame**, rather than to a global map. In our system, the frame **`/odom`** acts as the origin of the robot’s local coordinate system and is initialised when the ROS2 controller starts. All subsequent motion updates are expressed relative to this frame.

A variety of sensing modalities can be used to estimate odometry, including **wheel encoders**, **IMUs**, **cameras**, **GPS**, and **indoor motion-capture systems** (e.g., OptiTrack). Each of these sensors introduces uncertainties—such as drift, noise, or environmental disturbances—making _perfect_ localisation impossible. Instead, an appropriate odometry method is selected based on the available hardware, mission constraints, and operating environment to achieve a localisation estimate that is sufficiently accurate, stable, and computationally feasible for navigation tasks.

## Overview

For a Lunar Rover operating on the Moon, several localisation techniques commonly used on Earth are **not feasible**. For example:

- **GPS cannot be used**, as there is no satellite navigation infrastructure on the Moon.
- **OptiTrack or other external motion-capture systems are unsuitable** due to the absence of controlled indoor environments.
- **Magnetometers perform poorly**, as the Moon lacks a global magnetic field and therefore cannot provide reliable heading information.

Given these constraints, the most practical and cost-effective odometry sources for lunar surface operations are:

- **Wheel encoders**, which provide incremental motion estimation through wheel rotation.
- **Inertial Measurement Units (IMUs)**, which estimate orientation and short-term motion but suffer from drift over time.
- **Onboard cameras**, which enable **visual odometry** or **visual-inertial odometry** by tracking visual features across frames to estimate movement.

By combining these complementary sensors—typically through an **Extended Kalman Filter (EKF)** or a similar fusion framework—the rover can maintain a robust, low-drift estimate of its pose within the local odometry frame. This hybrid approach balances accuracy, computational cost, and environmental suitability, making it well suited to lunar surface exploration, where external localisation references are unavailable.

Due to the complexity and computational demands of full visual SLAM (VSLAM) pipelines, our current objective is to achieve reliable **EKF-based fusion of IMU and wheel encoder data**. Since no single sensor can provide perfectly accurate odometry—especially in a feature-sparse and unstructured lunar environment—it is essential to understand, configure, and fine-tune the EKF parameters. A well-tuned EKF is critical for reducing drift, managing sensor noise, and ensuring that the rover maintains a stable and consistent local pose estimate throughout its mission.

:::{note}

### **Why not just use a single encoder or IMU for odometry estimation?**

Relying on **only one sensor** for odometry—whether it be wheel encoders or an IMU—is insufficient for reliable localisation, especially in a challenging environment like the lunar surface:

**1. Wheel Encoders Alone Are Not Reliable**

- **Slip and wheel sinkage** on loose regolith cause the measured wheel rotation to differ from actual motion.
- **No correction mechanism** exists—errors accumulate without any way to reset or bound them.
- Encoder-only odometry drifts significantly over time, especially during long traverses or rough terrain.

**2. IMU Alone Cannot Maintain Accurate Pose**

- IMUs measure acceleration and angular velocity, which must be **integrated** to estimate position.
- This double integration causes **rapid drift**: even a tiny constant bias produces a position error that grows quadratically with time.
- Without an external reference, even good IMUs become unusably inaccurate in position within seconds to minutes.

**3. Fundamental Limitation: No Single Sensor Provides Complete, Drift-Bounded Odometry**
Each sensor has **complementary strengths and weaknesses**:

- Encoders provide **short-term accurate displacement**, but fail when slip occurs.
- IMUs provide **short-term accurate rotation and dynamics**, but drift rapidly in position.

Without sensor fusion, the rover would either:

- **Quickly lose localisation**, or
- **Misinterpret its motion**, leading to incorrect navigation, poor path tracking, or hazard avoidance failures.

**4. Fusion is Required to Achieve Stable Localisation**
By combining IMU and encoder data through an EKF, the system:

- Compensates for the drift of the IMU
- Corrects encoder errors during slip
- Produces a **consistent and bounded** pose estimate
- Supports higher-level navigation algorithms (Nav2, mapping, hazard avoidance, etc.)

:::
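
The fusion behaviour described above can be illustrated with a deliberately simplified scalar Kalman filter. This is a sketch under toy assumptions, not the `robot_localization` implementation: an IMU-style acceleration input propagates a `[position, velocity]` state, and an encoder-style velocity measurement corrects it. All names and numbers are illustrative.

```python
# Minimal 1-D Kalman-filter sketch of encoder/IMU fusion (illustrative only).

def predict(x, P, a, dt, q):
    """Propagate [position, velocity] with an IMU-style acceleration a.

    q is the process noise: it inflates uncertainty between measurements.
    """
    pos, vel = x
    x = [pos + vel * dt + 0.5 * a * dt ** 2, vel + a * dt]
    P = [P[0] + q * dt, P[1] + q * dt]  # diagonal approximation of covariance
    return x, P

def update_velocity(x, P, v_meas, r):
    """Correct the velocity state with an encoder measurement of variance r."""
    k = P[1] / (P[1] + r)               # Kalman gain
    x = [x[0], x[1] + k * (v_meas - x[1])]
    P = [P[0], (1.0 - k) * P[1]]
    return x, P

# Start nearly certain of the initial state (compare initial_estimate_covariance).
x, P = [0.0, 0.0], [1e-9, 1e-9]
for _ in range(100):                    # 1 s of driving at a true 0.5 m/s
    x, P = predict(x, P, a=0.0, dt=0.01, q=0.1)
    x, P = update_velocity(x, P, v_meas=0.5, r=0.05)

print(round(x[1], 2))                   # fused velocity converges toward 0.5
```

The real filter does the same thing in 15 dimensions: the prediction step carries the state forward between sensor messages, and each measurement pulls the relevant state components back, weighted by the gain.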

## Tuning Parameters

To learn more about configuration options and recommended practices, refer to the official **[`robot_localization` setup guide in the Nav2 documentation](https://docs.nav2.org/setup_guides/odom/setup_robot_localization.html)**.

A typical configuration for the `ekf_filter_node` is shown below. This setup fuses wheel encoder odometry (from the differential drive controller) and IMU measurements, each with appropriate covariance values, to produce a stable, low-drift estimate of the rover’s local odometry.

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    sensor_timeout: 0.1
    two_d_mode: false
    publish_acceleration: false
    publish_tf: true

    map_frame: map
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom

    # Wheel encoder (odometry) input
    odom0: /odom
    odom0_config: [true,  true,  false,   # x, y, z
                   false, false, true,    # roll, pitch, yaw
                   true,  true,  false,   # vx, vy, vz
                   false, false, true,    # vroll, vpitch, vyaw
                   false, false, false]   # ax, ay, az
    odom0_queue_size: 10
    odom0_nodelay: false
    odom0_differential: false
    odom0_relative: false
    odom0_pose_rejection_threshold: 5.0
    odom0_twist_rejection_threshold: 1.0

    # IMU input
    imu0: /imu/data
    imu0_config: [false, false, false,    # x, y, z
                  true,  true,  true,     # roll, pitch, yaw
                  false, false, false,    # vx, vy, vz
                  true,  true,  true,     # vroll, vpitch, vyaw
                  true,  true,  true]     # ax, ay, az
    imu0_queue_size: 10
    imu0_nodelay: false
    imu0_differential: false
    imu0_relative: true
    imu0_pose_rejection_threshold: 0.8
    imu0_twist_rejection_threshold: 0.8
    imu0_linear_acceleration_rejection_threshold: 0.8
    imu0_remove_gravitational_acceleration: true

    # Process noise model for the EKF
    process_noise_covariance: [... trimmed for readability ...]

    # Initial uncertainty in the state estimate
    initial_estimate_covariance: [1e-9, 0, 0, ...]
```
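
To load this configuration, the parameters file is typically passed to `robot_localization`'s `ekf_node` from a ROS2 launch file. A hypothetical sketch follows; the package name `lunar_rover_bringup` and the `config/ekf.yaml` install path are placeholder assumptions, not part of the documented setup (it is not runnable without a ROS2 installation):

```python
# Hypothetical ROS2 launch file; 'lunar_rover_bringup' and the config path
# are placeholder names for illustration.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Locate the YAML shown above inside the (assumed) bringup package.
    ekf_config = os.path.join(
        get_package_share_directory('lunar_rover_bringup'),
        'config', 'ekf.yaml')

    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',   # must match the top-level key in the YAML
            output='screen',
            parameters=[ekf_config],
        ),
    ])
```

Note that the node `name` must match the `ekf_filter_node` key in the YAML, otherwise the parameters are silently ignored.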

This configuration ensures that the EKF receives encoder-derived pose (x, y, yaw) and velocity information, along with IMU-derived orientation, angular velocity, and linear acceleration, weighted appropriately by their covariance values. Together, these inputs produce a fused odometry estimate that is more reliable than either sensor alone.

---

### State Variables

The `*_config` arrays specify which elements of the EKF state vector each sensor contributes to. For example:

```yaml
odom0_config: [true,  true,  false,   # x, y, z
               false, false, true,    # roll, pitch, yaw
               true,  true,  false,   # vx, vy, vz
               false, false, true,    # vroll, vpitch, vyaw
               false, false, false]   # ax, ay, az
imu0_config: [false, false, false,    # x, y, z
              true,  true,  true,     # roll, pitch, yaw
              false, false, false,    # vx, vy, vz
              true,  true,  true,     # vroll, vpitch, vyaw
              true,  true,  true]     # ax, ay, az
```

Each Boolean corresponds to a state variable in the filter’s 15-element state vector.

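To sanity-check a `*_config` array, the Boolean mask can be decoded against the state ordering. The helper below is a hypothetical sketch, but the name ordering matches the `robot_localization` state vector:

```python
# Decode a *_config Boolean array into the state variables it enables.
STATE_NAMES = [
    "x", "y", "z",                # position
    "roll", "pitch", "yaw",       # orientation
    "vx", "vy", "vz",             # linear velocity
    "vroll", "vpitch", "vyaw",    # angular velocity
    "ax", "ay", "az",             # linear acceleration
]

def enabled_states(config):
    """Return the names of the state variables a sensor is fusing."""
    assert len(config) == len(STATE_NAMES) == 15
    return [name for name, on in zip(STATE_NAMES, config) if on]

odom0_config = [True, True, False, False, False, True,
                True, True, False, False, False, True,
                False, False, False]
print(enabled_states(odom0_config))  # → ['x', 'y', 'yaw', 'vx', 'vy', 'vyaw']
```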

#### **15-Element State Vector**

The EKF in `robot_localization` uses a 15-dimensional state vector:

$$
\mathbf{x} =
\begin{bmatrix}
x & y & z & \phi & \theta & \psi & v_x & v_y & v_z & \omega_x & \omega_y & \omega_z & a_x & a_y & a_z
\end{bmatrix}^\top
$$

where:

| Index | Symbol                           | Meaning                         |
| ----- | -------------------------------- | ------------------------------- |
| 0–2   | $(x, y, z)$                      | Position in `odom` frame        |
| 3–5   | $(\phi, \theta, \psi)$           | Roll, pitch, yaw                |
| 6–8   | $(v_x, v_y, v_z)$                | Linear velocities               |
| 9–11  | $(\omega_x, \omega_y, \omega_z)$ | Angular velocities (body frame) |
| 12–14 | $(a_x, a_y, a_z)$                | Linear accelerations            |

In our case, we let the **IMU provide orientation, angular velocity, and linear acceleration**, while the **wheel encoders provide linear velocity and yaw-rate information**. By selectively enabling these elements in the `*_config` arrays, each sensor contributes only the components it can measure reliably. The EKF then fuses these complementary measurements into a single, low-drift state estimate:

- **Encoders** stabilise the rover’s short-term linear motion and yaw.
- **IMU** stabilises orientation, angular motion, and short-term accelerations.
- **Process and measurement covariances** determine how heavily the EKF trusts each source.

This selective fusion ensures that each state variable in the 15-element vector is updated by the best available sensor, while the EKF model propagates the full state over time. The result is a smooth, consistent, and reliable odometry estimate suitable for local navigation, control, and downstream modules such as Nav2, mapping, or hazard detection.

#### **Tuning `process_noise_covariance` and `initial_estimate_covariance`**

In `robot_localization`, these two matrices control **how the EKF behaves over time**:

- `process_noise_covariance` → how much the filter believes the state can change _between_ measurements.
- `initial_estimate_covariance` → how uncertain the filter is about the **starting pose and state**.

```yaml
# Process noise model for the EKF
process_noise_covariance: [...]

# Initial uncertainty in the state estimate
initial_estimate_covariance: [1e-9, 0, 0, ...]
```

**`process_noise_covariance` (Q)**
Each diagonal element corresponds to one state variable in the 15-element vector ($[x, y, z, \phi, \theta, \psi, v_x, \dots, a_z]$).

- Larger values → EKF assumes the **model is less accurate**, so it trusts **sensor measurements more** and adapts faster.
- Smaller values → EKF assumes the **model is reliable**, so it smooths aggressively and reacts more slowly.

Typical practice:

- Use **higher Q** for velocities and accelerations (they change quickly and are hard to model).
- Use **lower Q** for orientation and position in simulation, where the motion model is clean.
- If `/odometry/filtered` looks **sluggish** → slightly _increase_ the relevant Q terms.
- If it looks **noisy or jittery** → slightly _decrease_ the relevant Q terms.

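This smoothing-versus-responsiveness trade-off can be seen in a toy scalar Kalman filter. This is a sketch under simplified assumptions (one state, fixed measurement variance R), not `robot_localization` itself: the same measurement sequence is tracked with a small and a large process noise Q.

```python
# Toy scalar Kalman filter: same measurements, two values of process noise Q.
# All numbers are illustrative assumptions.

def kf_track(q, measurements, r=1.0):
    """Run a scalar filter over the measurements and return the final estimate."""
    est, p = 0.0, 1.0
    for z in measurements:
        p = p + q                      # predict: uncertainty grows by Q
        k = p / (p + r)                # Kalman gain
        est = est + k * (z - est)      # correct with measurement (variance R)
        p = (1.0 - k) * p
    return est

step = [0.0] * 5 + [1.0] * 5           # measurement jumps from 0 to 1
sluggish = kf_track(q=0.001, measurements=step)
responsive = kf_track(q=0.5, measurements=step)

# Larger Q trusts the measurements more, so it tracks the jump faster.
print(sluggish < responsive)           # → True
```

The low-Q filter is still well below the new measurement level after five samples, while the high-Q filter has nearly caught up — exactly the sluggish-versus-jittery behaviour described in the bullets above.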
**`initial_estimate_covariance` (P₀)**
This matrix defines the EKF’s **initial confidence** in each state variable.

- Very small values (e.g. `1e-9`) mean the filter assumes the starting state is **almost exact** (typical for simulation, where `odom` and `base_link` are known).
- Larger values tell the filter it is **uncertain** about the initial pose, so early measurements can shift the estimate more aggressively.

For a Lunar Rover simulation, it is reasonable to:

- Keep **very low covariance** on the initial pose if the robot always spawns at a known location.
- Increase the initial covariance only if you deliberately introduce uncertainty (e.g., randomised spawn poses or noisy initial alignment).

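The effect of P₀ can be shown with a single scalar update. This is an illustrative sketch (one scalar state, initial estimate 0.0, first measurement 2.0 with variance R = 1.0; all values are assumptions):

```python
# Toy first-measurement update for two initial covariances P0.
# Numbers are illustrative, not recommended tuning values.

def first_update(p0, z, r=1.0):
    """Apply one Kalman update to an initial estimate of 0.0."""
    k = p0 / (p0 + r)          # gain applied to the very first measurement
    return k * (z - 0.0)

confident = first_update(p0=1e-9, z=2.0)   # ≈ 0.0: measurement barely moves it
uncertain = first_update(p0=100.0, z=2.0)  # ≈ 2.0: jumps almost fully to z

print(round(confident, 3), round(uncertain, 2))  # → 0.0 1.98
```

With a tiny P₀ the filter effectively ignores early measurements; with a large P₀ it "locks in" to them almost immediately.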
In practice, you tune these two matrices together:
`process_noise_covariance` shapes how the state evolves over time, while `initial_estimate_covariance` controls how quickly the EKF “locks in” to the correct pose at startup.

:::{note}

#### **Practical Tuning Workflow Summary**

- Start with **moderate process noise (Q)** for velocities and accelerations.
- Tune **IMU covariances first** to stabilise orientation and angular velocity.
- Tune **encoder covariances next** to achieve smooth linear velocity and yaw estimates.
- Adjust based on behaviour:
  - **Jittering / jumping** → increase measurement covariance.
  - **Lagging / slow response** → decrease measurement covariance.
  - **Oscillation / divergence** → increase process noise.
- Validate tuning by driving the rover and monitoring:
  - `/odometry/filtered`
  - raw `/odom`
  - raw `/imu`

:::