
[Feature]: kornia-sensors crate with IMU preintegration #845

@cjpurackal

Description


🚀 Feature Description

Create a new kornia-sensors crate for physical sensor processing. The first module implements IMU preintegration: compressing high-rate IMU measurements between keyframes into a single relative motion constraint for use in visual-inertial optimization.

PRs for this issue should target the feat/slam-utils branch.

📂 Feature Category

Rust Core Library (new crate)

💡 Motivation

kornia-3d handles geometric primitives (camera models, PnP, triangulation, BA). Physical sensor behavior (IMU integration, bias estimation, noise models, time synchronization) doesn't belong there. A kornia-sensors crate gives these a clean home.

IMU preintegration is the first and most important module. It enables:

  • Visual-inertial odometry (VIO): fusing camera and IMU for robust pose estimation
  • Metric scale recovery in monocular SLAM
  • Pose prediction during fast motion (better than a constant-velocity model)
  • Bridging tracking gaps when visual features are temporarily lost

💭 Proposed Solution

Crate structure

kornia-sensors/
  src/
    lib.rs
    imu/
      mod.rs            - public API
      types.rs          - ImuMeasurement, ImuBias, ImuCalib
      preintegration.rs - PreintegratedImu

Core types

/// Single IMU sample.
pub struct ImuMeasurement {
    pub timestamp_sec: f64,
    pub gyro: Vec3F64,    // angular velocity (rad/s)
    pub accel: Vec3F64,   // linear acceleration (m/s²)
}

/// Accelerometer and gyroscope biases.
pub struct ImuBias {
    pub gyro: Vec3F64,
    pub accel: Vec3F64,
}

/// IMU noise and calibration parameters.
pub struct ImuCalib {
    pub gyro_noise: f64,         // gyroscope noise density (rad/s/√Hz)
    pub accel_noise: f64,        // accelerometer noise density (m/s²/√Hz)
    pub gyro_walk: f64,          // gyroscope bias random walk (rad/s²/√Hz)
    pub accel_walk: f64,         // accelerometer bias random walk (m/s³/√Hz)
}
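These parameters are continuous-time densities; during covariance propagation at a sample interval Δt they become per-sample standard deviations via σ_d = σ/√Δt. This convention is an assumption here (it matches common IMU-calibration practice, e.g. Kalibr-style noise models), sketched as:

```rust
/// Convert a continuous-time noise density to the per-sample standard
/// deviation used in discrete covariance propagation: sigma_d = sigma / sqrt(dt).
/// (Illustrative helper; not part of the proposed API.)
fn discrete_sigma(noise_density: f64, dt: f64) -> f64 {
    noise_density / dt.sqrt()
}
```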

Preintegration

/// Preintegrated IMU measurement between two keyframes.
pub struct PreintegratedImu {
    pub delta_rotation: Mat3F64,    // accumulated rotation (SO3)
    pub delta_velocity: Vec3F64,    // accumulated velocity change
    pub delta_position: Vec3F64,    // accumulated position change
    pub dt: f64,                    // total integration time
    pub covariance: [[f64; 15]; 15], // 15x15 covariance (rot, vel, pos, bias_g, bias_a)
    pub bias: ImuBias,              // bias used during integration

    // Bias correction Jacobians (enable first-order correction without re-integration)
    pub j_rotation_gyro: Mat3F64,   // ∂ΔR/∂bg
    pub j_velocity_gyro: Mat3F64,   // ∂ΔV/∂bg
    pub j_velocity_accel: Mat3F64,  // ∂ΔV/∂ba
    pub j_position_gyro: Mat3F64,   // ∂ΔP/∂bg
    pub j_position_accel: Mat3F64,  // ∂ΔP/∂ba
}
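The Jacobians support a first-order update like ΔV' ≈ ΔV + (∂ΔV/∂bg)·δbg + (∂ΔV/∂ba)·δba when the bias estimate shifts by δb, avoiding a full re-integration. A minimal sketch of the velocity case (plain arrays and illustrative names, not the proposed API; position is corrected analogously, rotation via dR·exp(J_rg·δbg)):

```rust
type Vec3 = [f64; 3];
type Mat3 = [[f64; 3]; 3];

fn mat_vec(m: &Mat3, v: &Vec3) -> Vec3 {
    [
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2],
    ]
}

/// First-order velocity correction: dV' = dV + J_vg * dbg + J_va * dba,
/// where dbg/dba are the changes in the gyro/accel bias estimates.
fn correct_delta_velocity(
    d_v: &Vec3, j_vg: &Mat3, j_va: &Mat3, delta_bg: &Vec3, delta_ba: &Vec3,
) -> Vec3 {
    let g = mat_vec(j_vg, delta_bg);
    let a = mat_vec(j_va, delta_ba);
    [d_v[0] + g[0] + a[0], d_v[1] + g[1] + a[1], d_v[2] + g[2] + a[2]]
}
```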

impl PreintegratedImu {
    /// Create a new preintegration from initial bias.
    pub fn new(bias: ImuBias, calib: &ImuCalib) -> Self;

    /// Integrate a single IMU measurement.
    pub fn integrate(&mut self, measurement: &ImuMeasurement, dt: f64);

    /// Integrate a batch of measurements between two timestamps.
    pub fn integrate_batch(&mut self, measurements: &[ImuMeasurement]);

    /// Apply first-order bias correction without re-integrating.
    pub fn correct_bias(&self, new_bias: &ImuBias) -> CorrectedPreintegration;

    /// Predict pose at time t given initial state.
    pub fn predict(
        &self,
        rotation: &Mat3F64,
        position: &Vec3F64,
        velocity: &Vec3F64,
        gravity: &Vec3F64,
    ) -> (Mat3F64, Vec3F64, Vec3F64);  // (R2, p2, v2)
}

Key math

Each measurement (accel, gyro, dt):

  1. Bias-correct: a = accel - bias_a, w = gyro - bias_g
  2. Update position: dP += dV*dt + 0.5*dR*a*dt²
  3. Update velocity: dV += dR*a*dt
  4. Update rotation: dR = dR * exp(w*dt) (SO3 exponential map from kornia-algebra)
  5. Propagate the 15×15 covariance
  6. Update bias Jacobians (for first-order correction)
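Steps 1-4 can be sketched with plain arrays and a Rodrigues-formula exponential map (a stand-in for kornia-algebra's SO3 exp; all names here are illustrative, and covariance/Jacobian propagation is omitted):

```rust
type Vec3 = [f64; 3];
type Mat3 = [[f64; 3]; 3];

fn mat_vec(m: &Mat3, v: &Vec3) -> Vec3 {
    [
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2],
    ]
}

fn mat_mul(a: &Mat3, b: &Mat3) -> Mat3 {
    let mut c = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            for k in 0..3 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

/// SO3 exponential map via the Rodrigues formula:
/// exp(w^) = I + sin(t)/t * w^ + (1 - cos(t))/t^2 * w^2, t = |w|
fn so3_exp(w: &Vec3) -> Mat3 {
    let theta = (w[0] * w[0] + w[1] * w[1] + w[2] * w[2]).sqrt();
    // skew-symmetric matrix w^
    let k = [
        [0.0, -w[2], w[1]],
        [w[2], 0.0, -w[0]],
        [-w[1], w[0], 0.0],
    ];
    let k2 = mat_mul(&k, &k);
    // Taylor limits near theta = 0 keep the map numerically stable
    let (a, b) = if theta < 1e-8 {
        (1.0, 0.5)
    } else {
        (theta.sin() / theta, (1.0 - theta.cos()) / (theta * theta))
    };
    let mut r = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            let id = if i == j { 1.0 } else { 0.0 };
            r[i][j] = id + a * k[i][j] + b * k2[i][j];
        }
    }
    r
}

/// One preintegration step (steps 1-4 above; covariance and Jacobians omitted).
fn integrate_step(
    d_r: &mut Mat3, d_v: &mut Vec3, d_p: &mut Vec3,
    gyro: &Vec3, accel: &Vec3, bias_g: &Vec3, bias_a: &Vec3, dt: f64,
) {
    // 1. bias-correct
    let w = [gyro[0] - bias_g[0], gyro[1] - bias_g[1], gyro[2] - bias_g[2]];
    let a = [accel[0] - bias_a[0], accel[1] - bias_a[1], accel[2] - bias_a[2]];
    let ra = mat_vec(d_r, &a);
    // 2. position uses the previous velocity and rotation
    for i in 0..3 {
        d_p[i] += d_v[i] * dt + 0.5 * ra[i] * dt * dt;
    }
    // 3. velocity
    for i in 0..3 {
        d_v[i] += ra[i] * dt;
    }
    // 4. rotation last: dR <- dR * exp(w*dt)
    let dr = so3_exp(&[w[0] * dt, w[1] * dt, w[2] * dt]);
    *d_r = mat_mul(d_r, &dr);
}
```

Note the update order matters: position and velocity consume the rotation from the previous step, so dR is advanced last.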

Pose prediction from keyframe state:

R2 = R1 * dR_corrected
p2 = p1 + v1*dt + 0.5*g*dtΒ² + R1*dP_corrected
v2 = v1 + g*dt + R1*dV_corrected
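A numeric sanity check of the position and velocity equations (illustrative only; R1 is taken as the identity for brevity, so the R1*dP and R1*dV terms reduce to dP and dV):

```rust
type Vec3 = [f64; 3];

/// p2 = p1 + v1*dt + 0.5*g*dt^2 + dP   (R1 = identity assumed)
fn predict_position(p1: &Vec3, v1: &Vec3, g: &Vec3, d_p: &Vec3, dt: f64) -> Vec3 {
    let mut p2 = [0.0; 3];
    for i in 0..3 {
        p2[i] = p1[i] + v1[i] * dt + 0.5 * g[i] * dt * dt + d_p[i];
    }
    p2
}

/// v2 = v1 + g*dt + dV   (R1 = identity assumed)
fn predict_velocity(v1: &Vec3, g: &Vec3, d_v: &Vec3, dt: f64) -> Vec3 {
    let mut v2 = [0.0; 3];
    for i in 0..3 {
        v2[i] = v1[i] + g[i] * dt + d_v[i];
    }
    v2
}
```

Gravity enters only here, in the world frame: the preintegrated deltas themselves are gravity-free, which is what makes them reusable when the keyframe state changes.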

📚 Library Reference

🔄 Alternatives Considered

No response

🎯 Use Cases

  • Visual-inertial odometry and SLAM
  • Pose prediction during visual tracking gaps
  • Metric scale estimation for monocular systems
  • Any multi-sensor fusion involving IMU

📝 Additional Context

This crate depends on kornia-3d (for Pose3d, Mat3F64, Vec3F64) and kornia-algebra (for the SO3 exponential map, right Jacobian, and matrix operations). It does not duplicate camera models; those stay in kornia-3d.

🤝 Contribution Intent

  • I plan to submit a PR to implement this feature
  • I'm requesting this feature but not planning to implement it

Metadata

Labels

enhancement (New feature or request), triage (wait for a maintainer to approve and assign this ticket)
