Question about the test output #4

@Alex-Beh

Hi authors,

First, thank you for releasing the IMU-factor evaluation code.

I ran the unit test in evalImuFactor.cpp and got the following console output:

```
./evalAHRSFactor 
Not equal:
expected:
 [
	-0.346692, 0.044757, -0.936911;
	-0.0291355, 0.997865, 0.0584501;
	0.937527, 0.0475615, -0.344648
]
actual:
 [
	0.925444, 0.308216, 0.220352;
	0.119303, -0.789052, 0.60263;
	0.359609, -0.531411, -0.766995
]
Failure: "assert_equal(scenario.rotation(scenario.duration()), runner.predict(pim), 1e-9)" 
Not equal:
expected:
 [
	0.0448764, 0.970451, 0.237093;
	0.359178, -0.237137, 0.902641;
	0.932191, 0.0446501, -0.359207
]
actual:
 [
	0.155674, -0.178706, -0.971508;
	-0.351104, 0.909267, -0.223518;
	0.923304, 0.375896, 0.0788047
]
Failure: "assert_equal(scenario.rotation(scenario.duration()), runner.predict(pim), 1e-9)" 
There were 2 failures
```
  1. Is this the same output you see on your side?
  2. Does the test compare the whole pre-integrated IMU trajectory against the whole ground-truth trajectory?
     From my understanding, it integrates the IMU measurements and then compares only the predicted end NavState against the scenario's ground-truth state at that same time.

Thanks
