
Identifying inaccurate camera pose info #23

@roterrex

Description

  1. The robot filters tag-derived pose readings that do not conform to its expected location.
  2. This is done through a simple distance check (a sketch of this gate follows the list).
    • The tolerance starts at "dist < 0.2m".
    • Every time a tag fails this check, the tolerance is doubled.
    • Once a tag passes the looser test, the tolerance resets to 0.2m.
  3. The intent is that the robot can skip bad readings while still being able to fix its pose if it drifts past some limit.
  4. In practice, this is not perfect.
    • If tags start to fail the test, recovery takes ages: the tolerance resets after every accepted reading, but the estimator needs 10+ accepted readings to correct the pose.
    • If our pose is good but we can see no tags, the tolerance can still be inflated by bad long-range readings.
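
For reference, a minimal sketch of the gate described above, assuming a WPILib `Pose2d` estimate; the 0.2 m base tolerance and the doubling/reset behavior come from this issue, while the class and method names are hypothetical:

```java
import edu.wpi.first.math.geometry.Pose2d;

public class DistanceGate {
    private static final double BASE_TOLERANCE_METERS = 0.2;
    private double toleranceMeters = BASE_TOLERANCE_METERS;

    /** Returns true if the vision reading should be accepted into the pose estimator. */
    public boolean accept(Pose2d visionPose, Pose2d currentEstimate) {
        double dist = visionPose.getTranslation()
                .getDistance(currentEstimate.getTranslation());
        if (dist < toleranceMeters) {
            // A reading passed: the tolerance snaps back to 0.2 m immediately,
            // even though the estimator may need 10+ accepted readings to converge.
            toleranceMeters = BASE_TOLERANCE_METERS;
            return true;
        }
        // A reading failed: loosen the gate for the next reading.
        toleranceMeters *= 2.0;
        return false;
    }
}
```

This makes the failure mode in item 4 visible: one accepted reading collapses the tolerance back to 0.2 m, so a large pose error that needs many corrections keeps re-tripping the gate.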

The "PhotonTrackedTarget" module used by our system has two functions ("getDetectedObjectConfidence", "getPoseAmbiguity").
The same module also includes relative yaw and tag ID for the identifications.
Investigate these and brainstorm a way to use their confidence values to eliminate in a more inteligent way.
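
A hedged example of reading these per-target signals with PhotonLib; `getFiducialId()`, `getYaw()`, and `getPoseAmbiguity()` are `PhotonTrackedTarget` accessors, while the camera name, ambiguity threshold, and loop structure are illustrative assumptions:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class VisionReader {
    private final PhotonCamera camera = new PhotonCamera("front-cam"); // assumed camera name

    public void readTargets() {
        PhotonPipelineResult result = camera.getLatestResult();
        for (PhotonTrackedTarget target : result.getTargets()) {
            int tagId = target.getFiducialId();           // which AprilTag was seen
            double yawDeg = target.getYaw();              // relative yaw to the tag, degrees
            double ambiguity = target.getPoseAmbiguity(); // ~0 = clean solve, higher = ambiguous
            // A common heuristic is to reject single-tag solves with ambiguity above ~0.2.
            if (ambiguity > 0.2) {
                continue;
            }
            // ... feed the accepted target into the pose estimator
        }
    }
}
```

With those signals available per target, a few ideas to investigate: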

  1. Tags at long range may still provide good yaw info even when their distance estimate is poor.
  2. Any suggested pose outside the field boundary (or otherwise impossible) may be discarded.
  3. Suggested locations from the same tag should not vary wildly, i.e. a suggested location can be checked against the last suggested location from that tag. If there is a 10m difference in 0.1s, stop using that tag until 'x' happens.
  4. Tag readings taken while the robot is moving or turning rapidly should be distrusted more (a combined sketch of ideas 2-4 follows this list).
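
A combined sketch of ideas 2-4, assuming WPILib `Pose2d`/`ChassisSpeeds` inputs; every threshold, the field dimensions, and all names here are placeholder assumptions to be tuned, not a decided design:

```java
import java.util.HashMap;
import java.util.Map;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

public class SmartVisionFilter {
    private static final double FIELD_LENGTH_M = 16.54;      // approximate FRC field size
    private static final double FIELD_WIDTH_M = 8.21;
    private static final double MAX_JUMP_M_PER_S = 5.0;      // idea 3: per-tag jump limit
    private static final double MAX_TRUSTED_SPEED_MPS = 3.0; // idea 4: motion thresholds
    private static final double MAX_TRUSTED_OMEGA_RADPS = 2.0;

    private final Map<Integer, Pose2d> lastPoseByTag = new HashMap<>();
    private final Map<Integer, Double> lastTimeByTag = new HashMap<>();

    public boolean accept(int tagId, Pose2d suggested, double timestampSec, ChassisSpeeds speeds) {
        // Idea 2: discard anything outside the field boundary.
        double x = suggested.getX(), y = suggested.getY();
        if (x < 0 || x > FIELD_LENGTH_M || y < 0 || y > FIELD_WIDTH_M) {
            return false;
        }

        // Idea 4: distrust readings taken while moving or turning quickly.
        double speed = Math.hypot(speeds.vxMetersPerSecond, speeds.vyMetersPerSecond);
        if (speed > MAX_TRUSTED_SPEED_MPS
                || Math.abs(speeds.omegaRadiansPerSecond) > MAX_TRUSTED_OMEGA_RADPS) {
            return false;
        }

        // Idea 3: compare against the previous suggestion from the same tag.
        Pose2d last = lastPoseByTag.get(tagId);
        Double lastT = lastTimeByTag.get(tagId);
        boolean ok = true;
        if (last != null && lastT != null) {
            double dt = Math.max(timestampSec - lastT, 1e-3);
            double jump = suggested.getTranslation().getDistance(last.getTranslation());
            if (jump / dt > MAX_JUMP_M_PER_S) {
                ok = false; // e.g. a 10 m jump in 0.1 s is rejected
            }
        }
        lastPoseByTag.put(tagId, suggested);
        lastTimeByTag.put(tagId, timestampSec);
        return ok;
    }
}
```

Idea 1 (keeping only the yaw information from long-range tags) is not covered by this accept/reject gate; it would likely need support from the pose estimator itself, e.g. weighting rotation and translation corrections separately.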
