
Links to Publicly Available Input Data


PLACES Rover Localization Data

To make contextual and orbital tilesets, Landform typically uses the best_interp view, which inherits the manually localized solutions for rover end-of-drive locations from the best_tactical view and interpolates the remaining rover positions between those end-of-drive locations using the telemetry view. Landform does not use PLACES to make tactical tilesets.
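
The idea behind best_interp can be sketched roughly as follows. This is only a conceptual illustration of blending manually localized end-of-drive solutions with telemetry positions, not Landform's or PLACES' actual implementation; all names and indices here are hypothetical.

```python
# Conceptual sketch only: blend manually localized end-of-drive solutions
# with raw telemetry positions, roughly analogous to what a best_interp-style
# view provides.  Function and variable names are hypothetical.
import numpy as np

def interpolate_positions(telemetry, end_of_drive):
    """telemetry: {index: (x, y, z)} raw telemetry positions.
    end_of_drive: {index: (x, y, z)} manually localized solutions at
    end-of-drive indices (a subset of the telemetry indices).
    Returns corrected positions for every telemetry index."""
    indices = sorted(telemetry)
    anchors = sorted(end_of_drive)
    # correction = localized position minus telemetry position at each anchor
    corrections = {i: np.subtract(end_of_drive[i], telemetry[i]) for i in anchors}
    corrected = {}
    for i in indices:
        if i <= anchors[0]:
            c = corrections[anchors[0]]
        elif i >= anchors[-1]:
            c = corrections[anchors[-1]]
        else:
            # linearly interpolate the correction between bracketing anchors
            lo = max(a for a in anchors if a <= i)
            hi = min(a for a in anchors if a >= i)
            t = 0.0 if hi == lo else (i - lo) / (hi - lo)
            c = (1 - t) * corrections[lo] + t * corrections[hi]
        corrected[i] = tuple(np.asarray(telemetry[i]) + c)
    return corrected
```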

Orbital Digital Elevation Map (DEM) GeoTIFF

resolution: 1 sample per square meter (i.e. 1 m ground sample distance)

Landform typically uses the orbital DEM to make contextual and orbital tilesets. However, it's possible to build a contextual tileset without using the orbital DEM.
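
As an illustration of consuming such a product (this is not Landform code; the filename and coordinates are hypothetical), an elevation value can be read from a DEM GeoTIFF with the rasterio library:

```python
# Minimal sketch: sample an orbital DEM GeoTIFF at a map coordinate.
# Uses the rasterio library; the filename and coordinates are hypothetical.
import rasterio

easting, northing = 4354320.0, 1093050.0      # hypothetical coords in the DEM's CRS

with rasterio.open("orbital_dem.tif") as dem:
    elevations = dem.read(1)                  # first band: elevation in meters
    print("pixel size:", dem.res)             # e.g. (1.0, 1.0) for 1 m/pixel
    row, col = dem.index(easting, northing)   # map coordinates -> pixel indices
    print("elevation:", elevations[row, col])
```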

Orbital Orthoimage GeoTIFF

resolution: 4 samples per square meter (i.e. 0.5 m ground sample distance)

Landform typically uses the orbital orthoimage to make contextual and orbital tilesets. However, it's possible to build both without this product, in which case there will be no texture coloration contribution from orbital.
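
To illustrate how an orthoimage can contribute color to terrain built from the DEM (again only a sketch with hypothetical filenames, not Landform's code), the map location of each DEM cell can be looked up in the finer orthoimage:

```python
# Sketch: look up the orthoimage color at the map location of a DEM cell.
# Filenames are hypothetical; because the orthoimage is finer than the DEM
# (e.g. 0.5 m/pixel vs 1 m/pixel), the two rasters are aligned by map
# coordinates rather than by pixel indices.
import rasterio

with rasterio.open("orbital_dem.tif") as dem, \
     rasterio.open("orbital_ortho.tif") as ortho:
    row, col = 100, 200                  # hypothetical DEM cell
    x, y = dem.xy(row, col)              # DEM cell center in map coordinates
    orow, ocol = ortho.index(x, y)       # corresponding orthoimage pixel
    color = ortho.read()[:, orow, ocol]  # band values at that pixel
    print("color at DEM cell:", color)
```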

Hazcam

Documentation: https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_mission/document_camera/Mars2020_Camera_SIS.pdf

RAS, RZS:

XYZ, UVW, MXY:

obj:

In contextual tilesets Landform typically uses:

  • Hazcam left and right eye front and rear radiometrically calibrated visible light images with nonlinear camera models (regex [FB][LR].*RAS_N.*IMG), though zenith scaled radiance (RZS instead of RAS) can also be used
  • Hazcam left eye front and rear stereo vision point and normal clouds with nonlinear camera models (regex [FB]L.*(XYZ|UVW)_N.*IMG); a filename-matching sketch follows this list
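
As a hedged illustration (not Landform code; the directory name is hypothetical), these regexes can be used to select the relevant Hazcam products from a directory of downloaded PDS files:

```python
# Sketch: filter downloaded PDS products for the Hazcam inputs listed above.
# The directory name is hypothetical; the patterns mirror the regexes in the
# list, with RZS allowed as a substitute for RAS.
import re
from pathlib import Path

IMAGE_RE = re.compile(r"[FB][LR].*(RAS|RZS)_N.*IMG$")  # left+right eye images
CLOUD_RE = re.compile(r"[FB]L.*(XYZ|UVW)_N.*IMG$")     # left eye points/normals

files = [p.name for p in Path("hazcam_products").glob("*.IMG")]
images = [f for f in files if IMAGE_RE.match(f)]
clouds = [f for f in files if CLOUD_RE.match(f)]
print(f"{len(images)} image products, {len(clouds)} point/normal products")
```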

Landform can also make tactical tilesets from obj format Hazcam wedge meshes.

Navcam

Documentation: https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_mission/document_camera/Mars2020_Camera_SIS.pdf

RAS, RZS:

XYZ, UVW, MXY:

obj:

In contextual tilesets Landform typically uses:

  • Navcam left and right eye radiometrically calibrated visible light images with nonlinear camera models (regex N[LR].*RAS_N.*IMG), though zenith scaled radiance (RZS instead of RAS) can also be used
  • Navcam left eye stereo vision point and normal clouds with nonlinear camera models (regex NL.*(XYZ|UVW)_N.*IMG)

Landform can also make tactical tilesets from obj format Navcam wedge meshes.
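
As a minimal sketch of what consuming such a wedge mesh involves (standard Wavefront OBJ parsing, not Landform's loader; the filename is hypothetical):

```python
# Sketch: read vertices, texture coordinates, and triangle faces from a
# Wavefront OBJ wedge mesh.  The filename is hypothetical; only the OBJ
# record types commonly present in wedge meshes are handled.
verts, uvs, faces = [], [], []
with open("navcam_wedge.obj") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":        # vertex position: x y z
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "vt":     # texture coordinate: u v
            uvs.append(tuple(float(x) for x in parts[1:3]))
        elif parts[0] == "f":      # face: vertex[/uv[/normal]] references
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
print(len(verts), "vertices,", len(faces), "faces")
```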

Mastcam-Z

Documentation: https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_mission/document_camera/Mars2020_Camera_SIS.pdf

RAS, RZS:

obj:

In contextual tilesets Landform typically uses:

  • Mastcam-Z left and right eye radiometrically calibrated visible light images with nonlinear camera models (regex Z[LR].*RAS_N.*IMG), though zenith scaled radiance (RZS instead of RAS) can also be used

Landform can also make tactical tilesets from obj format Mastcam-Z wedge meshes.