Links to Publicly Available Input Data
- https://pds-geosciences.wustl.edu/missions/mars2020/places.htm
- https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_rover_places
- https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_rover_places/document/Mars2020_Rover_PLACES_PDS_SIS.pdf
To make contextual and orbital tilesets, Landform typically uses the best_interp view, which inherits the manually localized solutions for rover end-of-drive locations from the best_tactical view, and also interpolates the other rover positions from the telemetry view between those. Landform does not use PLACES to make tactical tilesets.
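The best_interp behavior described above can be illustrated with a small sketch. This is hypothetical code, not PLACES or Landform internals: it assumes per-time rover positions from a telemetry view plus manually localized end-of-drive anchor positions, and blends the anchor corrections linearly into the telemetry track between anchors.

```python
import bisect

def best_interp(telemetry, anchors):
    """Sketch of a best_interp-style view (illustration only).

    telemetry: list of (t, x, y) positions from the telemetry view.
    anchors:   {t: (x, y)} manually localized end-of-drive positions
               (as inherited from a best_tactical view); anchor times are
               assumed to also appear in the telemetry list.
    Returns telemetry positions shifted so the track passes through the
    anchors, with the correction interpolated linearly between anchors.
    """
    ts = sorted(anchors)
    tel = {t: (x, y) for t, x, y in telemetry}
    # correction = localized anchor position minus telemetry position
    corr = {t: (anchors[t][0] - tel[t][0], anchors[t][1] - tel[t][1]) for t in ts}
    out = []
    for t, x, y in telemetry:
        if t <= ts[0]:
            dx, dy = corr[ts[0]]
        elif t >= ts[-1]:
            dx, dy = corr[ts[-1]]
        else:
            i = bisect.bisect_right(ts, t)   # bracketing anchor times
            t0, t1 = ts[i - 1], ts[i]
            w = (t - t0) / (t1 - t0)
            dx = (1 - w) * corr[t0][0] + w * corr[t1][0]
            dy = (1 - w) * corr[t0][1] + w * corr[t1][1]
        out.append((t, x + dx, y + dy))
    return out
```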
- https://astrogeology.usgs.gov/search/map/Mars/Mars2020/JEZ_hirise_soc_006_DTM_MOLAtopography_DeltaGeoid_1m_Eqc_latTs0_lon0_blend40
- https://planetarymaps.usgs.gov/mosaic/mars2020_trn/HiRISE/JEZ_hirise_soc_006_DTM_MOLAtopography_DeltaGeoid_1m_Eqc_latTs0_lon0_blend40.tif
resolution: 1 m/pixel (1 sample per square meter)
Landform typically uses the orbital DEM to make contextual and orbital tilesets. However, it's possible to build a contextual tileset without using the orbital DEM.
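The product name encodes the map projection: equirectangular (`Eqc`) with true-scale latitude 0 and center longitude 0, at 1 m/pixel. As a sketch of what that means, mapping a Mars lat/lon to projected map coordinates looks like the following (the sphere radius is an assumption; real code should read the radius and geotransform from the GeoTIFF itself):

```python
import math

MARS_RADIUS_M = 3396190.0   # assumed sphere radius; verify against the product label
METERS_PER_PIXEL = 1.0      # from the product name: ..._1m_Eqc_latTs0_lon0...

def latlon_to_map_meters(lat_deg, lon_deg):
    """Equirectangular (Eqc, latTs0, lon0) projection: x is meters east of
    longitude 0, y is meters north of the equator.  Pixel indices then
    follow from the GeoTIFF geotransform, which should be read from the
    file in real use rather than assumed."""
    x = MARS_RADIUS_M * math.radians(lon_deg)
    y = MARS_RADIUS_M * math.radians(lat_deg)
    return x, y
```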
- https://astrogeology.usgs.gov/search/map/Mars/Mars2020/JEZ_hirise_soc_006_orthoMosaic_25cm_Eqc_latTs0_lon0_first
- https://planetarymaps.usgs.gov/mosaic/mars2020_trn/HiRISE/JEZ_hirise_soc_007_orthoMosaic_25cm_Ortho_blend120.tif
resolution: 25 cm/pixel (16 samples per square meter)
Landform typically uses the orbital orthoimage to make contextual and orbital tilesets. However, it's possible to build both without this product, in which case there will be no texture coloration contribution from orbital.
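Since the orthoimage is 25 cm/pixel and the DEM 1 m/pixel, each DEM cell corresponds to a 4x4 block of orthoimage pixels. A minimal sketch of aligning the two grids by block averaging (an illustration of the resolution relationship, not Landform's actual texture sampling):

```python
def block_mean(img, factor=4):
    """Downsample a 2D list of pixel values by averaging factor x factor
    blocks, e.g. 25 cm/pixel orthoimage pixels onto a 1 m/pixel DEM grid.
    Trailing rows/columns that don't fill a whole block are dropped."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [img[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out
```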
Documentation: https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_mission/document_camera/Mars2020_Camera_SIS.pdf
RAS, RZS:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_hazcam_ops_calibrated
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_hazcam_ops_calibrated
XYZ, UVW, MXY:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_hazcam_ops_stereo
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_hazcam_ops_stereo
obj:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_hazcam_ops_mesh
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_hazcam_ops_mesh
In contextual tilesets Landform typically uses:
- Hazcam left and right eye, front and rear, radiometrically calibrated visible light images with nonlinear camera models (regex `[FB][LR].*RAS_N.*IMG`), though zenith scaled radiance (`RZS` instead of `RAS`) can also be used
- Hazcam left eye, front and rear, stereo vision point and normal clouds with nonlinear camera models (regex `[FB]L.*(XYZ|UVW)_N.*IMG`)
Landform can also make tactical tilesets from obj format Hazcam wedge meshes.
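The wedge meshes are Wavefront OBJ files. As a sketch, a minimal reader for just the vertex and face records might look like this (ignores normals, texture coordinates, and materials, which real OBJ products may carry):

```python
def read_obj(text):
    """Parse vertices and triangle faces from Wavefront OBJ text.

    Handles only 'v' and 'f' records.  Face indices may carry /vt/vn
    suffixes and are 1-based in OBJ; they are returned 0-based here.
    Faces with more than 3 vertices are fan-triangulated."""
    verts, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            idx = [int(p.split("/")[0]) - 1 for p in parts[1:]]
            for k in range(1, len(idx) - 1):
                faces.append((idx[0], idx[k], idx[k + 1]))
    return verts, faces
```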
Documentation: https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_mission/document_camera/Mars2020_Camera_SIS.pdf
RAS, RZS:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_navcam_ops_calibrated
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_navcam_ops_calibrated
XYZ, UVW, MXY:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_navcam_ops_stereo
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_navcam_ops_stereo
obj:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_navcam_ops_mesh
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_navcam_ops_mesh
In contextual tilesets Landform typically uses:
- Navcam left and right eye radiometrically calibrated visible light images with nonlinear camera models (regex `N[LR].*RAS_N.*IMG`), though zenith scaled radiance (`RZS` instead of `RAS`) can also be used
- Navcam left eye stereo vision point and normal clouds with nonlinear camera models (regex `NL.*(XYZ|UVW)_N.*IMG`)
Landform can also make tactical tilesets from obj format Navcam wedge meshes.
Documentation: https://pds-geosciences.wustl.edu/m2020/urn-nasa-pds-mars2020_mission/document_camera/Mars2020_Camera_SIS.pdf
RAS, RZS:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_mastcamz_ops_calibrated
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_mastcamz_ops_calibrated
obj:
- https://planetarydata.jpl.nasa.gov/img/data/mars2020/mars2020_mastcamz_ops_mesh
- https://pds-imaging.jpl.nasa.gov/beta/archive-explorer?mission=mars_2020&bundle=mars2020_mastcamz_ops_mesh
In contextual tilesets Landform typically uses:
- Mastcam-Z left and right eye radiometrically calibrated visible light images with nonlinear camera models (regex `Z[LR].*RAS_N.*IMG`), though zenith scaled radiance (`RZS` instead of `RAS`) can also be used
Landform can also make tactical tilesets from obj format Mastcam-Z wedge meshes.
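The product-selection regexes quoted in the sections above can be applied to filenames with Python's `re` module. A sketch (the example filenames below are made up for illustration, not real Mars 2020 product IDs):

```python
import re

# Selection patterns quoted from the sections above.
PATTERNS = {
    "hazcam_ras":     r"[FB][LR].*RAS_N.*IMG",
    "hazcam_xyz_uvw": r"[FB]L.*(XYZ|UVW)_N.*IMG",
    "navcam_ras":     r"N[LR].*RAS_N.*IMG",
    "navcam_xyz_uvw": r"NL.*(XYZ|UVW)_N.*IMG",
    "mastcamz_ras":   r"Z[LR].*RAS_N.*IMG",
}

def select(filenames, pattern):
    """Return the filenames matching a product-selection regex."""
    rx = re.compile(pattern)
    return [f for f in filenames if rx.search(f)]
```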