We containerize the Sat2Graph inference server with several models (seven so far) into a single container image.
The inference server supports two inference modes. (1) Given a lat/lon coordinate and the size of the tile, the inference server can automatically download MapBox images and run inference on them. (2) It can run on custom input images as long as the ground sampling distance (GSD, e.g., 50 cm/pixel) is provided.
We also include code in this container that makes it easy to download OpenStreetMap graphs (often used as ground truth). Please check out the detailed instructions below.
To install Docker (with GPU support), you can check out the instructions here.
Start the server with GPU support:

docker run -p8010:8000 -p8011:8001 --gpus all -it --rm songtaohe/sat2graph_inference_server_gpu:latest

Or just run it on CPUs (the image is smaller):

docker run -p8010:8000 -p8011:8001 -it --rm songtaohe/sat2graph_inference_server_cpu:latest

You can find these two commands in run-cpu.sh and run-gpu.sh.
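Once the container is running, the server listens on the mapped host ports (8010 and 8011 in the commands above). A minimal sketch for checking that the server is reachable before submitting tasks — the helper name is ours, not part of the repository:

```python
import socket

def server_up(host="localhost", port=8010, timeout=2):
    """Return True if a TCP connection to the server port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("server reachable:", server_up())
```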
You can use the Python scripts in the scripts folder to start inference tasks.
For example, to run inference on MapBox images, use the following command:

python infer_mapbox_input.py -lat 47.601214 -lon -122.134466 -tile_size 500 -model_id 2 -output out.json

This script uses meters as the unit for tile_size. The model_id argument determines which model to use; the supported models are listed below.
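Since tile_size is in meters, the model's GSD determines how many pixels the downloaded tile spans. A quick sketch of that relationship (the helper name is ours; the GSD values come from the model table below):

```python
# Relate tile_size (meters) to the tile's side length in pixels.
def tile_pixels(tile_size_m, gsd_m_per_px):
    """Number of pixels along one side of a square tile."""
    return round(tile_size_m / gsd_m_per_px)

# A 500 m tile at 1 m/pixel (models 0-1) spans 500x500 pixels;
# at 50 cm/pixel (models 2-3) it spans 1000x1000 pixels.
print(tile_pixels(500, 1.0))  # 500
print(tile_pixels(500, 0.5))  # 1000
```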
To run inference on custom images (e.g., sample.png), use the following command:

python infer_custom_input.py -input sample.png -gsd 0.5 -model_id 2 -output out.json

Here, the gsd argument is the ground sampling distance (i.e., the spatial resolution) of the image, in meters per pixel.
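The GSD also tells you how much ground a custom image covers, which is useful for sanity-checking the value you pass in. A small sketch (the helper name is ours):

```python
# Ground distance covered by an image dimension at a given GSD.
def ground_coverage_m(pixels, gsd_m_per_px):
    """Ground distance (meters) spanned by `pixels` pixels at the given GSD."""
    return pixels * gsd_m_per_px

# A 2048x2048 image at 0.5 m/pixel covers roughly 1024 m x 1024 m.
print(ground_coverage_m(2048, 0.5))  # 1024.0
```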
| Model ID | Note |
|---|---|
| 0 | Sat2Graph-V1, 80-City US, 1 meter GSD, DLA backbone (2x more channels compared with model-1) |
| 1 | Sat2Graph-V1, 20-City US, 1 meter GSD, DLA backbone (the one used in the Sat2Graph paper) |
| 2 | Sat2Graph-V2, 20-City US, 50cm GSD (preview, same GTE representation but new backbone) |
| 3 | Sat2Graph-V2, 80-City Global, 50cm GSD (preview, same GTE representation but new backbone) |
| 4 | UNet segmentation (our implementation), 20-City US, 1 meter GSD |
| 5 | DeepRoadMapper segmentation (our implementation), 20-City US, 1 meter GSD |
| 6 | JointOrientation segmentation (our implementation), 20-City US, 1 meter GSD |
To get OSM graphs, set the osm_only flag to 1 when using infer_mapbox_input.py. For example:
python infer_mapbox_input.py -lat 47.601214 -lon -122.134466 -tile_size 500 -osm_only 1 -output out.json
The resulting out.json contains the road graph from OpenStreetMap.
The inference result (a graph) is stored in JSON format: essentially a list of edges, where each edge stores the coordinates of its two vertices. We include a simple script to visualize it (requires OpenCV).
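A minimal sketch of reading such an edge-list JSON. The exact field layout is an assumption here — we take it to be a flat list of [[x1, y1], [x2, y2]] pairs, and the sample data is hypothetical:

```python
import json

# Hypothetical sample in the assumed format: a list of edges,
# each edge being the coordinates of its two endpoint vertices.
sample = '[[[10, 20], [30, 40]], [[30, 40], [50, 60]]]'
edges = json.loads(sample)

# Collect the distinct vertices and report basic graph statistics.
vertices = {tuple(v) for edge in edges for v in edge}
print(len(edges), "edges,", len(vertices), "vertices")  # 2 edges, 3 vertices
```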
python vis.py tile_size input_json output_image
You can also convert this JSON file into the pickle format compatible with the Sat2Graph evaluation code (TOPO and APLS).
python convert.py input.json output.p
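convert.py handles the conversion for you. For illustration, a common pickled-graph representation is an adjacency dictionary keyed by vertex coordinates — a sketch under the assumption that the JSON is a flat edge list (the actual layout written by convert.py may differ):

```python
import pickle
from collections import defaultdict

def edges_to_adjacency(edges):
    """Build an undirected adjacency dict {vertex: [neighbors]} from an edge list."""
    graph = defaultdict(list)
    for a, b in edges:
        a, b = tuple(a), tuple(b)
        graph[a].append(b)
        graph[b].append(a)
    return dict(graph)

# Hypothetical edge list in the assumed JSON layout.
edges = [[[10, 20], [30, 40]], [[30, 40], [50, 60]]]
graph = edges_to_adjacency(edges)
print(graph[(30, 40)])  # [(10, 20), (50, 60)]

# Serialize the adjacency dict the same way a pickle-based pipeline would.
blob = pickle.dumps(graph)
```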
The intermediate results of each inference task (e.g., the segmentation mask, the downloaded satellite imagery) are stored at http://localhost:8010/t(task_id)/ (replace task_id with the task ID returned after each run).