The following scripts assume or create the following directory structure:

```
|-- illum_est_expts
|   |-- data
|   |   |-- SamsungNX2000
|   |   |   |-- ours
|   |   |   |-- real
|   |   |   `-- upi
|   |   |-- Canon1DsMkIII
|   |   |-- Canon600D
|   |   |-- FujifilmXM1
|   |   |-- NikonD40
|   |   |-- NikonD5200
|   |   |-- OlympusEPL6
|   |   |-- PanasonicGX1
|   |   `-- SonyA57
|   |-- expts
|   |   `-- Canon1DsMkIII_illum_est_ours
|   |       |-- models
|   |       |-- tensorboard
|   |       `-- results
|   |-- nus_metadata
|   |   `-- nus_outdoor_gt_illum_mats
|   `-- synthia
|       `-- SYNTHIA_RAND_CVPR16
|           `-- RGB
```
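If you want to create this skeleton up front rather than letting the scripts create it, a minimal sketch is below. The camera names and subfolder names are taken from the tree above; the root path and the helper itself are illustrative, not part of the repo's scripts.

```python
from pathlib import Path

# Camera list from the directory tree above.
CAMERAS = [
    "SamsungNX2000", "Canon1DsMkIII", "Canon600D", "FujifilmXM1",
    "NikonD40", "NikonD5200", "OlympusEPL6", "PanasonicGX1", "SonyA57",
]

def make_skeleton(root: Path) -> None:
    """Create the illum_est_expts directory skeleton under `root`."""
    # data/<camera>/{ours,real,upi} for every camera
    for cam in CAMERAS:
        for method in ("ours", "real", "upi"):
            (root / "data" / cam / method).mkdir(parents=True, exist_ok=True)
    # expts/<camera>_illum_est_<method>/{models,tensorboard,results}
    for sub in ("models", "tensorboard", "results"):
        (root / "expts" / "Canon1DsMkIII_illum_est_ours" / sub).mkdir(
            parents=True, exist_ok=True)
    (root / "nus_metadata" / "nus_outdoor_gt_illum_mats").mkdir(
        parents=True, exist_ok=True)
    (root / "synthia" / "SYNTHIA_RAND_CVPR16" / "RGB").mkdir(
        parents=True, exist_ok=True)

# Usage (assumed root name): make_skeleton(Path("illum_est_expts"))
```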
For all methods, we use a subset of the SYNTHIA dataset for training and the NUS dataset for testing.
- Prepare the NUS dataset
  - Follow the instructions in illum_est_nus.pptx for each camera and put the images under `data/<camera>/real`
  - From the NUS dataset webpage, download the ground-truth illuminant (MAT) files and put them under both `data/nus_metadata/nus_outdoor_gt_illum_mats` and `data/<camera>/real` for each camera
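Each ground-truth file stores one RGB illuminant vector per image. The standard way to compare a predicted illuminant against such a ground-truth vector is the angular error; a minimal sketch, independent of the repo's scripts:

```python
import math

def angular_error_deg(pred, gt):
    """Angle in degrees between two RGB illuminant vectors.

    Illuminant estimation is evaluated up to scale, so only the
    direction of the vectors matters.
    """
    dot = sum(p * g for p, g in zip(pred, gt))
    norm = (math.sqrt(sum(p * p for p in pred))
            * math.sqrt(sum(g * g for g in gt)))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

For example, a prediction proportional to the ground truth scores 0 degrees, while `[1, 0, 0]` against `[0, 1, 0]` scores 90 degrees.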
- Download the SYNTHIA-RAND (CVPR16) dataset from link
- We used 200 images from `SYNTHIA_RAND_CVPR16/RGB` for training and validation
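A 200-image subset can be drawn reproducibly with a seeded sampler; this is only a sketch of the idea — the seed, split ratio, and helper name are assumptions, not the values used by the repo's scripts:

```python
import random

def pick_subset(filenames, n=200, val_fraction=0.1, seed=0):
    """Deterministically sample n images and split them into train/val lists.

    Sorting before sampling makes the result independent of the
    filesystem's listing order.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(filenames), n)
    n_val = int(n * val_fraction)
    return chosen[n_val:], chosen[:n_val]  # (train, val)

# Usage: train, val = pick_subset(p.name for p in rgb_dir.iterdir())
```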
```
python3 -m jobs.generate_dataset_illum_est_graphics2raw
python3 -m jobs.illum_est -c <camera1,camera2,...> -m ours
```
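The `-c` flag takes a comma-separated list of camera names. If you script the runs, a small helper can build the equivalent command line; the helper itself is illustrative (check the job module's own argument parsing for the authoritative flags):

```python
import sys

def illum_est_cmd(cameras, method):
    """argv for `python3 -m jobs.illum_est -c <cam1,cam2,...> -m <method>`."""
    return [sys.executable, "-m", "jobs.illum_est",
            "-c", ",".join(cameras), "-m", method]

# e.g., with the repo root on PYTHONPATH:
# subprocess.run(illum_est_cmd(["Canon1DsMkIII", "SonyA57"], "ours"), check=True)
```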
Due to copyright issues, we cannot redistribute third-party code. Please refer to upi.md before proceeding with the following steps.
```
python3 -m jobs.generate_dataset_illum_est_upi
python3 -m jobs.illum_est_upi
```
Already completed in *Prepare real data* above.

```
python3 -m jobs.illum_est -c <camera1,camera2,...> -m real
```