Since we used different cameras from those used in the Unprocessing paper, we followed the authors' suggestion and modified `random_ccm()` and `random_gains()` in `unprocess.py` to best match the distribution of image metadata from the cameras we used.
Due to copyright issues, we cannot redistribute third-party code. To reproduce our procedure for UPI, please copy the official code to `data_generation/unprocess.py` and apply the following modifications.
We used camera sensors from the NUS dataset and the nighttime dataset of Day-to-Night as our target sensors. Therefore, the code was modified to use those sensors' CST matrices and gain ranges.
- Replace `xyz2cams` in `random_ccm()` (line 32) with:

  ```python
  xyz2cams = pickle.load(open('assets/container_dngs/NUS_S20FE_CST_mats.p', 'rb'))
  ```
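As a minimal sketch of this change, the snippet below builds a stand-in pickle file (the matrix values and the file name are placeholders, not the repo's measured NUS/S20FE CSTs), loads it the same way the modified `random_ccm()` does, samples one matrix, and row-normalizes it as the original `unprocess.py` does:

```python
import os
import pickle
import random
import tempfile

import numpy as np

# Stand-in for assets/container_dngs/NUS_S20FE_CST_mats.p:
# a pickled list of per-sensor 3x3 XYZ-to-camera CST matrices.
# These values are illustrative only.
mats = [np.eye(3),
        np.array([[1.06, -0.49, -0.10],
                  [-0.43, 1.20, 0.24],
                  [-0.07, 0.22, 0.52]])]
path = os.path.join(tempfile.mkdtemp(), 'cst_mats.p')
with open(path, 'wb') as f:
    pickle.dump(mats, f)

# Mirrors the modified random_ccm(): load measured CST matrices
# instead of the paper's hard-coded xyz2cams, then sample one.
xyz2cams = pickle.load(open(path, 'rb'))
xyz2cam = random.choice(xyz2cams)

# As in the original unprocess.py, normalize each row to sum to 1.
xyz2cam = xyz2cam / xyz2cam.sum(axis=1, keepdims=True)
print(np.allclose(xyz2cam.sum(axis=1), 1.0))  # True
```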
- Return the sampled `xyz2cam`; this matrix is needed in `package_exr_to_dng_upi.py` for building the DNG.
- Return both `rgb2cam` and `xyz2cam` from `random_ccm()` (line 58, line 124).
- In lines 141-146, change `metadata` to:

  ```python
  metadata = {
      'cam2rgb': cam2rgb,
      'rgb_gain': rgb_gain,
      'red_gain': red_gain,
      'blue_gain': blue_gain,
      'xyz2cam': xyz2cam,
  }
  ```
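A minimal sketch of the extended metadata (identity matrices and unit gains are placeholders, not values the pipeline would actually produce) shows the one new field, `xyz2cam`, riding along so it can later be written into the DNG:

```python
import numpy as np

# Placeholder values standing in for the pipeline's sampled CCM and gains.
rgb2cam = np.eye(3)
cam2rgb = np.linalg.inv(rgb2cam)
xyz2cam = np.eye(3)
rgb_gain, red_gain, blue_gain = 1.0, 2.0, 2.5

# Same dict as in the modified unprocess.py; 'xyz2cam' is the added key.
metadata = {
    'cam2rgb': cam2rgb,
    'rgb_gain': rgb_gain,
    'red_gain': red_gain,
    'blue_gain': blue_gain,
    'xyz2cam': xyz2cam,
}
print(sorted(metadata.keys()))
```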
- Replace `red_gain` and `blue_gain` in `random_gains()` (lines 67-68) with:

  ```python
  red_gain = tf.random.uniform((), 1.0, 3.3)
  blue_gain = tf.random.uniform((), 1.3, 4.4)
  ```
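For readers without TensorFlow installed, the same sampling can be sketched in NumPy (the repo itself uses `tf.random.uniform`; only the widened ranges matter here):

```python
import numpy as np

rng = np.random.default_rng(0)

# NumPy equivalent of the modified random_gains() draws: the red/blue
# white-balance gain ranges are widened to match the target sensors'
# metadata statistics.
red_gain = rng.uniform(1.0, 3.3)
blue_gain = rng.uniform(1.3, 4.4)
print(1.0 <= red_gain <= 3.3 and 1.3 <= blue_gain <= 4.4)  # True
```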
- Comment out `image = mosaic(image)` (line 139), because the illumination estimation experiments need demosaiced images instead of Bayer images.
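To see what this change preserves, here is a simplified NumPy sketch of `mosaic()` (the repo's TF version packs an RGGB pattern into a half-resolution 4-channel stack in the same way); with line 139 commented out, the image keeps its full-resolution 3-channel form:

```python
import numpy as np

def mosaic(image):
    # Simplified sketch of unprocess.py's mosaic(): pack an RGGB Bayer
    # pattern into a half-resolution, 4-channel stack.
    red = image[0::2, 0::2, 0]
    green_red = image[0::2, 1::2, 1]
    green_blue = image[1::2, 0::2, 1]
    blue = image[1::2, 1::2, 2]
    return np.stack([red, green_red, green_blue, blue], axis=-1)

image = np.random.rand(8, 8, 3)
print(mosaic(image).shape)  # (4, 4, 4): Bayer output
# With the mosaic line commented out, `image` stays (8, 8, 3) -- the
# demosaiced form the illumination-estimation experiments consume.
print(image.shape)  # (8, 8, 3)
```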
Some small syntax modifications were needed to make the code work with TensorFlow 2.0; other fixes may work as well.
- Define `tf.to_float = lambda x: tf.cast(x, tf.float32)` (`tf.to_float` is deprecated in TF 2.0)
- Change `tf.random_uniform` to `tf.random.uniform`
- Change `tf.random_normal` to `tf.random.normal`
- Remove the `None` in `tf.name_scope(None, 'unprocess')`
- Change `tf.matrix_inverse` to `tf.linalg.inv`
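The renames above can be applied mechanically; the helper below is a hypothetical sketch (not part of the repo) that performs the substitutions textually, which is safe here because the TF1 names are unambiguous substrings in `unprocess.py`:

```python
# Mapping of the TF1 symbols used by unprocess.py to their TF2
# replacements (the tf.to_float shim must still be defined separately).
TF2_RENAMES = {
    'tf.random_uniform': 'tf.random.uniform',
    'tf.random_normal': 'tf.random.normal',
    'tf.matrix_inverse': 'tf.linalg.inv',
    "tf.name_scope(None, 'unprocess')": "tf.name_scope('unprocess')",
}

def port_to_tf2(source):
    # Naive textual port: apply each rename in turn.
    for old, new in TF2_RENAMES.items():
        source = source.replace(old, new)
    return source

snippet = "noise = tf.random_normal(tf.shape(image))"
print(port_to_tf2(snippet))
# noise = tf.random.normal(tf.shape(image))
```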