To use the pretrained models, we need to preprocess Sentinel-1 images identically to the KuroSiwo dataset; otherwise we risk degraded performance from subtle data distribution misalignment.
In the paper, Section 3 describes it as:
> [Using Sentinel Application Platform (SNAP) to apply] precise orbit application, removal of thermal and border noise, land and sea masking, calibration, speckle filtering, and terrain correction using an external digital elevation model.
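To make the ask concrete, here is a rough SNAP `gpt` graph sketching those steps as I understand them. The operator names are standard SNAP operators, but every parameter value (orbit type, filter, DEM, output path, etc.) is my guess, not the actual KuroSiwo configuration; that configuration is exactly what I'm hoping you can share.

```xml
<!-- Hypothetical SNAP gpt graph; parameter values are guesses, not the KuroSiwo settings. -->
<graph id="Sentinel1-GRD-Preprocessing">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>INPUT_S1_GRD_PRODUCT.zip</file> <!-- placeholder path -->
    </parameters>
  </node>
  <node id="Apply-Orbit-File">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters>
      <orbitType>Sentinel Precise (Auto Download)</orbitType>
    </parameters>
  </node>
  <node id="ThermalNoiseRemoval">
    <operator>ThermalNoiseRemoval</operator>
    <sources><sourceProduct refid="Apply-Orbit-File"/></sources>
    <parameters/>
  </node>
  <node id="Remove-GRD-Border-Noise">
    <operator>Remove-GRD-Border-Noise</operator>
    <sources><sourceProduct refid="ThermalNoiseRemoval"/></sources>
    <parameters/>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="Remove-GRD-Border-Noise"/></sources>
    <parameters>
      <outputSigmaBand>true</outputSigmaBand> <!-- sigma0? beta0? unclear from the paper -->
    </parameters>
  </node>
  <node id="Speckle-Filter">
    <operator>Speckle-Filter</operator>
    <sources><sourceProduct refid="Calibration"/></sources>
    <parameters>
      <filter>Lee</filter> <!-- filter type and window size are guesses -->
      <filterSizeX>5</filterSizeX>
      <filterSizeY>5</filterSizeY>
    </parameters>
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="Speckle-Filter"/></sources>
    <parameters>
      <demName>External DEM</demName> <!-- the paper mentions an external DEM; which one? -->
    </parameters>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="Terrain-Correction"/></sources>
    <parameters>
      <file>OUTPUT_PRODUCT.tif</file> <!-- placeholder path -->
      <formatName>GeoTIFF</formatName>
    </parameters>
  </node>
</graph>
```

I'm also unsure where the land/sea masking step (`Land-Sea-Mask`) fits in the chain, and whether you used `gpt`, snappy, or the SNAP GUI, so any artifact from the original run would clear all of this up.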
Could you provide either a script or a configuration file or something like that which would allow others to exactly replicate this preprocessing on other Sentinel-1 data?
P.S. Sorry for raising so many issues; I'm just excited to use the model!