We implement a CycleGAN to translate Sentinel-1 SAR imagery (2 channels) into Sentinel-2 EO imagery (3 channels).
Objectives:
- Normalize and preprocess SAR & EO patches
- Train CycleGAN under the SAR→EO (RGB) configuration
- Evaluate results with spectral SSIM, PSNR, and visual comparisons
```
project/
├── Project1_SAR_to_EO/
│   ├── data/
│   │   ├── raw/               # original .pt samples
│   │   └── processed/         # normalized train/val splits
│   ├── checkpoints/           # saved model weights
│   ├── generated_samples/     # output EO images & comparisons
│   ├── preprocess.py          # preprocessing & splitting
│   ├── train_cycleGAN.py      # training script
│   ├── evaluate_results.py    # metrics & visualization
│   └── config.yaml            # hyperparameters & paths
├── README.md
└── requirements.txt
```
- Clone the repo:

  ```bash
  git clone https://github.com/your-org/your-repo.git
  cd your-repo/Project1_SAR_to_EO
  ```

- Create a virtual environment and install dependencies:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  pip install --upgrade pip
  pip install -r requirements.txt
  ```
- Place your 6,000 `.pt` files in `data/raw/`. Each file must be a dict:

  ```python
  {'A': Tensor(2×256×256),  # SAR (VV, VH)
   'B': Tensor(3×256×256)}  # EO (RGB)
  ```

- Run preprocessing and train/val split (80/20):

  ```bash
  python preprocess.py
  ```

  - Normalizes each channel to [-1, 1]
  - Saves to `data/processed/train/` and `data/processed/val/`
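The per-channel normalization step can be sketched as follows (a minimal sketch; the helper name is hypothetical and `preprocess.py` may scale differently, e.g. with fixed dataset-wide statistics instead of per-patch min/max):

```python
import torch

def normalize_per_channel(x: torch.Tensor) -> torch.Tensor:
    """Scale each channel of a (C, H, W) tensor independently to [-1, 1]."""
    flat = x.view(x.size(0), -1)
    mins = flat.min(dim=1).values.view(-1, 1, 1)
    maxs = flat.max(dim=1).values.view(-1, 1, 1)
    # Small epsilon guards against division by zero on constant channels
    return 2 * (x - mins) / (maxs - mins + 1e-8) - 1

# Example: a 2-channel SAR patch with values in a dB-like range
sar = torch.rand(2, 256, 256) * 40 - 30
sar_norm = normalize_per_channel(sar)
```

Normalizing VV and VH independently matters because the two polarizations have different dynamic ranges.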
Edit `config.yaml` to adjust hyperparameters (batch size, epochs, learning rate). Then run:

```bash
python train_cycleGAN.py
```

Model checkpoints will be saved under `checkpoints/` as `G_A2B_epoch{n}.pth`.
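The core of the training objective is the adversarial plus cycle-consistency loss. A minimal sketch of the generator-side loss (network definitions elided; the names `G_A2B`, `G_B2A`, `D_B`, and the weight `lambda_cyc` are assumptions chosen to match the checkpoint naming above, with the actual value living in `config.yaml`):

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
mse = nn.MSELoss()   # LSGAN-style adversarial loss
lambda_cyc = 10.0    # common CycleGAN default; configurable

def generator_loss(G_A2B, G_B2A, D_B, real_A, real_B):
    fake_B = G_A2B(real_A)   # SAR -> EO translation
    rec_A = G_B2A(fake_B)    # EO -> SAR reconstruction (cycle)
    pred = D_B(fake_B)
    adv = mse(pred, torch.ones_like(pred))   # fool the EO discriminator
    cyc = l1(rec_A, real_A) * lambda_cyc     # cycle-consistency penalty
    return adv + cyc
```

The symmetric B→A direction is trained the same way; only the SAR→EO half is shown here.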
Run metrics computation and generate example images:

```bash
python evaluate_results.py
```

This will:

- Compute SSIM & PSNR for N validation samples
- Save comparison grids to `generated_samples/`
Leave space below to paste or link your visuals:
- Figure 1. SAR → EO Example (Before / After)
- Figure 2. Additional Examples
- Figure 3. Quantitative Metrics Over Samples
| Sample | SSIM | PSNR (dB) | NDVI |
|---|---|---|---|
| sample_0000.pt | 0.7110 | 24.58 | 0.5067 |
| sample_0001.pt | 0.6143 | 17.03 | 0.4683 |
| sample_0002.pt | 0.7328 | 26.68 | 0.4780 |
| sample_0003.pt | 0.6836 | 21.49 | 0.5638 |
| sample_0004.pt | 0.6749 | 26.36 | 0.5649 |
| sample_0005.pt | 0.7644 | 25.68 | 0.5153 |
| sample_0006.pt | 0.7750 | 26.93 | 0.5481 |
| sample_0007.pt | 0.6369 | 19.69 | 0.5864 |
| sample_0008.pt | 0.6452 | 19.65 | 0.6178 |
| sample_0009.pt | 0.7290 | 21.83 | 0.6507 |
| sample_0010.pt | 0.7109 | 22.55 | 0.6260 |
✅ Overall Performance:
- Average SSIM: 0.6980
- Average PSNR: 22.95 dB
- Average NDVI: 0.5569
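The overall figures follow directly from the per-sample table; for example, the SSIM and PSNR means over the 11 listed samples:

```python
ssim = [0.7110, 0.6143, 0.7328, 0.6836, 0.6749, 0.7644,
        0.7750, 0.6369, 0.6452, 0.7290, 0.7109]
psnr = [24.58, 17.03, 26.68, 21.49, 26.36, 25.68,
        26.93, 19.69, 19.65, 21.83, 22.55]

avg_ssim = sum(ssim) / len(ssim)   # 0.6980
avg_psnr = sum(psnr) / len(psnr)   # ~22.95 dB
```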
- **Figure 4. Training Curves**
- Normalizing each channel independently to [-1, 1] stabilized training.
- Achieved average SSIM ≈ 0.70 and PSNR ≈ 22.95 dB on the RGB outputs.
- Failure modes: difficulty reconstructing fine texture in heavily vegetated zones.
- Python 3.8+
- PyTorch
- torchvision
- scikit-image (SSIM, PSNR)
- matplotlib
- Experiment with perceptual (VGG) loss for sharper textures
- Extend to NIR or SWIR target bands
- Integrate cloud-mask guided loss for cloudy regions
- Ensure your data is in `data/raw/`
- Install the requirements and follow Sections 4–6 above