This project implements a Pix2Pix model using Conditional GANs (cGAN) for image-to-image translation. The task was completed as part of my internship with Prodigy InfoTech. The dataset used is the CMP Facade dataset, where the model learns to translate building facade labels into realistic building images.
- Implement Pix2Pix using a Conditional GAN (cGAN).
- Perform paired image-to-image translation on the CMP Facades dataset.
- Visualize results: input image, ground truth, and predicted image.
- Python
- TensorFlow
- Google Colab
- Matplotlib
- CMP Facades Dataset
- Dataset: CMP Facades
- Format: Paired images (labels and facades)
- Structure:
  ```
  facades/
  ├── train/
  ├── test/
  └── val/
  ```
- ✅ Set up environment (TensorFlow, image utils)
- ✅ Downloaded and extracted the CMP Facades dataset
- ✅ Preprocessed paired images: resizing, normalization (see the preprocessing sketch below)
- ✅ Built Generator and Discriminator models (building-block sketch below)
- ✅ Trained the model using the cGAN loss and generator loss (loss sketch below)
- ✅ Visualized predictions (input vs. ground truth vs. generated; plotting sketch after the results table)
- ✅ Saved sample result to `results/epoch_20_sample.png`
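
Below is a minimal preprocessing sketch, assuming the standard CMP Facades layout in which each JPEG stores the photo and its label map side by side (the half order may need swapping depending on how the files were exported). The names `load_image_pair` and `preprocess` are illustrative, not necessarily the names used in the notebook:

```python
import tensorflow as tf

IMG_HEIGHT = 256
IMG_WIDTH = 256

def load_image_pair(image_file):
    # Each file contains the building photo and its label map side by side.
    image = tf.io.decode_jpeg(tf.io.read_file(image_file))
    w = tf.shape(image)[1] // 2
    real_image = tf.cast(image[:, :w, :], tf.float32)   # building photo (left half, assumed)
    input_image = tf.cast(image[:, w:, :], tf.float32)  # label map (right half, assumed)
    return input_image, real_image

def preprocess(input_image, real_image):
    # Resize to the model resolution and scale pixel values to [-1, 1].
    input_image = tf.image.resize(input_image, [IMG_HEIGHT, IMG_WIDTH])
    real_image = tf.image.resize(real_image, [IMG_HEIGHT, IMG_WIDTH])
    input_image = (input_image / 127.5) - 1.0
    real_image = (real_image / 127.5) - 1.0
    return input_image, real_image

train_dataset = (
    tf.data.Dataset.list_files("facades/train/*.jpg")
    .map(load_image_pair, num_parallel_calls=tf.data.AUTOTUNE)
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(400)
    .batch(1)
)
```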
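
The Generator (U-Net style) and Discriminator are typically assembled from repeated downsampling and upsampling blocks. Here is a sketch of those two building blocks; the filter counts and exact layer choices in the notebook may differ:

```python
import tensorflow as tf

def downsample(filters, size, apply_batchnorm=True):
    # Conv -> (BatchNorm) -> LeakyReLU, halving spatial resolution.
    init = tf.random_normal_initializer(0.0, 0.02)
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(filters, size, strides=2, padding="same",
                                     kernel_initializer=init, use_bias=False))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block

def upsample(filters, size, apply_dropout=False):
    # TransposedConv -> BatchNorm -> (Dropout) -> ReLU, doubling spatial resolution.
    init = tf.random_normal_initializer(0.0, 0.02)
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding="same",
                                              kernel_initializer=init, use_bias=False))
    block.add(tf.keras.layers.BatchNormalization())
    if apply_dropout:
        block.add(tf.keras.layers.Dropout(0.5))
    block.add(tf.keras.layers.ReLU())
    return block
```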
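
For the training objective, a sketch of the usual Pix2Pix losses is shown below, assuming the common formulation where the generator loss is the adversarial (cGAN) term plus an L1 term weighted by λ = 100 (the value used in the original Pix2Pix paper); the exact weighting in the notebook may differ:

```python
import tensorflow as tf

LAMBDA = 100  # assumed L1 weight, following the original Pix2Pix paper
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_generated_output, gen_output, target):
    # Adversarial term: the generator wants the discriminator to predict "real".
    gan_loss = bce(tf.ones_like(disc_generated_output), disc_generated_output)
    # L1 term: keeps the generated image close to the ground-truth facade.
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    total_loss = gan_loss + LAMBDA * l1_loss
    return total_loss, gan_loss, l1_loss

def discriminator_loss(disc_real_output, disc_generated_output):
    # Real pairs should be classified as real, generated pairs as fake.
    real_loss = bce(tf.ones_like(disc_real_output), disc_real_output)
    generated_loss = bce(tf.zeros_like(disc_generated_output), disc_generated_output)
    return real_loss + generated_loss
```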
| Input Image | Ground Truth | Predicted Image |
|---|---|---|
| ![]() | ![]() | ![]() |

(Note: all three images in a row are included in the single output image `results/epoch_20_sample.png`.)
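
A side-by-side figure like the one above can be produced with a small Matplotlib helper such as the sketch below; `show_predictions` is an illustrative name, and `training=True` mirrors the common Pix2Pix setup where dropout stays active at inference:

```python
import matplotlib.pyplot as plt

def show_predictions(generator, example_input, example_target, save_path=None):
    # Run the generator and plot input / ground truth / prediction in one row.
    prediction = generator(example_input, training=True)
    titles = ["Input Image", "Ground Truth", "Predicted Image"]
    images = [example_input[0], example_target[0], prediction[0]]

    plt.figure(figsize=(15, 5))
    for i, (title, img) in enumerate(zip(titles, images)):
        plt.subplot(1, 3, i + 1)
        plt.title(title)
        plt.imshow(img * 0.5 + 0.5)  # rescale from [-1, 1] back to [0, 1]
        plt.axis("off")
    if save_path:
        plt.savefig(save_path, bbox_inches="tight")  # e.g. results/epoch_20_sample.png
    plt.show()
```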
- Clone this repository:

  ```bash
  git clone https://github.com/your-username/pix2pix-facades.git
  cd pix2pix-facades
  ```

- Upload to Google Colab or run locally with a GPU.

- Install dependencies (if running locally):

  ```bash
  pip install tensorflow matplotlib
  ```

- Run `pix2pix_facades.ipynb` step by step.
```
pix2pix-facades/
├── facades/                  # Dataset folder
│   ├── train/
│   ├── test/
│   └── val/
├── results/
│   └── epoch_20_sample.png   # Output image
├── pix2pix_facades.ipynb     # Main notebook
└── README.md
```
This repository was created for the "Image-to-Image Translation with cGANs" task of my Generative AI Internship at Prodigy InfoTech.
Dharmit Shah
LinkedIn • GitHub
This project is open-source and free to use for educational purposes.
