ColorGANime Backend

A backend service for automatic colorization of grayscale manga panels using a Pix2Pix GAN. The service accepts an uploaded manga page, splits it into panel fragments, colorizes each fragment with the trained generator, and reconstructs the final colored page.

Link to the dataset: https://drive.google.com/drive/folders/1mY4-AalHR6pwcjebRrAq40HflbEjM2CR?usp=sharing

Features

  • Pix2Pix GAN: Converts grayscale manga panels to colored images using a custom generator and discriminator (a sketch of the training objective follows this list).
  • Fragmentation: Splits uploaded manga pages into individual panels/fragments for targeted colorization.
  • Colorization: Applies a trained GAN model to colorize each fragment.
  • Enhancement: Post-processes colored fragments for improved contrast, saturation, and sharpness.
  • Reconstruction: Reassembles colored fragments into a full manga page.
  • REST API: Flask-based API endpoint for uploading images and receiving colorized results.
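
The Pix2Pix objective referenced above combines a conditional adversarial loss with a weighted L1 reconstruction term. The actual models and hyperparameters are defined in generator.py and main.ipynb; the snippet below is only a generic sketch of that objective, and the names generator, discriminator, and LAMBDA_L1 are illustrative.

    import torch
    import torch.nn as nn

    adv_criterion = nn.BCEWithLogitsLoss()
    l1_criterion = nn.L1Loss()
    LAMBDA_L1 = 100  # weight from the original Pix2Pix paper; this repo's value may differ

    def generator_loss(discriminator, gray, fake_color, real_color):
        # Fool the conditional discriminator while staying close to ground truth in L1.
        pred_fake = discriminator(gray, fake_color)
        adv = adv_criterion(pred_fake, torch.ones_like(pred_fake))
        return adv + LAMBDA_L1 * l1_criterion(fake_color, real_color)

    def discriminator_loss(discriminator, gray, fake_color, real_color):
        # Real (gray, color) pairs should score 1, generated pairs should score 0.
        pred_real = discriminator(gray, real_color)
        pred_fake = discriminator(gray, fake_color.detach())
        real_term = adv_criterion(pred_real, torch.ones_like(pred_real))
        fake_term = adv_criterion(pred_fake, torch.zeros_like(pred_fake))
        return 0.5 * (real_term + fake_term)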

Project Structure

  • app.py: Flask server with /process endpoint for image upload and processing.
  • generator.py: Defines the GAN generator model and loads pretrained weights (a loading sketch follows this list).
  • helper.py: Contains functions for fragmenting images, colorizing fragments, enhancing, reconstructing, and cleaning up.
  • main.ipynb: Jupyter notebook for model development, training, and experimentation.
  • model/: Stores trained model checkpoints.
  • uploads/: Temporary storage for uploaded and processed images.
  • fragments/: Temporary storage for image fragments.
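
For orientation, this is roughly how generator.py and the model/ directory fit together at inference time. The class name Generator and the checkpoint layout are assumptions; adapt them to the actual definitions in generator.py.

    import torch
    from generator import Generator  # hypothetical class name; see generator.py

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    gen = Generator().to(device)

    state = torch.load("model/checkpoint24.pth", map_location=device)
    # Training checkpoints sometimes bundle several state dicts; unwrap if needed.
    if isinstance(state, dict) and "generator" in state:
        state = state["generator"]
    gen.load_state_dict(state)
    gen.eval()  # inference mode for the colorization endpoint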

Processing Pipeline

  1. Image Upload: User uploads a manga page via the /process API endpoint.
  2. Fragmentation: The image is split into individual panels/fragments.
  3. Colorization: Each fragment is colorized using the GAN generator.
  4. Enhancement: Colorized fragments are enhanced for better visual quality.
  5. Reconstruction: Fragments are reassembled into a single colored image.
  6. Cleanup: Temporary files are deleted after processing.
  7. Result Delivery: The final colorized image is returned to the user.
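
A minimal sketch of how app.py might wire these steps together; the helper names imported from helper.py and the multipart field name "image" are assumptions, not the repository's exact API.

    import os
    from flask import Flask, request, send_file
    from helper import (  # hypothetical names for the functions in helper.py
        fragment_image, colorize_fragment, enhance_fragment, reconstruct_page, cleanup,
    )

    app = Flask(__name__)
    UPLOAD_DIR = "uploads"

    @app.route("/process", methods=["POST"])
    def process():
        # 1. Save the uploaded manga page.
        page = request.files["image"]  # field name is an assumption
        page_path = os.path.join(UPLOAD_DIR, page.filename)
        page.save(page_path)

        # 2-5. Fragment, colorize, enhance, and reconstruct.
        fragments = fragment_image(page_path)  # fragments are written to fragments/
        colored = [enhance_fragment(colorize_fragment(f)) for f in fragments]
        result_path = reconstruct_page(colored, page_path)

        # 6-7. Remove temporary files and return the colorized page.
        cleanup(fragments)
        return send_file(result_path, mimetype="image/png")

    if __name__ == "__main__":
        app.run()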

Quick Start

  1. Install dependencies:
    pip install -r requirements.txt
  2. Place your trained model checkpoint in the model/ directory (e.g., checkpoint24.pth).
  3. Run the Flask server:
    python app.py
  4. Use the /process endpoint to upload grayscale manga images and receive colorized results.
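
As a client-side example, the snippet below posts a grayscale page to the running server with the requests library. The field name "image", the file names, and the default Flask address are assumptions; match them to what app.py actually expects.

    import requests

    with open("page_gray.png", "rb") as f:  # illustrative input file
        resp = requests.post(
            "http://127.0.0.1:5000/process",  # default Flask host/port
            files={"image": f},               # field name is an assumption
        )

    resp.raise_for_status()
    with open("page_colored.png", "wb") as out:
        out.write(resp.content)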

Notes

  • The GAN model is trained on 128x128 grayscale manga panels.
  • The project includes various experiments and results in main.ipynb.
  • For best results, use high-quality, well-segmented manga pages.
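
Because the model was trained on 128x128 grayscale panels, fragments generally have to be resized and normalized to that resolution before inference. A minimal preprocessing sketch follows; the normalization range and file path are assumptions, and main.ipynb defines what is actually used.

    from PIL import Image
    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=1),  # ensure a single channel
        transforms.Resize((128, 128)),                # match the training resolution
        transforms.ToTensor(),                        # scale to [0, 1]
        transforms.Normalize(mean=[0.5], std=[0.5]),  # map to [-1, 1]; assumed range
    ])

    panel = Image.open("fragments/panel_0.png")  # illustrative fragment path
    x = preprocess(panel).unsqueeze(0)           # batch of one: (1, 1, 128, 128)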

Author: Akshat Sharma

Result: sample colorized output image.
