This project implements a Generative Adversarial Network (GAN) to generate anime face images using the Anime Face Dataset. The model is trained with PyTorch and produces progressively better-quality images as training proceeds through the epochs.
- Generated sample images are saved in the `generated` folder with filenames like `generated-images-0000.png` (a saving sketch follows this list).
- Example generated image:
- You can view the training progression as a video saved as `gans_training.avi`.
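A minimal sketch of how each epoch's samples could be written out in the filename format described above (`generated/generated-images-0000.png`). It is not the project's exact code: the fixed latent batch, the denormalization constants, and the `save_samples` helper name are assumptions.

```python
# Hedged sketch: save a grid of generator samples per epoch as
# generated/generated-images-XXXX.png using torchvision.
import os
import torch
from torchvision.utils import save_image

os.makedirs("generated", exist_ok=True)
fixed_latent = torch.randn(64, 128, 1, 1)  # 64 samples, latent size 128 (assumed fixed batch)

def save_samples(epoch, generator, stats=(0.5, 0.5)):
    # Run the generator on the fixed latent batch without tracking gradients.
    device = next(generator.parameters()).device
    with torch.no_grad():
        fake_images = generator(fixed_latent.to(device))
    # Undo the Tanh output range [-1, 1] back to [0, 1] for saving (stats are assumed).
    fake_images = fake_images * stats[1] + stats[0]
    fname = f"generated-images-{epoch:04d}.png"
    save_image(fake_images.cpu(), os.path.join("generated", fname), nrow=8)
```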
- Run the notebook or Python script.
- The dataset will be downloaded automatically.
- The model will start training, displaying sample generated images after each epoch.
- Generated images will be saved in the `generated` directory.
- After training, a video `gans_training.avi` will be created to visualize the image progression (see the OpenCV sketch below).
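If you want to rebuild the video manually from the saved samples, the following is a minimal sketch using OpenCV rather than the project's own script; the codec and frame rate are assumptions.

```python
# Hedged sketch: stitch the saved epoch samples into gans_training.avi with OpenCV.
# Assumes the images live in ./generated and follow the generated-images-XXXX.png pattern.
import os
import cv2

files = sorted(
    os.path.join("generated", f)
    for f in os.listdir("generated")
    if f.startswith("generated-images") and f.endswith(".png")
)

height, width, _ = cv2.imread(files[0]).shape
writer = cv2.VideoWriter(
    "gans_training.avi",
    cv2.VideoWriter_fourcc(*"XVID"),  # codec choice is an assumption
    1,                                # one frame per epoch sample (assumed)
    (width, height),
)
for path in files:
    writer.write(cv2.imread(path))
writer.release()
```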
- A convolutional neural network that takes 64x64 RGB images as input.
- Outputs a probability of the image being real or fake.
- Uses Conv2d layers, BatchNorm, LeakyReLU activations, and Sigmoid output.
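A minimal sketch of such a discriminator (64x64 RGB input, Conv2d + BatchNorm + LeakyReLU, Sigmoid output). The layer widths are assumptions, not necessarily the project's exact configuration.

```python
# Hedged sketch of a DCGAN-style discriminator matching the description above.
import torch.nn as nn

discriminator = nn.Sequential(
    # in: 3 x 64 x 64
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.2, inplace=True),
    # 64 x 32 x 32
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(128),
    nn.LeakyReLU(0.2, inplace=True),
    # 128 x 16 x 16
    nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(256),
    nn.LeakyReLU(0.2, inplace=True),
    # 256 x 8 x 8
    nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(512),
    nn.LeakyReLU(0.2, inplace=True),
    # 512 x 4 x 4
    nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=0, bias=False),
    # 1 x 1 x 1 -> probability that the input image is real
    nn.Flatten(),
    nn.Sigmoid(),
)
```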
- A transposed convolutional neural network that takes random noise vectors as input.
- Generates 64x64 RGB images.
- Uses ConvTranspose2d layers, BatchNorm, ReLU activations, and Tanh output.
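A matching generator sketch (latent vector to 64x64 RGB image, ConvTranspose2d + BatchNorm + ReLU, Tanh output). The latent size of 128 comes from the training details below; the channel widths are assumptions.

```python
# Hedged sketch of a DCGAN-style generator matching the description above.
import torch.nn as nn

latent_size = 128

generator = nn.Sequential(
    # in: latent_size x 1 x 1
    nn.ConvTranspose2d(latent_size, 512, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(512),
    nn.ReLU(True),
    # 512 x 4 x 4
    nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(256),
    nn.ReLU(True),
    # 256 x 8 x 8
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(128),
    nn.ReLU(True),
    # 128 x 16 x 16
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(True),
    # 64 x 32 x 32
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1, bias=False),
    nn.Tanh(),
    # out: 3 x 64 x 64, values in [-1, 1]
)
```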
- Loss function: Binary Cross Entropy (BCE) for both Generator and Discriminator.
- Optimizer: Adam with learning rate 0.0002 and betas (0.5, 0.999).
- Batch size: 128
- Image size: 64x64
- Latent vector size: 128
- Number of epochs: Configurable (default 5)
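A minimal sketch of one training step using the hyperparameters listed above (BCE loss, Adam with lr=0.0002 and betas (0.5, 0.999), latent size 128). It assumes the `generator` and `discriminator` sketches shown earlier; variable names and loop structure are assumptions, and the project's actual loop may differ in detail.

```python
# Hedged sketch: one discriminator/generator update for a single batch.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
generator.to(device)
discriminator.to(device)

criterion = nn.BCELoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999))

def train_step(real_images, latent_size=128):
    batch_size = real_images.size(0)
    real_images = real_images.to(device)

    # Train the discriminator: real images -> 1, generated images -> 0.
    opt_d.zero_grad()
    real_preds = discriminator(real_images)
    real_loss = criterion(real_preds, torch.ones(batch_size, 1, device=device))

    latent = torch.randn(batch_size, latent_size, 1, 1, device=device)
    fake_images = generator(latent)
    fake_preds = discriminator(fake_images.detach())
    fake_loss = criterion(fake_preds, torch.zeros(batch_size, 1, device=device))

    d_loss = real_loss + fake_loss
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 for fakes.
    opt_g.zero_grad()
    preds = discriminator(fake_images)
    g_loss = criterion(preds, torch.ones(batch_size, 1, device=device))
    g_loss.backward()
    opt_g.step()

    return d_loss.item(), g_loss.item()
```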
This project is open source and available under the MIT License.
- Dataset from splcher/animefacedataset
- PyTorch official DCGAN tutorial inspired the architecture and training approach.
Feel free to open issues or submit pull requests for improvements!
