
Commit cf737ab: Added References to CodeExplanation
1 parent: 02549cb

2 files changed: +11 −65 lines

2 files changed

+11
-65
lines changed

CONTRIBUTING.md

Lines changed: 7 additions & 55 deletions
@@ -5,63 +5,15 @@ Thank you for considering contributing to [Generative-AI-Based-Spatio-Temporal-F
 ## Guidelines:

 ### Pull Requests
-- 🍴 Fork the repository.
-- 📌 Include descriptive commit messages.
+- Fork the repository.
+- Include descriptive commit messages.
+- Include comments in code explaining why certain pieces of code were implemented.

-### Code Styleguide
-- 💬 Include comments explaining why certain pieces of code were implemented.
-- ✅ Write tests (if applicable) for the new code you're submitting.
+## Resource Links
+- [Code Explanation](CodeExplanation.md)
+- [Issue Tracker](https://github.com/iSiddharth20/Spatio-Temporal-Fusion-in-Remote-Sensing/issues)
+- [Dataset Access](https://www.kaggle.com/datasets/isiddharth/spatio-temporal-data-of-moon-rise-in-raw-and-tif)

 ## 🙌 Acknowledgments
 Thanks to all the contributors who have helped this project grow!

-# Required Codebase:
-
-### [LSTM.py](https://github.com/iSiddharth20/Generative-AI-Based-Spatio-Temporal-Fusion/blob/main/Code/LSTM.py)
-- Define a PyTorch LSTM model class for frame interpolation, generating an entire greyscale image for a given sequence. The model takes a sequence (sequence length = `len_seq`) of greyscale images (400x600) as input and predicts the following, according to user preference:
-  - The next image in the sequence.
-  - `n` images interpolated between existing images of the sequence.
-- Write a function using PyTorch to perform hyperparameter tuning for the LSTM model, testing various learning rates and numbers of hidden units, and record the performance of each parameter combination.
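The removed LSTM.py spec above could be sketched roughly as follows. `FrameLSTM`, `tune_lstm`, and every layer size here are hypothetical stand-ins for illustration, not the project's actual code:

```python
import torch
import torch.nn as nn

class FrameLSTM(nn.Module):
    """Hypothetical LSTM frame predictor: flattens each greyscale frame,
    runs the sequence through an LSTM, and decodes the last hidden state
    back into a full next frame."""

    def __init__(self, height=400, width=600, hidden_units=256):
        super().__init__()
        self.height, self.width = height, width
        self.lstm = nn.LSTM(height * width, hidden_units, batch_first=True)
        self.head = nn.Linear(hidden_units, height * width)

    def forward(self, seq):
        # seq: (batch, len_seq, height, width), pixel values in [0, 1]
        batch, len_seq = seq.shape[0], seq.shape[1]
        out, _ = self.lstm(seq.reshape(batch, len_seq, -1))
        frame = torch.sigmoid(self.head(out[:, -1]))  # last step -> next frame
        return frame.reshape(batch, self.height, self.width)

def tune_lstm(seqs, targets, lrs=(1e-2, 1e-3), hidden_sizes=(16, 32), epochs=3):
    """Tiny grid search over learning rate and hidden units; returns the
    final training MSE for each parameter combination."""
    results = {}
    for lr in lrs:
        for h in hidden_sizes:
            model = FrameLSTM(seqs.shape[2], seqs.shape[3], hidden_units=h)
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(model(seqs), targets)
                loss.backward()
                opt.step()
            results[(lr, h)] = float(loss.detach())
    return results
```

Interpolating `n` in-between frames (the second bullet) would need a different decoding head or repeated prediction; the sketch covers only the next-frame case.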
-
-### [AutoEncoder.py](https://github.com/iSiddharth20/Generative-AI-Based-Spatio-Temporal-Fusion/blob/main/Code/AutoEncoder.py)
-- Define a PyTorch AutoEncoder model class with:
-  - An encoder that maps 400x600 greyscale images to known RGB images.
-  - A decoder that reconstructs the RGB images from the greyscale images.
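A minimal convolutional sketch of the AutoEncoder described above, taking a 1-channel greyscale image to a 3-channel RGB reconstruction. The layer widths and kernel sizes are assumptions, not the project's architecture:

```python
import torch
import torch.nn as nn

class ColorAutoEncoder(nn.Module):
    """Illustrative autoencoder: the encoder downsamples the greyscale
    input to a latent feature map, the decoder upsamples it back to an
    RGB image of the same spatial size (H and W must be divisible by 4)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, grey):
        # grey: (batch, 1, H, W) -> (batch, 3, H, W), values in [0, 1]
        return self.decoder(self.encoder(grey))
```

With the stride-2 convolutions chosen here, 400x600 inputs pass through 200x300 and 100x150 feature maps before being upsampled back to 400x600.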
-### [LossFunction.py](https://github.com/iSiddharth20/Generative-AI-Based-Spatio-Temporal-Fusion/blob/main/Code/LossFunction.py)
-- Write a PyTorch loss function named `loss_MEP` that combines Mean Squared Error with a Maximum Entropy regularization term for an AutoEncoder.
-  - The composite loss function (`loss_MEP`) is given by:
-    `L = (1/2) * Σ(i=1 to N) (x_i - x̂_i)^2 - λmep * H(q(z|x))`
-    where:
-    - L represents the composite loss function.
-    - N is the number of dimensions in the latent space.
-    - x_i is the input data of the AutoEncoder (greyscale image).
-    - x̂_i is the output data of the AutoEncoder (RGB image).
-    - λmep is the Maximum Entropy regularization parameter.
-    - H(q(z|x)) represents the entropy of the variational posterior distribution q(z|x).
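One way the `loss_MEP` formula could be realised, under the added assumption that q(z|x) is a diagonal Gaussian parameterised by a log-variance vector, whose entropy has the closed form 0.5·Σ(1 + log 2π + log σ²). Only the function name comes from the spec; the rest is a sketch:

```python
import math
import torch

def loss_MEP(x_hat, target, logvar, lambda_mep=0.01):
    """Sketch of the MSE + maximum-entropy composite loss.
    logvar: per-dimension log-variance of an assumed diagonal-Gaussian
    posterior q(z|x); its entropy is 0.5 * sum(1 + log(2*pi) + logvar).
    Subtracting the entropy term rewards higher-entropy posteriors."""
    mse = 0.5 * torch.sum((target - x_hat) ** 2)
    entropy = 0.5 * torch.sum(1.0 + math.log(2 * math.pi) + logvar)
    return mse - lambda_mep * entropy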
42-
- Write a PyTorch loss function named `loss_MLP` that combines Mean Squared Error with a Maximum Likelihood regularization term for an AutoEncoder.
43-
- The Composite loss function (loss_MLP) is given by:
44-
` L = (1/2) * Σ(i=1 to N) (x_i - x̂_i)^2 + λmlp) `
45-
where:
46-
- L represents the Composite Loss Function.
47-
- N is the number of dimensions in the latent space.
48-
- x_i is the input data of the AutoEncoder (greyscale image).
49-
- x̂_i is the output data of the AutoEncoder (RGB image).
50-
- λmlp is a Maximum Likelihood regularization parameter.
51-
52-
### [main.py](https://github.com/iSiddharth20/Generative-AI-Based-Spatio-Temporal-Fusion/blob/main/Code/main.py)
53-
- Write a Python function using PyTorch to load a dataset of grayscale TIF images from directory `../Dataset/Grey` and RGB TIF images from directory `../Dataset/RGB`, resize them to 400x600 pixels, and normalize the pixel values (0-255).
54-
- Split the dataset into training, testing, and validation sets using `sklearn.train_test_split` with a ratio of 60:20:20 and convert to PyTorch tensors using batch size = `batch_size`.
55-
- Export the Training, Testing, and Validation Sets to the directory `../Dataset/PyTorchTensors` using `torch.save`.
56-
- Import the LSTM model class from lstm.py.
57-
- Import the AutoEncoder model class from autoencoder.py.
58-
- Outline a training loop (EPOCHS = `num_epochs`) in PyTorch that trains an LSTM and an AutoEncoder model using the Adam optimizer, and include calculating and printing the loss every epoch.
59-
- Train the model named `model_MEP` using `loss_MEP` as the Loss Function.
60-
- Train the model named `model_MLP` using `loss_MLP` as the Loss Function.
61-
- Export the Trained Models to the directory `../TrainedModel` if the new model has lower loss than previous one in thae training loop.
62-
63-
### [Results.py](https://github.com/iSiddharth20/Generative-AI-Based-Spatio-Temporal-Fusion/blob/main/Code/Results.py)
64-
- Import the Validation Sets from the directory `../Dataset/PyTorchTensors`.
65-
- Import the Trained Models.
66-
- Implement a PyTorch validation loop that computes the Mean Squared Error (MSE) and Structural Similarity Index Measure (SSIM) as validation metrics on Validation Sets of greyscale and RGB image pairs.
67-
- Create a Python function using PyTorch to compare the performance of two models (`model_MEP` and `model_MLP`) trained with different regularization principles: Maximum Likelihood and Maximum Entropy.

README.md

Lines changed: 4 additions & 10 deletions
Original file line numberDiff line numberDiff line change
@@ -1,5 +1,5 @@
11
# GenAI-Powered Spatio-Temporal Fusion for Video Super-Resolution
2-
![Status](https://img.shields.io/badge/status-ongoing-yellow.svg)
2+
![GitHub Latest Release)](https://img.shields.io/github/v/release/iSiddharth20/Generative-AI-Based-Spatio-Temporal-Fusion?logo=github)
33
![License](https://img.shields.io/github/license/iSiddharth20/Spatio-Temporal-Fusion-in-Remote-Sensing)
44

55
#### Based on PyTorch, Install [Here](https://pytorch.org/get-started/locally/)
@@ -23,18 +23,12 @@ Here's a visual representation of the data transformation:
2323

2424
## Resource Links
2525

26-
- 🐞 [Issue Tracker](https://github.com/iSiddharth20/Spatio-Temporal-Fusion-in-Remote-Sensing/issues) - Check out open issues and contribute by addressing them.
27-
- 🌐 [Dataset Access](https://www.kaggle.com/datasets/isiddharth/spatio-temporal-data-of-moon-rise-in-raw-and-tif) - The dataset is now available on Kaggle. Dive into real-world data!
28-
- 🔗 [Concept Presentation](./Documentation/Concept_Presentation.pptx) - Gain insights into the concept with the Powerpoint presentation.
29-
- 📊 [System Overview](./Documentation/System_Diagram.png) - See the system diagram for a high-level understanding of the project.
30-
31-
## Concept Overview
32-
![System Diagram](./Documentation/System_Diagram.png)
26+
- [Code Explanation](CodeExplanation.md)
27+
- [Issue Tracker](https://github.com/iSiddharth20/Spatio-Temporal-Fusion-in-Remote-Sensing/issues)
28+
- [Dataset Access](https://www.kaggle.com/datasets/isiddharth/spatio-temporal-data-of-moon-rise-in-raw-and-tif)
3329

3430
## Contributions Welcome!
3531
Your interest in contributing to the project is highly respected. Aiming for collaborative excellence, your insights, code improvements, and innovative ideas are highly appreciated. Make sure to check [Contributing Guidelines](CONTRIBUTING.md) for more information on how you can become an integral part of this project.
3632

3733
## Acknowledgements
3834
A heartfelt thank you to all contributors and supporters who are on this journey to break new ground in video super-resolution technology.
39-
40-
![Contributors](https://img.shields.io/github/contributors/iSiddharth20/Spatio-Temporal-Fusion-in-Remote-Sensing)

0 commit comments

Comments
 (0)