This project performs sentiment analysis using four deep learning models — BERT, LSTM, GRU, and a simple RNN — and compares them on classification performance, computational efficiency, and implementation complexity. Built with PyTorch and Hugging Face Transformers.
```
NLP-Sentiment-Model-Comparison/
├── NLPComparativeAnalysis.ipynb    # Main notebook with training and evaluation
├── NLP Comparative Analysis.pdf    # Project methodology and insights
├── requirements.txt                # Python dependencies
├── README.md                       # Project overview
└── .gitignore                      # Files to exclude from Git tracking
```
| Model | Summary |
|---|---|
| BERT | Pre-trained transformer from Hugging Face (fine-tuned) |
| LSTM | Long Short-Term Memory network for sequential modeling |
| GRU | Gated Recurrent Unit for efficient RNN-based modeling |
| RNN | Baseline simple Recurrent Neural Network |
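The three recurrent baselines share the same embed → recurrent layer → linear head shape. As a rough PyTorch sketch (not the notebook's exact code — the class name `GRUClassifier` and all hyperparameters here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    """Minimal GRU-based sentiment classifier (illustrative sketch)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, hidden = self.gru(embedded)         # hidden: (1, batch, hidden_dim)
        return self.fc(hidden.squeeze(0))      # logits: (batch, num_classes)
```

Swapping `nn.GRU` for `nn.RNN` gives the simple-RNN baseline; `nn.LSTM` works the same way except its recurrent layer returns a `(hidden, cell)` tuple as the second output. The fine-tuned BERT model instead comes pre-trained from Hugging Face, so only its classification head and weights are adapted during training.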
```
git clone https://github.com/shahsanjanav/NLP-Sentiment-Model-Comparison.git
cd NLP-Sentiment-Model-Comparison
pip install -r requirements.txt
jupyter notebook NLPComparativeAnalysis.ipynb
```

📊 Evaluation Metrics
- ✅ Accuracy
- ✅ Precision, Recall, F1-Score
- ✅ Confusion Matrix
- ✅ ROC-AUC
- ✅ Training Time & Memory Usage
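The classification metrics above map directly onto scikit-learn helpers. A minimal sketch — the labels and scores below are dummy placeholders, not results from this project:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

y_true = [0, 1, 1, 0, 1, 0]                # ground-truth sentiment labels
y_pred = [0, 1, 0, 0, 1, 1]                # a model's hard predictions
y_score = [0.2, 0.9, 0.4, 0.1, 0.8, 0.7]   # positive-class probabilities

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
cm = confusion_matrix(y_true, y_pred)      # 2x2 matrix for binary sentiment
auc = roc_auc_score(y_true, y_score)       # needs probabilities, not labels
```

Training time and memory usage are not scikit-learn metrics; they are typically measured separately (e.g. `time.perf_counter()` around the training loop and `torch.cuda.max_memory_allocated()` on GPU).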
🛠 Built With
- Python 3.10+
- PyTorch
- Hugging Face Transformers
- scikit-learn
- Jupyter Notebook
- matplotlib, seaborn, numpy, pandas
MIT License © 2025 Sanjana Shah
Sanjana Shah
✨ Machine Learning & Generative AI Enthusiast
📫 Connect on LinkedIn
GitHub: @shahsanjanav
⭐ If you like this project, consider starring it on GitHub!