
# 🧠 NLP Sentiment Analysis: Comparative Study using BERT, LSTM, GRU, and RNN

This project performs sentiment analysis using four deep learning models (BERT, LSTM, GRU, and a simple RNN) and compares them on classification performance, computational efficiency, and implementation complexity. Built with PyTorch and Hugging Face Transformers.


## 📁 Project Structure

```
NLP-Sentiment-Model-Comparison/
├── NLPComparativeAnalysis.ipynb    # Main notebook with training and evaluation
├── NLP Comparative Analysis.pdf    # Project methodology and insights
├── requirements.txt                # Python dependencies
├── README.md                       # Project overview
└── .gitignore                      # Files to exclude from Git tracking
```

## 🧠 Models Compared

| Model | Summary |
|-------|---------|
| BERT  | Pre-trained transformer from Hugging Face (fine-tuned) |
| LSTM  | Long Short-Term Memory network for sequential modeling |
| GRU   | Gated Recurrent Unit for efficient RNN-based modeling |
| RNN   | Baseline simple Recurrent Neural Network |
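The three recurrent baselines differ only in the cell they use, so they can share a single classifier skeleton. The sketch below is illustrative, not the notebook's actual code; the class name, dimensions, and two-class output head are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentSentimentClassifier(nn.Module):
    """Binary sentiment classifier; `cell` selects RNN, LSTM, or GRU."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, cell="lstm"):
        super().__init__()
        rnn_cls = {"rnn": nn.RNN, "lstm": nn.LSTM, "gru": nn.GRU}[cell]
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = rnn_cls(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)  # two classes: negative / positive

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq, embed_dim)
        output, _ = self.rnn(embedded)         # (batch, seq, hidden_dim)
        return self.fc(output[:, -1, :])       # logits from the last timestep

# All three recurrent baselines come from the same class
models = {name: RecurrentSentimentClassifier(vocab_size=10_000, cell=name)
          for name in ("rnn", "lstm", "gru")}
logits = models["lstm"](torch.randint(0, 10_000, (4, 20)))  # batch of 4, length 20
print(logits.shape)  # torch.Size([4, 2])
```

Sharing one skeleton keeps the comparison fair: only the recurrent cell changes between runs, so differences in accuracy and speed can be attributed to the cell itself.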

## 🚀 Getting Started

### 1. Clone the Repository

```bash
git clone https://github.com/shahsanjanav/NLP-Sentiment-Model-Comparison.git
cd NLP-Sentiment-Model-Comparison
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

### 3. Run the Notebook

```bash
jupyter notebook NLPComparativeAnalysis.ipynb
```

## 📊 Evaluation Metrics

- ✅ Accuracy
- ✅ Precision, Recall, F1-Score
- ✅ Confusion Matrix
- ✅ ROC-AUC
- ✅ Training Time & Memory Usage
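These classification metrics can all be computed with scikit-learn. A minimal sketch, using made-up labels and scores rather than the notebook's real model outputs:

```python
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             confusion_matrix, roc_auc_score)

# Illustrative values; in the notebook these come from each trained model
y_true  = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]   # hard predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6, 0.3]  # positive-class probabilities

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
cm = confusion_matrix(y_true, y_pred)   # rows: true class, cols: predicted class
auc = roc_auc_score(y_true, y_score)    # ROC-AUC needs scores, not hard labels

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f} auc={auc:.4f}")
```

Note that ROC-AUC is computed from the predicted probabilities, while the other metrics use the thresholded predictions; training time and memory usage are measured separately during training.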


## 🛠 Built With

- Python 3.10+
- PyTorch
- Hugging Face Transformers
- scikit-learn
- Jupyter Notebook
- matplotlib, seaborn, numpy, pandas

## 📄 License

MIT License © 2025 Sanjana Shah


## 👤 Author

**Sanjana Shah**
✨ Machine Learning & Generative AI Enthusiast
📫 Connect on LinkedIn · GitHub: @shahsanjanav


⭐ If you like this project, consider starring it on GitHub!