Fine-Tuning LLMs

This repository contains fine-tuning experiments with multiple Transformer- and LLM-based models for suicide-ideation and mental-health text classification, using public Kaggle datasets.

All models are trained with parameter-efficient fine-tuning techniques (e.g., LoRA) to reduce compute and memory usage.
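To make the LoRA idea concrete, here is a minimal NumPy sketch of the core trick (shapes and hyperparameters are illustrative, not taken from this repo): the pretrained weight matrix W stays frozen, and only a low-rank update B @ A is trained.

```python
import numpy as np

# Illustrative shapes: a 768x768 layer (BERT-base hidden size) with LoRA rank 8.
rng = np.random.default_rng(0)
d_out, d_in, r = 768, 768, 8   # r is the LoRA rank, r << min(d_out, d_in)
alpha = 16                     # LoRA scaling factor

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Adapted forward pass: frozen base output plus scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer reproduces the base layer,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

# Only A and B are trained -- a small fraction of W's parameter count.
full = W.size
lora = A.size + B.size
print(f"trainable params: {lora} / {full} ({lora / full:.2%})")
# → trainable params: 12288 / 589824 (2.08%)
```

This is why LoRA cuts memory: here only about 2% of the layer's parameters need gradients and optimizer state.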


Models Covered

  • BERT
  • RoBERTa
  • DeBERTa
  • T5
  • BART
  • GPT-2
  • LLaMA-2
  • Mistral-7B
  • Phi-2
  • MentalLLaMA
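The list above mixes encoder, seq2seq, and decoder-only models, which means the same classification task has to be framed differently per family. A small sketch of those framings (label names and prompt format are hypothetical, not taken from the repo's datasets):

```python
# Hypothetical binary label set for the Kaggle suicide-ideation task.
LABELS = ["non-suicidal", "suicide"]

def encoder_target(label: str) -> int:
    # BERT / RoBERTa / DeBERTa: a classification head predicts an
    # integer label id, so the training target is the label's index.
    return LABELS.index(label)

def seq2seq_target(label: str) -> str:
    # T5 / BART: the model generates the label as text,
    # so the training target is the label string itself.
    return label

def decoder_prompt(text: str, label: str) -> str:
    # GPT-2 / LLaMA-2 / Mistral-7B / Phi-2 / MentalLLaMA: an
    # instruction-style prompt with the label as the completion
    # (this exact format is an assumption for illustration).
    return f"Classify the post: {text}\nLabel: {label}"

print(encoder_target("suicide"))                    # → 1
print(seq2seq_target("non-suicidal"))               # → non-suicidal
print(decoder_prompt("I feel fine", "non-suicidal"))
```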

Techniques Used

  • Transfer learning with pretrained NLP models
  • Parameter-efficient fine-tuning (LoRA)
  • Hugging Face Transformers & Trainer APIs
  • Training on public Kaggle mental health datasets
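A configuration sketch of how these pieces typically fit together with the Hugging Face `peft` and `transformers` libraries (hyperparameters, target modules, and the base checkpoint are illustrative choices, not this repo's exact settings):

```python
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; the repo covers several others.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,          # sequence classification
    r=8,                                 # adapter rank
    lora_alpha=16,                       # scaling factor
    lora_dropout=0.1,
    target_modules=["query", "value"],   # attach adapters to attention
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()       # only adapters (and head) train

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-4,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```

The dataset arguments are left elided since they depend on which Kaggle dataset is tokenized and split.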