From f99f48eafe9a70f604fe7017b2245f56d965d0b3 Mon Sep 17 00:00:00 2001
From: modleao
Date: Wed, 7 May 2025 21:16:24 +0300
Subject: [PATCH] Create Hugging.readme

---
 Hugging.readme | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)
 create mode 100644 Hugging.readme

diff --git a/Hugging.readme b/Hugging.readme
new file mode 100644
index 0000000..8e9903b
--- /dev/null
+++ b/Hugging.readme
@@ -0,0 +1,26 @@
+# Toxic Content Detector
+
+This is a simple web app that detects toxic or offensive content in English text. It uses the `unitary/toxic-bert` model from Hugging Face and is built with the Transformers and Gradio libraries.
+
+## 🔍 Features
+
+- Detects multiple types of toxicity:
+  - Toxic
+  - Insult
+  - Obscene
+  - Threat
+  - Identity hate
+  - Severe toxicity
+- Easy-to-use web interface
+- Real-time feedback
+
+## 🧠 Model Used
+
+- [`unitary/toxic-bert`](https://huggingface.co/unitary/toxic-bert): A fine-tuned BERT model for multi-label toxicity classification.
+
+## 💻 Installation
+
+Make sure you have Python installed, then install the required libraries:
+
+```bash
+pip install transformers gradio torch
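
The patch above only shows the install step, so the following is a minimal sketch of how the described app could wire `unitary/toxic-bert` into Gradio. The helper name `to_label_dict` and the overall wiring are illustrative assumptions, not part of the committed README.

```python
# Sketch of the app the README describes: unitary/toxic-bert behind a
# Gradio web UI. Requires: pip install transformers gradio torch.

def to_label_dict(scores):
    # The transformers text-classification pipeline with top_k=None returns a
    # list of {"label": ..., "score": ...} dicts, one per toxicity class;
    # Gradio's Label output expects a {label: score} mapping.
    return {item["label"]: item["score"] for item in scores}

if __name__ == "__main__":
    # Heavy imports kept here so the helper above stays importable without
    # downloading the model.
    from transformers import pipeline
    import gradio as gr

    classifier = pipeline(
        "text-classification", model="unitary/toxic-bert", top_k=None
    )

    def classify(text):
        # classifier(text) returns one list of label/score dicts per input.
        return to_label_dict(classifier(text)[0])

    gr.Interface(
        fn=classify, inputs="text", outputs="label",
        title="Toxic Content Detector",
    ).launch()
```

With `top_k=None` the pipeline scores every label independently, which matches the multi-label behaviour (toxic, insult, obscene, threat, identity hate, severe toxicity) listed in the README's Features section.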