docs: restructure README with visual design and separate documentation (#13)
Transform README into visual-first design with logo, emojis, and performance benchmarks while moving detailed content into dedicated documentation files.
- Add centered logo and visual styling to README header
- Include performance comparison table (nsfw-detector-mini vs Azure AI vs Falconsai)
- Reduce README from 221 to 120 lines by moving content to docs/
- Create comprehensive documentation structure:
  - docs/INSTALLATION.md: Detailed installation options (pip, uv, source)
  - docs/CLI.md: Complete CLI usage guide with examples
  - docs/API.md: Python API reference and advanced usage
  - docs/FAQ.md: Common questions and answers
  - docs/TROUBLESHOOTING.md: Issue resolution guide
- Add navigation links from README to separate documentation files
- Update .gitignore to exclude .DS_Store files
Run open‑source content moderation models (NSFW, toxicity, nudity, etc.) with one line — from Python or the CLI. Works with Hugging Face models or local folders. Outputs are normalized and app‑ready.

</div>
## ✨ Key Highlights

- One simple API and CLI
- Use any compatible Transformers model from the Hub or disk
- Normalized JSON output you can plug into your app
- Optional auto‑install of dependencies for a smooth first run

Note: Today we ship a Transformers-based integration for image/text classification.
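As an illustration of consuming the normalized JSON output mentioned above (the field names below are assumptions for the sketch, not the library's actual schema):

```python
import json

# Hypothetical normalized result; "model", "predictions", "label", and
# "score" are illustrative field names, not a documented schema.
raw = json.dumps({
    "model": "Falconsai/nsfw_image_detection",
    "predictions": [
        {"label": "nsfw", "score": 0.97},
        {"label": "safe", "score": 0.03},
    ],
})

result = json.loads(raw)
# Pick the top-scoring class and apply an app-side decision threshold.
top = max(result["predictions"], key=lambda p: p["score"])
flagged = top["label"] == "nsfw" and top["score"] >= 0.9
```

Because the output is plain JSON, this kind of thresholding logic lives entirely in your app, independent of which model produced the scores.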
## Who is this for?

Developers, researchers, and academics who want to quickly evaluate or deploy moderation models without wiring up different runtimes or dealing with model‑specific output formats.

## 🚀 Performance

NSFW image detection performance of `nsfw-detector-mini` compared with [Azure Content Safety AI](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) and [Falconsai](https://huggingface.co/Falconsai/nsfw_image_detection).

**F_safe** and **F_nsfw** below are class-wise F1 scores for the safe and NSFW classes, respectively. The results show that `nsfw-detector-mini` outperforms both Falconsai and Azure AI while using fewer parameters.
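The class-wise F1 scores referred to above can be computed as in this minimal sketch (the labels below are toy data, not the benchmark set):

```python
def classwise_f1(y_true, y_pred, positive):
    """F1 score for one class, treating `positive` as the positive label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy ground truth and predictions, just to show the computation.
y_true = ["safe", "safe", "nsfw", "nsfw", "nsfw", "safe"]
y_pred = ["safe", "nsfw", "nsfw", "nsfw", "safe", "safe"]
f_safe = classwise_f1(y_true, y_pred, "safe")
f_nsfw = classwise_f1(y_true, y_pred, "nsfw")
```

Reporting F1 per class, rather than a single accuracy number, matters for moderation because the safe/NSFW classes are typically imbalanced and the costs of the two error types differ.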
- Expanded benchmarks: latency, throughput, memory on common tasks
- Documentation and examples to help you pick the right option
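As a sketch of what such latency and throughput measurements could look like (this harness is illustrative, not the project's benchmark code):

```python
import statistics
import time

def benchmark(fn, warmup=3, iters=50):
    """Measure per-call latency (ms) and derived throughput for `fn`."""
    for _ in range(warmup):
        fn()  # warm caches / JIT before timing
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    ordered = sorted(samples)
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": ordered[int(0.95 * len(ordered)) - 1],
        "throughput_per_s": 1e3 / statistics.mean(samples),
    }

stats = benchmark(lambda: sum(range(1000)))
```

Percentile latencies (p50/p95) are generally more informative than a single mean for moderation pipelines, where tail latency determines user-facing delay.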
## Troubleshooting

- ImportError (PIL/torch/transformers):
  - Install the package (`pip install moderators`) or let auto‑install run (ensure `MODERATORS_DISABLE_AUTO_INSTALL` is unset). If you prefer manual dependency control, install extras: `pip install "moderators[transformers]"`.
- OSError: couldn’t find `config.json` / model files:
  - Check your model id or local folder path; ensure `config.json` is present.
- HTTP errors when pulling from the Hub:
  - Verify connectivity and auth (if private). Use offline mode if already cached.
- GPU not used:
  - Ensure your framework is installed with CUDA support.
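To diagnose the last point, a quick check along these lines confirms whether a CUDA-enabled PyTorch build is importable (a sketch; the package itself is not guaranteed to ship such a helper):

```python
import importlib.util

def gpu_status():
    """Report whether torch is installed and whether CUDA is usable."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check works without torch
    if torch.cuda.is_available():
        return "cuda available"
    return "cpu-only build (or no visible GPU)"
```

If this reports a CPU-only build, reinstall PyTorch with the CUDA variant matching your driver.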