
Commit c8f91a7

docs: restructure README with visual design and separate documentation (#13)
Transform README into a visual-first design with logo, emojis, and performance benchmarks, while moving detailed content into dedicated documentation files.

- Add centered logo and visual styling to README header
- Include performance comparison table (nsfw-detector-mini vs Azure AI vs Falconsai)
- Reduce README from 221 to 120 lines by moving content to docs/
- Create comprehensive documentation structure:
  - docs/INSTALLATION.md: Detailed installation options (pip, uv, source)
  - docs/CLI.md: Complete CLI usage guide with examples
  - docs/API.md: Python API reference and advanced usage
  - docs/FAQ.md: Common questions and answers
  - docs/TROUBLESHOOTING.md: Issue resolution guide
- Add navigation links from README to separate documentation files
- Update .gitignore to exclude .DS_Store files
1 parent c9a168c commit c8f91a7

File tree: 7 files changed, +459 −152 lines

.gitignore

Lines changed: 4 additions & 1 deletion

@@ -213,4 +213,7 @@ uv.lock
 .mcp.json
 
 # vscode
-.vscode/
+.vscode/
+
+# macos
+.DS_Store

README.md

Lines changed: 62 additions & 151 deletions
@@ -1,53 +1,49 @@
+<div align="center">
+<img src="https://github.com/viddexa/moderators/releases/download/v0.1.1/logo-v2.jpeg" width="400" alt="Moderators Logo">
+
 # Moderators
 
 [![Moderators PYPI](https://img.shields.io/pypi/v/moderators?color=blue)](https://pypi.org/project/moderators/)
 [![Moderators HuggingFace Space](https://raw.githubusercontent.com/obss/sahi/main/resources/hf_spaces_badge.svg)](https://huggingface.co/spaces/viddexa/moderators)
 [![Moderators CI](https://github.com/viddexa/moderators/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/viddexa/moderators/actions/workflows/ci.yml)
 [![Moderators License](https://img.shields.io/pypi/l/moderators)](https://github.com/viddexa/moderators/blob/main/LICENSE)
 
-Run open‑source content moderation models (NSFW, toxicity, etc.) with one line — from Python or the CLI. Works with Hugging Face models or local folders. Outputs are normalized and app‑ready.
+Run open‑source content moderation models (NSFW, nudity, etc.) with one line — from Python or the CLI.
+
+</div>
+
+## ✨ Key Highlights
 
 - One simple API and CLI
 - Use any compatible Transformers model from the Hub or disk
 - Normalized JSON output you can plug into your app
 - Optional auto‑install of dependencies for a smooth first run
 
-Note: Today we ship a Transformers-based integration for image/text classification.
+## 🚀 Performance
 
+NSFW image detection performance of `nsfw-detector-mini` compared with [Azure Content Safety AI](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) and [Falconsai](https://huggingface.co/Falconsai/nsfw_image_detection).
 
-## Who is this for?
-Developers and researchers/academics who want to quickly evaluate or deploy moderation models without wiring different runtimes or dealing with model‑specific output formats.
+**F_safe** and **F_nsfw** below are class-wise F1 scores for the safe and nsfw classes, respectively. Results show that `nsfw-detector-mini` performs better than Falconsai and Azure AI with fewer parameters.
 
+| Model | F_safe | F_nsfw | Params |
+| ------------------------------------------------------------------------------------ | ---------: | ---------: | ------: |
+| [nsfw-detector-nano](https://huggingface.co/viddexa/nsfw-detection-nano) | 96.91% | 96.87% | 4M |
+| **[nsfw-detector-mini](https://huggingface.co/viddexa/nsfw-detector-mini)** | **97.90%** | **97.89%** | **17M** |
+| [Azure AI](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) | 96.79% | 96.57% | N/A |
+| [Falconsai](https://huggingface.co/Falconsai/nsfw_image_detection) | 89.52% | 89.32% | 85M |
 
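The table above reports class-wise F1 for the safe and nsfw classes. As a reminder of the metric (not the benchmark's actual counts, which are not given here), class-wise F1 can be computed from one class's true positives, false positives, and false negatives:

```python
def class_f1(tp: int, fp: int, fn: int) -> float:
    """Class-wise F1: harmonic mean of precision and recall for a single class."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Illustrative counts only -- not the data behind the table above.
f1_safe = class_f1(tp=969, fp=31, fn=31)
```

A model with 969 true positives and 31 each of false positives and false negatives on the safe class would score an F_safe of 0.969, in the same ballpark as the table's entries.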
-## Installation
-Pick one option:
+## 📦 Installation
 
-Using pip (recommended):
 ```bash
 pip install moderators
 ```
 
-Using uv:
-```bash
-uv venv --python 3.10
-source .venv/bin/activate
-uv add moderators
-```
-
-From source (cloned repo):
-```bash
-uv sync --extra transformers
-```
-
-Requirements:
-- Python 3.10+
-- For image tasks, Pillow and a DL framework (PyTorch preferred). Moderators can auto‑install these.
+For detailed installation options, see the [Installation Guide](docs/INSTALLATION.md).
 
+## 🚀 Quickstart
 
-## Quickstart
-Run a model in a few lines.
+**Python API:**
 
-Python API:
 ```python
 from moderators import AutoModerator
 
@@ -59,163 +55,78 @@ result = moderator("/path/to/image.jpg")
 print(result)
 ```
 
-CLI:
+**CLI:**
+
 ```bash
+# Image classification
 moderators viddexa/nsfw-detector-mini /path/to/image.jpg
-```
 
-Text example (sentiment/toxicity):
-```bash
+# Text classification
 moderators distilbert/distilbert-base-uncased-finetuned-sst-2-english "I love this!"
 ```
 
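Elsewhere in this diff, the old README's "Tip (Python)" snippet (serializing results with `dataclasses.asdict`) is removed; the pattern still applies to the `PredictionResult` shape the diff documents. A runnable sketch with a stand-in dataclass, so the moderators package itself is not required:

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class PredictionResult:
    """Stand-in mirroring the result shape shown in this diff (not the real class)."""
    source_path: str = ""
    classifications: dict = field(default_factory=dict)
    detections: list = field(default_factory=list)
    raw_output: dict = field(default_factory=dict)


results = [PredictionResult(classifications={"safe": 0.99},
                            raw_output={"label": "safe", "score": 0.99})]
# asdict() turns each dataclass into a plain dict, ready for json.dumps.
payload = json.dumps([asdict(r) for r in results])
```

With the real library, `results` would come from calling the moderator; the serialization step is the same.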
+## 📊 Real Output Example
 
-## What do results look like?
-You get a list of normalized prediction entries. In Python, they’re dataclasses; in the CLI, you get JSON.
+![Example input image](https://img.freepik.com/free-photo/front-view-woman-doing-exercises_23-2148498678.jpg?t=st=1760435237~exp=1760438837~hmac=9a0a0a56f83d8fa52f424c7acdf4174dffc3e4d542e189398981a13af3f82b40&w=360)
 
-Python shape (pretty-printed):
-```text
-[
-  PredictionResult(
-    source_path='',
-    classifications={'NSFW': 0.9821},
-    detections=[],
-    raw_output={'label': 'NSFW', 'score': 0.9821}
-  ),
-  ...
-]
-```
+Moderators normalized JSON output:
 
-JSON shape (CLI output):
 ```json
 [
   {
     "source_path": "",
-    "classifications": {"NSFW": 0.9821},
+    "classifications": { "safe": 0.9999891519546509 },
     "detections": [],
-    "raw_output": {"label": "NSFW", "score": 0.9821}
+    "raw_output": { "label": "safe", "score": 0.9999891519546509 }
+  },
+  {
+    "source_path": "",
+    "classifications": { "nsfw": 0.000010843970812857151 },
+    "detections": [],
+    "raw_output": { "label": "nsfw", "score": 0.000010843970812857151 }
   }
 ]
 ```
 
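One way a downstream app might consume this normalized shape is to merge the per-entry classification maps and take the highest-scoring label. A sketch using the exact payload from the README's example (the merging strategy is illustrative, not part of the library):

```python
import json

# The normalized JSON array shown in the README example above.
cli_output = """
[
  {"source_path": "", "classifications": {"safe": 0.9999891519546509},
   "detections": [], "raw_output": {"label": "safe", "score": 0.9999891519546509}},
  {"source_path": "", "classifications": {"nsfw": 0.000010843970812857151},
   "detections": [], "raw_output": {"label": "nsfw", "score": 0.000010843970812857151}}
]
"""

results = json.loads(cli_output)
# Merge each entry's classification map into one {label: score} dict.
scores = {k: v for r in results for k, v in r["classifications"].items()}
top_label = max(scores, key=scores.get)
```

Because every integration emits this same schema, the consuming code above does not change when the underlying model does.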
-Tip (Python):
-```python
-from dataclasses import asdict
-from moderators import AutoModerator
+## 🔍 Comparison at a Glance
 
-moderator = AutoModerator.from_pretrained("viddexa/nsfw-detector-mini")
-result = moderator("/path/to/image.jpg")
-json_ready = [asdict(r) for r in result]
-print(json_ready)
-```
+| Feature | Transformers.pipeline() | Moderators |
+| ------------------- | ----------------------------- | ---------------------------------------------------------- |
+| Usage | `pipeline("task", model=...)` | `AutoModerator.from_pretrained(...)` |
+| Model configuration | Manual or model-specific | Automatic via `config.json` (task inference when possible) |
+| Output format | Varies by model/pipe | Standardized `PredictionResult` / JSON |
+| Requirements | Manual dependency setup | Optional automatic `pip/uv` install |
+| CLI | None or project-specific | Built-in `moderators` CLI (JSON to stdout) |
+| Extensibility | Mostly one ecosystem | Open to new integrations (same interface) |
+| Error messages | Vary by model | Consistent, task/integration-guided |
+| Task detection | User-provided | Auto-inferred from config when possible |
 
+## 🎯 Pick a Model
 
-## Example: Real output on a sample image
-Image source:
+- **From the Hub**: Pass a model ID like `viddexa/nsfw-detector-mini` or any compatible Transformers model
+- **From disk**: Pass a local folder that contains a `config.json` next to your weights
 
-![Example input image](https://img.freepik.com/free-photo/front-view-woman-doing-exercises_23-2148498678.jpg?t=st=1760435237~exp=1760438837~hmac=9a0a0a56f83d8fa52f424c7acdf4174dffc3e4d542e189398981a13af3f82b40&w=360)
+Moderators detects the task and integration from the config when possible, so you don't have to specify pipelines manually.
 
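Per the "From disk" bullet, a local model folder must carry a `config.json` next to the weights. A small pre-flight check one might run before handing a path to the library (the helper function and the config content are illustrative, not part of the moderators API):

```python
import json
import tempfile
from pathlib import Path


def looks_like_local_model(folder: str) -> bool:
    """Cheap pre-flight check: a local model folder should contain a config.json."""
    return (Path(folder) / "config.json").is_file()


# Demo with a throwaway folder; "model_type" here is a placeholder key.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "config.json").write_text(json.dumps({"model_type": "vit"}))
    has_config = looks_like_local_model(d)
```

Checking up front gives a clearer error than letting a loader fail partway through.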
-Raw model scores:
-```json
-[
-  { "normal": 0.9999891519546509 },
-  { "nsfw": 0.000010843970812857151 }
-]
-```
+## 📚 Documentation
 
-Moderators normalized JSON shape:
-```json
-[
-  { "source_path": "", "classifications": {"normal": 0.9999891519546509}, "detections": [], "raw_output": {"label": "normal", "score": 0.9999891519546509} },
-  { "source_path": "", "classifications": {"nsfw": 0.000010843970812857151}, "detections": [], "raw_output": {"label": "nsfw", "score": 0.000010843970812857151} }
-]
-```
+- [Installation Guide](docs/INSTALLATION.md) - Detailed installation options and requirements
+- [CLI Reference](docs/CLI.md) - Complete command-line usage guide
+- [API Documentation](docs/API.md) - Python API reference and output formats
+- [FAQ](docs/FAQ.md) - Frequently asked questions
+- [Troubleshooting](docs/TROUBLESHOOTING.md) - Common issues and solutions
 
+## 📝 Examples
 
-## Comparison at a glance
-The table below places Moderators next to the raw Transformers `pipeline()` usage.
+Small demos and benchmarking script: `examples/README.md`, `examples/benchmarks.py`
 
-| Feature | Transformers.pipeline() | Moderators |
-|---|---|---|
-| Usage | `pipeline("task", model=...)` | `AutoModerator.from_pretrained(...)` |
-| Model configuration | Manual or model-specific | Automatic via `config.json` (task inference when possible) |
-| Output format | Varies by model/pipe | Standardized `PredictionResult` / JSON |
-| Requirements | Manual dependency setup | Optional automatic `pip/uv` install |
-| CLI | None or project-specific | Built-in `moderators` CLI (JSON to stdout) |
-| Extensibility | Mostly one ecosystem | Open to new integrations (same interface) |
-| Error messages | Vary by model | Consistent, task/integration-guided |
-| Task detection | User-provided | Auto-inferred from config when possible |
+## 🗺️ Roadmap
 
-
-## Pick a model
-- From the Hub: pass a model id like `viddexa/nsfw-detector-mini` or any compatible Transformers model.
-- From disk: pass a local folder that contains a `config.json` next to your weights.
-
-Moderators detects the task and integration from the config when possible, so you don’t have to specify pipelines manually.
-
-
-## Command line usage
-Run models from your terminal and get normalized JSON to stdout.
-
-Usage:
-```bash
-moderators <model_id_or_local_dir> <input> [--local-files-only]
-```
-
-Examples:
-- Text classification:
-```bash
-moderators distilbert/distilbert-base-uncased-finetuned-sst-2-english "I love this!"
-```
-- Image classification (local image):
-```bash
-moderators viddexa/nsfw-detector-mini /path/to/image.jpg
-```
-
-Tips:
-- `--local-files-only` forces offline usage if files are cached.
-- The CLI prints a single JSON array (easy to pipe or parse).
-
-
-## Examples
-- Small demos and benchmarking script: `examples/README.md`, `examples/benchmarks.py`
-
-
-## FAQ
-- Which tasks are supported?
-  - Image and text classification via Transformers (e.g., NSFW, sentiment/toxicity). More can be added over time.
-- Does it need a GPU?
-  - No. CPU is fine for small models. If your framework has CUDA installed, it will use it.
-- How are dependencies handled?
-  - If something is missing (e.g., `torch`, `transformers`, `Pillow`), Moderators can auto‑install via `uv` or `pip` unless you disable it. To disable:
-    ```bash
-    export MODERATORS_DISABLE_AUTO_INSTALL=1
-    ```
-- Can I run offline?
-  - Yes. Use `--local-files-only` in the CLI or `local_files_only=True` in Python after you have the model cached.
-- What does “normalized output” mean?
-  - Regardless of the underlying pipeline, you always get the same result schema (classifications/detections/raw_output), so your app code stays simple.
-
-
-## Roadmap
-What’s planned:
 - Ultralytics integration (YOLO family) via `UltralyticsModerator`
 - Optional ONNX Runtime backend where applicable
 - Simple backend switch (API/CLI flag, e.g., `--backend onnx|torch`)
 - Expanded benchmarks: latency, throughput, memory on common tasks
-- Documentation and examples to help you pick the right option
-
-
-## Troubleshooting
-- ImportError (PIL/torch/transformers):
-  - Install the package (`pip install moderators`) or let auto‑install run (ensure `MODERATORS_DISABLE_AUTO_INSTALL` is unset). If you prefer manual dependency control, install extras: `pip install "moderators[transformers]"`.
-- OSError: couldn’t find `config.json` / model files:
-  - Check your model id or local folder path; ensure `config.json` is present.
-- HTTP errors when pulling from the Hub:
-  - Verify connectivity and auth (if private). Use offline mode if already cached.
-- GPU not used:
-  - Ensure your framework is installed with CUDA support.
 
+## 📄 License
 
-## License
-Apache-2.0. See `LICENSE`.
+Apache-2.0. See [LICENSE](LICENSE).
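The FAQ removed by this diff documents opting out of dependency auto-install with `export MODERATORS_DISABLE_AUTO_INSTALL=1`. A sketch of how such an environment-variable gate can be read (the helper is illustrative; the library's exact truthiness rules may differ):

```python
import os


def auto_install_enabled(env=None) -> bool:
    """Auto-install stays on unless MODERATORS_DISABLE_AUTO_INSTALL is set."""
    env = os.environ if env is None else env
    return not env.get("MODERATORS_DISABLE_AUTO_INSTALL")


on_by_default = auto_install_enabled({})
disabled = auto_install_enabled({"MODERATORS_DISABLE_AUTO_INSTALL": "1"})
```

Passing a dict instead of reading `os.environ` directly keeps the gate easy to test.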

0 commit comments
