Run open‑source content moderation models (NSFW, toxicity, etc.) with one line — from Python or the CLI. Works with Hugging Face models or local folders. Outputs are normalized and app‑ready.
- One simple API and CLI
- Use any compatible Transformers model from the Hub or disk
- Normalized JSON output you can plug into your app
- Optional auto‑install of dependencies for a smooth first run
Note: Moderators currently ships a Transformers‑based integration for image and text classification.
Built for developers and researchers who want to quickly evaluate or deploy moderation models without wiring up different runtimes or handling model‑specific output formats.
Pick one option:
Using pip (recommended):
```bash
pip install moderators
```

Using uv:

```bash
uv venv --python 3.10
source .venv/bin/activate
uv add moderators
```

From source (cloned repo):

```bash
uv sync --extra transformers
```

Requirements:
- Python 3.10+
- For image tasks, Pillow and a DL framework (PyTorch preferred). Moderators can auto‑install these.
Run a model in a few lines.
Python API:
```python
from moderators import AutoModerator

# Load from the Hugging Face Hub (e.g., NSFW image classifier)
moderator = AutoModerator.from_pretrained("viddexa/nsfw-detector-mini")

# Run on a local image path
result = moderator("/path/to/image.jpg")
print(result)
```

CLI:

```bash
moderators viddexa/nsfw-detector-mini /path/to/image.jpg
```

Text example (sentiment/toxicity):

```bash
moderators distilbert/distilbert-base-uncased-finetuned-sst-2-english "I love this!"
```

You get a list of normalized prediction entries. In Python, they’re dataclasses; in the CLI, you get JSON.
Python shape (pretty-printed):
```python
[
    PredictionResult(
        source_path='',
        classifications={'NSFW': 0.9821},
        detections=[],
        raw_output={'label': 'NSFW', 'score': 0.9821}
    ),
    ...
]
```
JSON shape (CLI output):
```json
[
  {
    "source_path": "",
    "classifications": {"NSFW": 0.9821},
    "detections": [],
    "raw_output": {"label": "NSFW", "score": 0.9821}
  }
]
```

Tip (Python):
```python
from dataclasses import asdict
from moderators import AutoModerator

moderator = AutoModerator.from_pretrained("viddexa/nsfw-detector-mini")
result = moderator("/path/to/image.jpg")
json_ready = [asdict(r) for r in result]
print(json_ready)
```

Image source:
Raw model scores:
```json
[
  { "normal": 0.9999891519546509 },
  { "nsfw": 0.000010843970812857151 }
]
```

Moderators normalized JSON shape:
```json
[
  { "source_path": "", "classifications": {"normal": 0.9999891519546509}, "detections": [], "raw_output": {"label": "normal", "score": 0.9999891519546509} },
  { "source_path": "", "classifications": {"nsfw": 0.000010843970812857151}, "detections": [], "raw_output": {"label": "nsfw", "score": 0.000010843970812857151} }
]
```

The table below places Moderators next to raw Transformers `pipeline()` usage.
| Feature | Transformers.pipeline() | Moderators |
|---|---|---|
| Usage | `pipeline("task", model=...)` | `AutoModerator.from_pretrained(...)` |
| Model configuration | Manual or model-specific | Automatic via config.json (task inference when possible) |
| Output format | Varies by model/pipe | Standardized PredictionResult / JSON |
| Requirements | Manual dependency setup | Optional automatic pip/uv install |
| CLI | None or project-specific | Built-in moderators CLI (JSON to stdout) |
| Extensibility | Mostly one ecosystem | Open to new integrations (same interface) |
| Error messages | Vary by model | Consistent, task/integration-guided |
| Task detection | User-provided | Auto-inferred from config when possible |
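Returning to the normalized shape shown earlier: if you prefer a single label→score mapping per input, the per‑label entries can be folded together. A minimal sketch — the `merge_scores` helper is illustrative, not part of the Moderators API:

```python
def merge_scores(results):
    """Collapse a list of normalized entries into one label -> score dict."""
    scores = {}
    for entry in results:
        scores.update(entry["classifications"])
    return scores

# Entries shaped like the normalized JSON output shown earlier.
normalized = [
    {"source_path": "", "classifications": {"normal": 0.9999891519546509},
     "detections": [], "raw_output": {"label": "normal", "score": 0.9999891519546509}},
    {"source_path": "", "classifications": {"nsfw": 1.0843970812857151e-05},
     "detections": [], "raw_output": {"label": "nsfw", "score": 1.0843970812857151e-05}},
]
print(merge_scores(normalized))
```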
- From the Hub: pass a model id like `viddexa/nsfw-detector-mini` or any compatible Transformers model.
- From disk: pass a local folder that contains a `config.json` next to your weights.
Moderators detects the task and integration from the config when possible, so you don’t have to specify pipelines manually.
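To illustrate what config‑based task inference can look like, here is a rough sketch that maps a model's `architectures` field from `config.json` to a task name. The heuristic is an assumption for illustration; the library's actual detection logic may differ:

```python
import json
from pathlib import Path

def infer_task(model_dir):
    """Guess a task from a local config.json (illustrative heuristic only)."""
    config = json.loads((Path(model_dir) / "config.json").read_text())
    for arch in config.get("architectures", []):
        if arch.endswith("ForImageClassification"):
            return "image-classification"
        if arch.endswith("ForSequenceClassification"):
            return "text-classification"
    return None  # fall back to asking the user for a task
```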
Run models from your terminal and get normalized JSON to stdout.
Usage:
```bash
moderators <model_id_or_local_dir> <input> [--local-files-only]
```

Examples:
- Text classification:

  ```bash
  moderators distilbert/distilbert-base-uncased-finetuned-sst-2-english "I love this!"
  ```

- Image classification (local image):

  ```bash
  moderators viddexa/nsfw-detector-mini /path/to/image.jpg
  ```
Tips:
- `--local-files-only` forces offline usage if files are cached.
- The CLI prints a single JSON array (easy to pipe or parse).
- Small demos and benchmarking script: `examples/README.md`, `examples/benchmarks.py`
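Because the CLI writes one JSON array to stdout, post‑processing is a `json.loads` away. A small sketch using a captured stdout string shaped like the examples above:

```python
import json

# Stdout captured from a `moderators` run (shape mirrors the examples above).
cli_stdout = '''[
  {"source_path": "", "classifications": {"NSFW": 0.9821},
   "detections": [], "raw_output": {"label": "NSFW", "score": 0.9821}}
]'''

results = json.loads(cli_stdout)
print(results[0]["classifications"])  # {'NSFW': 0.9821}
```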
- Which tasks are supported?
- Image and text classification via Transformers (e.g., NSFW, sentiment/toxicity). More can be added over time.
- Does it need a GPU?
- No. CPU is fine for small models. If your framework was installed with CUDA support, the GPU will be used automatically.
- How are dependencies handled?
- If something is missing (e.g., `torch`, `transformers`, `Pillow`), Moderators can auto‑install via `uv` or `pip` unless you disable it. To disable: `export MODERATORS_DISABLE_AUTO_INSTALL=1`
- Can I run offline?
- Yes. Use `--local-files-only` in the CLI or `local_files_only=True` in Python after you have the model cached.
- What does “normalized output” mean?
- Regardless of the underlying pipeline, you always get the same result schema (classifications/detections/raw_output), so your app code stays simple.
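Because every integration returns the same schema, app‑level policy code can stay model‑agnostic. A sketch of a simple thresholding rule — the `should_flag` helper and the 0.8 threshold are illustrative app choices, not part of the library:

```python
def should_flag(results, label="NSFW", threshold=0.8):
    """Return True if any entry scores `label` at or above `threshold`."""
    return any(
        entry["classifications"].get(label, 0.0) >= threshold
        for entry in results
    )

# Works on the normalized dict/JSON shape shown earlier.
predictions = [
    {"source_path": "", "classifications": {"NSFW": 0.9821},
     "detections": [], "raw_output": {"label": "NSFW", "score": 0.9821}},
]
print(should_flag(predictions))  # True
```

The same function works unchanged whether the entries came from an image or a text model.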
What’s planned:
- Ultralytics integration (YOLO family) via `UltralyticsModerator`
- Optional ONNX Runtime backend where applicable
- Simple backend switch (API/CLI flag, e.g., `--backend onnx|torch`)
- Expanded benchmarks: latency, throughput, memory on common tasks
- Documentation and examples to help you pick the right option
- ImportError (PIL/torch/transformers):
- Install the package (`pip install moderators`) or let auto‑install run (ensure `MODERATORS_DISABLE_AUTO_INSTALL` is unset). If you prefer manual dependency control, install extras: `pip install "moderators[transformers]"`.
- OSError: couldn’t find `config.json` / model files:
- Check your model id or local folder path; ensure `config.json` is present.
- HTTP errors when pulling from the Hub:
- Verify connectivity and auth (if private). Use offline mode if already cached.
- GPU not used:
- Ensure your framework is installed with CUDA support.
Apache-2.0. See LICENSE.
