
Commit 282615f

reformat build & run commands. Add link to upstream vLLM docs
1 parent e960908 commit 282615f

File tree: 2 files changed, +24 −9 lines


README.md

Lines changed: 23 additions & 8 deletions
@@ -17,18 +17,33 @@ At the moment, the following detectors are supported:

## Building

-* `huggingface`: podman build -f detectors/Dockerfile.hf detectors
-* `llm_judge`: podman build -f detectors/Dockerfile.judge detectors
-* `builtIn`: podman build -f detectors/Dockerfile.builtIn detectors
+To build the detector images, use the following commands:
+
+| Detector | Build Command |
+|----------|---------------|
+| `huggingface` | `podman build -t $TAG -f detectors/Dockerfile.hf detectors` |
+| `llm_judge` | `podman build -t $TAG -f detectors/Dockerfile.judge detectors` |
+| `builtIn` | `podman build -t $TAG -f detectors/Dockerfile.builtIn detectors` |
+
+Replace `$TAG` with your desired image tag (e.g., `my-detector:latest`).
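
As a concrete illustration of the build table above, a `huggingface` build could look like the sketch below; the tag `my-detector:latest` is just the illustrative value from the note, not a required name:

```bash
# Build the Hugging Face detector image with an illustrative tag
podman build -t my-detector:latest -f detectors/Dockerfile.hf detectors
```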

## Running locally

-* `builtIn`: podman run -p 8080:8080 $BUILT_IN_IMAGE

-## Examples
+### Quick Start Commands
+
+| Detector | Run Command | Notes |
+|----------|-------------|-------|
+| `builtIn` | `podman run -p 8080:8080 $BUILT_IN_IMAGE` | Ready to use |
+| `huggingface` | `podman run -p 8000:8000 -e MODEL_DIR=/mnt/models/$MODEL_NAME -v $MODEL_PATH:/mnt/models/$MODEL_NAME:Z $HF_IMAGE` | Requires model download |
+| `llm_judge` | `podman run -p 8000:8000 -e VLLM_BASE_URL=$LLM_SERVER_URL $LLM_JUDGE_IMAGE` | Requires OpenAI-compatible LLM server |
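
As a worked example of the `huggingface` row in the quick-start table, the sketch below fills in placeholder values; the model name, host path, and image tag are illustrative assumptions, not values defined by this repo:

```bash
# Illustrative values -- substitute a model you have downloaded locally
MODEL_NAME=my-hf-model                   # placeholder model name
MODEL_PATH=$HOME/models/$MODEL_NAME      # placeholder host path to the model files
HF_IMAGE=my-detector:latest              # tag used when building the huggingface image

podman run -p 8000:8000 \
  -e MODEL_DIR=/mnt/models/$MODEL_NAME \
  -v $MODEL_PATH:/mnt/models/$MODEL_NAME:Z \
  $HF_IMAGE
```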
+
+### Detailed Setup Instructions & Examples

-- Check out [built-in detector examples](docs/builtin_examples.md) to see how to use the built-in detectors for file type validation and personally identifiable information (PII) detection
-- Check out [Hugging Face detector examples](docs/hf_examples.md) to see how to use the Hugging Face detectors for detecting toxic content and prompt injection
-- Check out [LLM Judge detector examples](docs/llm_judge_examples.md) to see how to use any OpenAI API compatible LLM for content assessment with built-in metrics and custom natural-language criteria
+- **Built-in detector**: No additional setup required. Check out [built-in detector examples](docs/builtin_examples.md) to see how to use the built-in detectors for file type validation and personally identifiable information (PII) detection
+- **Hugging Face detector**: Check out [Hugging Face detector examples](docs/hf_examples.md) for complete setup instructions and examples of how to use the Hugging Face detectors to detect toxic content and prompt injection
+- **LLM Judge detector**: Check out [LLM Judge detector examples](docs/llm_judge_examples.md) for complete setup instructions and examples of how to use any OpenAI API-compatible LLM for content assessment with built-in metrics and custom natural-language criteria

## API

See [IBM Detector API](https://foundation-model-stack.github.io/fms-guardrails-orchestrator/?urls.primaryName=Detector+API)

docs/llm_judge_examples.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ The LLM Judge detector integrates [vLLM Judge](https://github.com/trustyai-expla

### Local Setup

-1. **Start an OpenAI-compatible LLM server** (example with vLLM):
+1. **Start an OpenAI-compatible LLM server** (example with [vLLM](https://docs.vllm.ai/en/stable/serving/openai_compatible_server.html)):
```bash
vllm serve Qwen/Qwen2.5-7B-Instruct --port 9090
```
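
With a server like this running, the `llm_judge` detector container from the README's quick-start table can be pointed at it. A minimal sketch, assuming the container can reach the server at `http://localhost:9090`; adjust the host, port, and URL form to match your network setup:

```bash
# Point the LLM Judge detector at the vLLM server started above.
# The VLLM_BASE_URL value is an assumption; adjust for your environment.
podman run -p 8000:8000 \
  -e VLLM_BASE_URL=http://localhost:9090 \
  $LLM_JUDGE_IMAGE
```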
