
Commit a57ab3c: add quick start and installation docs
1 parent 0199641 commit a57ab3c

File tree: 6 files changed (+412, -14 lines)

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -33,7 +33,7 @@ pip install vllm-judge[dev]
 from vllm_judge import Judge

 # Initialize with vLLM url
-judge = await Judge.from_url("http://localhost:8000")
+judge = Judge.from_url("http://localhost:8000")

 # Simple evaluation
 result = await judge.evaluate(
```

docs/getting-started/installation.md

Lines changed: 151 additions & 0 deletions (new file)
# Installation

This guide covers the installation of vLLM Judge and its prerequisites.

## Prerequisites

### Python Version

vLLM Judge requires Python 3.8 or higher. You can check your Python version:

```bash
python --version
```
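
The same check can be scripted so that setup tooling fails fast. A minimal sketch using only the standard library; the `python_ok` helper is illustrative and not part of vLLM Judge:

```python
import sys

# vLLM Judge requires Python 3.8 or higher.
MIN_VERSION = (3, 8)

def python_ok(version_info=sys.version_info) -> bool:
    """Return True when the interpreter meets the minimum supported version."""
    return tuple(version_info[:2]) >= MIN_VERSION

if __name__ == "__main__":
    if not python_ok():
        raise SystemExit(f"vLLM Judge needs Python {MIN_VERSION[0]}.{MIN_VERSION[1]}+")
    print("Python version OK")
```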

### vLLM Server

You need access to a vLLM server running your preferred model. If you don't have one:

```bash
# Install vLLM
pip install vllm

# Start a model server
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-3-8b-instruct \
    --port 8000
```
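
Before pointing vLLM Judge at the server, you can confirm it is reachable. A minimal standard-library sketch; it assumes the server exposes the OpenAI-compatible `/v1/models` endpoint (which vLLM's OpenAI API server does), and the `models_url`/`list_models` helpers are illustrative, not part of vLLM Judge:

```python
import json
from urllib.request import urlopen

def models_url(base_url: str) -> str:
    """Build the OpenAI-compatible model-list endpoint from the server base URL."""
    return base_url.rstrip("/") + "/v1/models"

def list_models(base_url: str = "http://localhost:8000") -> list:
    """Return the model ids the server is serving (raises if it is unreachable)."""
    with urlopen(models_url(base_url), timeout=5) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload.get("data", [])]
```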

## Installing vLLM Judge

### Basic Installation

Install the core library with pip:

```bash
pip install vllm-judge
```

This installs the essential dependencies:

- `httpx` - Async HTTP client
- `pydantic` - Data validation
- `tenacity` - Retry logic
- `click` - CLI interface

### Optional Features

#### API Server

To run vLLM Judge as an API server:

```bash
pip install vllm-judge[api]
```

This adds:

- `fastapi` - Web framework
- `uvicorn` - ASGI server
- `websockets` - WebSocket support

#### Jinja2 Templates

For advanced template support:

```bash
pip install vllm-judge[jinja2]
```

This enables the Jinja2 template engine for complex template logic.
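
As an illustration of what template logic adds over plain format strings, here is a minimal sketch using Jinja2 directly; the evaluation-prompt template and its fields are hypothetical, not part of the vLLM Judge API:

```python
from jinja2 import Template  # available after: pip install vllm-judge[jinja2]

# Loops and conditionals are what plain str.format() cannot express.
template = Template(
    "Rate the following {{ kind }}:\n"
    "{% for c in criteria %}- {{ c }}\n{% endfor %}"
    "{% if strict %}Be strict in your scoring.{% endif %}"
)

prompt = template.render(kind="response", criteria=["accuracy", "tone"], strict=True)
print(prompt)
```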

#### Everything

Install all optional features:

```bash
pip install vllm-judge[dev]
```

### Installation from Source

To install the latest development version:

```bash
# Clone the repository
git clone https://github.com/saichandrapandraju/vllm-judge.git
cd vllm-judge

# Install in development mode
pip install -e .

# With all extras
pip install -e ".[dev]"
```

## Verifying Installation

### Basic Check

```python
# In Python
from vllm_judge import Judge
print("vLLM Judge installed successfully!")
```

### CLI Check

```bash
# Check CLI installation
vllm-judge --help
```

### Version Check

```python
import vllm_judge
print(f"vLLM Judge version: {vllm_judge.__version__}")
```
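
For scripted environments (for example CI or a setup check), the same verification can be done without letting an `ImportError` crash the script. A minimal standard-library sketch; the `installed` and `version_of` helpers are illustrative, not part of vLLM Judge:

```python
import importlib.util
from importlib import metadata  # available on Python 3.8+

def installed(module: str = "vllm_judge") -> bool:
    """True when the module can be found in the current environment."""
    return importlib.util.find_spec(module) is not None

def version_of(dist: str = "vllm-judge") -> str:
    """Installed distribution version, or 'not installed' when absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return "not installed"

if __name__ == "__main__":
    print(f"vllm-judge: {version_of() if installed() else 'not installed'}")
```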

## Environment Setup

### Virtual Environment (Recommended)

It's recommended to use a virtual environment:

#### venv

```bash
# Create virtual environment
python -m venv vllm-judge-env

# Activate it
# On Linux/Mac:
source vllm-judge-env/bin/activate
# On Windows:
vllm-judge-env\Scripts\activate

# Install vLLM Judge
pip install vllm-judge
```

#### conda

```bash
# Create conda environment
conda create -n vllm-judge python=3.9
conda activate vllm-judge

# Install vLLM Judge
pip install vllm-judge
```
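
After activating, you can confirm that the Python process is actually running inside the environment. A minimal standard-library sketch; the helpers are illustrative. Note that the venv check relies on `sys.prefix` diverging from `sys.base_prefix`, which does not apply to conda environments, so those are checked via the environment variable conda sets on activation:

```python
import os
import sys

def in_virtualenv() -> bool:
    """True inside a venv/virtualenv: sys.prefix diverges from the base prefix."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

def in_conda_env() -> bool:
    """True inside an activated conda environment (conda sets CONDA_DEFAULT_ENV)."""
    return bool(os.environ.get("CONDA_DEFAULT_ENV"))
```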
