Commit a69b905

Adds vLLM Simulator Support (#898)
* Adds vLLM Simulator Support

  Signed-off-by: Daneyon Hansen <[email protected]>

* Resolves Nir review feedback

  Signed-off-by: Daneyon Hansen <[email protected]>

---------

Signed-off-by: Daneyon Hansen <[email protected]>
1 parent 4b09ad7 commit a69b905

File tree

4 files changed, +89 -18 lines

Makefile

Lines changed: 3 additions & 1 deletion

@@ -35,7 +35,9 @@ IMAGE_NAME := epp
 IMAGE_REPO ?= $(IMAGE_REGISTRY)/$(IMAGE_NAME)
 IMAGE_TAG ?= $(IMAGE_REPO):$(GIT_TAG)
 PROJECT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
-E2E_MANIFEST_PATH ?= config/manifests/vllm/gpu-deployment.yaml
+# The path to the E2E manifest file. It can be overridden by setting the
+# E2E_MANIFEST_PATH environment variable. Note that HF_TOKEN must be set when using the GPU-based manifest.
+E2E_MANIFEST_PATH ?= config/manifests/vllm/sim-deployment.yaml
 
 SYNCER_IMAGE_NAME := lora-syncer
 SYNCER_IMAGE_REPO ?= $(IMAGE_REGISTRY)/$(SYNCER_IMAGE_NAME)
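
Because the variable is assigned with `?=`, the new default can be overridden from the environment or the `make` command line. A minimal sketch of both invocations, assuming the `test-e2e` target described in `test/e2e/epp/README.md`:

```bash
# Default: the e2e suite now deploys the vLLM simulator manifest.
make test-e2e

# Override: use the GPU-based manifest instead; HF_TOKEN must be set for it.
export HF_TOKEN=<MY_HF_TOKEN>
E2E_MANIFEST_PATH=config/manifests/vllm/gpu-deployment.yaml make test-e2e
```
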
config/manifests/vllm/sim-deployment.yaml (new file)

Lines changed: 43 additions & 0 deletions

@@ -0,0 +1,43 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: vllm-llama3-8b-instruct
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: vllm-llama3-8b-instruct
+  template:
+    metadata:
+      labels:
+        app: vllm-llama3-8b-instruct
+    spec:
+      containers:
+        - name: vllm-sim
+          image: ghcr.io/llm-d/llm-d-inference-sim:v0.1.0
+          imagePullPolicy: Always
+          args:
+            - --model
+            - meta-llama/Llama-3.1-8B-Instruct
+            - --port
+            - "8000"
+            - --max-loras
+            - "2"
+            - --lora
+            - food-review-1
+          env:
+            - name: POD_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.name
+            - name: NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          ports:
+            - containerPort: 8000
+              name: http
+              protocol: TCP
+          resources:
+            requests:
+              cpu: 10m
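
Once this manifest is applied, the simulator can be smoke-tested directly. The following is a sketch, not part of the commit: it assumes the simulator exposes an OpenAI-compatible `/v1/completions` route on the configured port (llm-d-inference-sim describes itself as a vLLM simulator, so an OpenAI-style API is expected), and the model name must match the `--model` argument above.

```bash
# Forward the simulator's port locally (deployment name from the manifest above).
kubectl port-forward deployment/vllm-llama3-8b-instruct 8000:8000 &

# Assumed OpenAI-compatible completions route; responses are simulated, not real inference.
curl -s http://localhost:8000/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello", "max_tokens": 10}'
```
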

site-src/guides/index.md

Lines changed: 21 additions & 5 deletions

@@ -18,22 +18,26 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
 
 ### Deploy Sample Model Server
 
-Two options are supported for running the model server:
+Three options are supported for running the model server:
 
-1. GPU-based model server.
+1. GPU-based model server.
    Requirements: a Hugging Face access token that grants access to the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
 
-1. CPU-based model server (not using GPUs).
-   The sample uses the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
+1. CPU-based model server (not using GPUs).
+   The sample uses the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
 
-Choose one of these options and follow the steps below. Please do not deploy both, as the deployments have the same name and will override each other.
+1. [vLLM Simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) model server (not using GPUs).
+   The sample is configured to simulate the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model.
+
+Choose one of these options and follow the steps below. Please do not deploy more than one, as the deployments have the same name and will override each other.
 
 === "GPU-Based Model Server"
 
     For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
     Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.
 
     Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
+
     ```bash
     kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to the set of Llama models
     kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml

@@ -49,10 +53,22 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
     After running multiple configurations of these values we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially may even get better response times. For modifying the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
 
     Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
+
     ```bash
     kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
     ```
 
+=== "vLLM Simulator Model Server"
+
+    This option uses the [vLLM simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) to simulate a backend model server.
+    This setup uses the least amount of compute resources, does not require GPUs, and is ideal for test/dev environments.
+
+    To deploy the vLLM simulator, run the following command.
+
+    ```bash
+    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml
+    ```
+
 ### Install the Inference Extension CRDs
 
 === "Latest Release"

test/e2e/epp/README.md

Lines changed: 22 additions & 12 deletions

@@ -10,7 +10,13 @@ The end-to-end tests are designed to validate end-to-end Gateway API Inference E
 
 - [Go](https://golang.org/doc/install) installed on your machine.
 - [Make](https://www.gnu.org/software/make/manual/make.html) installed to run the end-to-end test target.
-- A Hugging Face Hub token with access to the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model.
+- (Optional) When using the GPU-based vLLM deployment, a Hugging Face Hub token with access to the
+  [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model is required.
+  After obtaining the token and being granted access to the model, set the `HF_TOKEN` environment variable:
+
+  ```sh
+  export HF_TOKEN=<MY_HF_TOKEN>
+  ```
 
 ## Running the End-to-End Tests
 
@@ -22,24 +22,28 @@ Follow these steps to run the end-to-end tests:
    git clone https://github.com/kubernetes-sigs/gateway-api-inference-extension.git && cd gateway-api-inference-extension
    ```
 
-1. **Export Your Hugging Face Hub Token**: The token is required to run the test model server:
+1. **Optional Settings**
 
-   ```sh
-   export HF_TOKEN=<MY_HF_TOKEN>
-   ```
+   - **Set the test namespace**: By default, the e2e test creates resources in the `inf-ext-e2e` namespace.
+     If you would like to change this namespace, set the following environment variable:
 
-1. **(Optional): Set the test namespace**: By default, the e2e test creates resources in the `inf-ext-e2e` namespace.
-   If you would like to change this namespace, set the following environment variable:
+     ```sh
+     export E2E_NS=<MY_NS>
+     ```
 
-   ```sh
-   export E2E_NS=<MY_NS>
-   ```
+   - **Set the model server manifest**: By default, the e2e test uses the [vLLM Simulator](https://github.com/llm-d/llm-d-inference-sim)
+     (`config/manifests/vllm/sim-deployment.yaml`) to simulate a backend model server. If you would like to change the model server
+     deployment type, set the following environment variable to one of the following:
+
+     ```sh
+     export E2E_MANIFEST_PATH=[config/manifests/vllm/gpu-deployment.yaml|config/manifests/vllm/cpu-deployment.yaml]
+     ```
 
 1. **Run the Tests**: Run the `test-e2e` target:
 
    ```sh
   make test-e2e
   ```
 
-   The test suite prints details for each step. Note that the `vllm-llama3-8b-instruct-pool` model server deployment
-   may take several minutes to report an `Available=True` status due to the time required for bootstraping.
+   The test suite prints details for each step. Note that the `vllm-llama3-8b-instruct` model server deployment
+   may take several minutes to report an `Available=True` status due to the time required for bootstrapping.
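
Putting the optional settings together, a default run and a fully overridden run might look like this (illustrative only; `my-e2e-ns` is a hypothetical namespace value):

```bash
# Default run: inf-ext-e2e namespace, vLLM simulator manifest.
make test-e2e

# Overridden run: custom namespace (hypothetical) and the CPU-based manifest.
export E2E_NS=my-e2e-ns
export E2E_MANIFEST_PATH=config/manifests/vllm/cpu-deployment.yaml
make test-e2e
```
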
