`site-src/guides/index.md` (21 additions, 5 deletions)
````diff
@@ -18,22 +18,26 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
 
 ### Deploy Sample Model Server
 
-Two options are supported for running the model server:
+Three options are supported for running the model server:
 
-1. GPU-based model server.
+1. GPU-based model server.
    Requirements: a Hugging Face access token that grants access to the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
 
-1. CPU-based model server (not using GPUs).
-   The sample uses the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
+1. CPU-based model server (not using GPUs).
+   The sample uses the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
 
-Choose one of these options and follow the steps below. Please do not deploy both, as the deployments have the same name and will override each other.
+1. [vLLM Simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) model server (not using GPUs).
+   The sample is configured to simulate the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model.
+
+Choose one of these options and follow the steps below. Please do not deploy more than one, as the deployments have the same name and will override each other.
 
 === "GPU-Based Model Server"
 
     For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
     Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.
 
     Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
+
     ```bash
     kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to the set of Llama models
````
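The hunk above ends inside the GPU tab, just after the Hugging Face secret is created. For orientation, the full GPU path boils down to roughly the commands below. This is a sketch rather than the guide's verbatim text: the raw-manifest URL is assumed to mirror the `cpu-deployment.yaml` URL cited in the second hunk, combined with the `./config/manifests/vllm/gpu-deployment.yaml` path the guide itself references.

```bash
export HF_TOKEN=<your-token>   # must grant access to meta-llama/Llama-3.1-8B-Instruct

# Secret consumed by the sample deployment to pull the gated model.
kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN

# Deploy the sample vLLM servers (3 GPU replicas by default; adjust replicas in the manifest as needed).
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml

# Model download and vLLM startup can take several minutes; watch until the pods are Ready.
kubectl get pods -w
```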
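The new third option added by this change lets you exercise the gateway without GPUs, since the simulator only pretends to serve [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). A minimal sketch, assuming the change ships a simulator manifest alongside the GPU and CPU ones; the file name `sim-deployment.yaml` is an assumption, not taken from this diff.

```bash
# Assumed manifest name; confirm the actual file added under config/manifests/vllm/.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml

# The simulator advertises meta-llama/Llama-3.1-8B-Instruct but generates simulated responses,
# so it should need neither GPUs nor the hf-token secret.
kubectl get pods -w
```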
````diff
@@ -49,10 +53,22 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
     After running multiple configurations of these values we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially may even get better response times. For modifying the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
 
     Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
````
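The CPU tab follows the same pattern as the GPU one: tune the manifest, then apply it. A rough sketch using the `cpu-deployment.yaml` URL cited in the hunk above; the surrounding commands are an assumption about the guide's flow, not quoted from it.

```bash
# Optionally edit the resources in cpu-deployment.yaml first
# (the sample requests 12 CPUs and 9.5GB of memory per replica), then apply it.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml

# CPU-based startup is slow; watch until the pods are Ready.
kubectl get pods -w
```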