=== "GPU-Based Model Server"

    For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
    Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.

    Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.

    ```bash
    kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to the set of Llama models
    ```
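    To complete the rollout, the GPU deployment manifest referenced above still needs to be applied. A minimal sketch of that step, assuming the manifest is published at the same location pattern as the `cpu-deployment.yaml` link later in this guide:

    ```bash
    # Assumed URL, mirroring the cpu-deployment.yaml link used elsewhere in this guide.
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml
    ```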

=== "CPU-Based Model Server"

    CPU deployment can be unreliable, i.e. the pods may crash or restart because of resource constraints.

    This setup uses the official `vllm-cpu` image, which, according to its documentation, can run vLLM on an x86 CPU platform.
    For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.

    While it is possible to deploy the model server with fewer resources, this is not recommended. For example, in our tests, loading the model with 8GB of memory and 1 CPU was possible but took almost 3.5 minutes, and inference requests took an unreasonably long time. In general, there is a tradeoff between the memory and CPU allocated to the pods and the performance: the more memory and CPU we allocate, the better the performance we can get.

    After trying several combinations of these values, we settled in this sample on 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and may get even better response times. To modify the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
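    One possible workflow for changing those values (a sketch; the internal structure of the manifest is not shown in this guide, so which fields to edit is an assumption) is to download the manifest, edit the container resource requests and limits, and use the local copy in the deployment step below:

    ```bash
    # Fetch the CPU deployment manifest so the memory/CPU values can be edited locally.
    curl -sLo cpu-deployment.yaml \
      https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
    # Edit the container resource requests/limits in the downloaded file, then apply
    # your local copy in the deployment step below instead of the upstream URL.
    ```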

    Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
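    A minimal sketch of that step, assuming the Hugging Face token secret from the GPU instructions is also needed here and that the CPU manifest linked above is applied directly (substitute your locally edited copy if you changed the resource values):

    ```bash
    # Assumes the same Hugging Face token secret as in the GPU-based setup.
    kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to the set of Llama models
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
    ```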