
Commit e812885

fix: Correct indentation for MkDocs Admonitions display issue (#1643)

Authored by Kay Yan
Signed-off-by: Kay Yan <[email protected]>

1 parent: e6930c6
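The admonition and content-tab syntax this guide relies on comes from Markdown extensions rather than core MkDocs. A Material for MkDocs site typically enables them in `mkdocs.yml` roughly like this (a sketch of a common configuration, not necessarily the one this repository uses):

```yaml
markdown_extensions:
  - admonition            # !!! and ??? callout blocks
  - pymdownx.details      # makes ???+ admonitions collapsible
  - pymdownx.superfences  # fenced code blocks inside tabs and admonitions
  - pymdownx.tabbed:
      alternate_style: true   # === "..." content tabs
```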

File tree: 1 file changed

site-src/guides/index.md

Lines changed: 23 additions & 23 deletions
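Material for MkDocs renders `=== "..."` content tabs and `???+` admonitions correctly only when the nested content is indented in four-space steps; with a smaller offset the body falls out of the callout and is shown as plain text. Below is a minimal sketch of the expected layout (the tab title, warning text, and command are illustrative):

````markdown
=== "CPU-Based Model Server"

    Content that belongs to the tab is indented four spaces.

    ???+ warning

        The admonition body is indented a further four spaces (eight in
        total); with less indentation MkDocs renders it as plain text
        outside the callout.

    ```bash
    kubectl apply -f cpu-deployment.yaml   # fenced blocks keep the tab's four-space offset
    ```
````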
The 23 removed lines below are re-added with corrected indentation; the text of each line is unchanged.

```diff
@@ -37,45 +37,45 @@ Tooling:
 
 === "GPU-Based Model Server"
 
-For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
-Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.
+For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
+Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.
 
-Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
+Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
 
-```bash
-kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to the set of Llama models
-kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml
-```
+```bash
+kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to the set of Llama models
+kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml
+```
 
 === "CPU-Based Model Server"
 
-???+ warning
+???+ warning
 
-CPU deployment can be unreliable i.e. the pods may crash/restart because of resource contraints.
+CPU deployment can be unreliable i.e. the pods may crash/restart because of resource contraints.
 
-This setup is using the formal `vllm-cpu` image, which according to the documentation can run vLLM on x86 CPU platform.
-For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
+This setup is using the formal `vllm-cpu` image, which according to the documentation can run vLLM on x86 CPU platform.
+For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
 
-While it is possible to deploy the model server with less resources, this is not recommended. For example, in our tests, loading the model using 8GB of memory and 1 CPU was possible but took almost 3.5 minutes and inference requests took unreasonable time. In general, there is a tradeoff between the memory and CPU we allocate to our pods and the performance. The more memory and CPU we allocate the better performance we can get.
+While it is possible to deploy the model server with less resources, this is not recommended. For example, in our tests, loading the model using 8GB of memory and 1 CPU was possible but took almost 3.5 minutes and inference requests took unreasonable time. In general, there is a tradeoff between the memory and CPU we allocate to our pods and the performance. The more memory and CPU we allocate the better performance we can get.
 
-After running multiple configurations of these values we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially may even get better response times. For modifying the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
+After running multiple configurations of these values we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially may even get better response times. For modifying the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
 
-Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
+Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
 
-```bash
-kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
-```
+```bash
+kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
+```
 
 === "vLLM Simulator Model Server"
 
-This option uses the [vLLM simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) to simulate a backend model server.
-This setup uses the least amount of compute resources, does not require GPU's, and is ideal for test/dev environments.
+This option uses the [vLLM simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) to simulate a backend model server.
+This setup uses the least amount of compute resources, does not require GPU's, and is ideal for test/dev environments.
 
-To deploy the vLLM simulator, run the following command.
+To deploy the vLLM simulator, run the following command.
 
-```bash
-kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml
-```
+```bash
+kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml
+```
 
 ### Install the Inference Extension CRDs
 
```
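A rendering fix like this is easiest to confirm by previewing the docs site locally. A sketch assuming the standard Material for MkDocs toolchain (this repository's actual docs dependencies may differ):

```bash
# Assumes mkdocs-material supplies the theme and pymdownx extensions;
# check the repository's own docs requirements before relying on this.
pip install mkdocs-material
mkdocs serve   # serves the site at http://127.0.0.1:8000 with live reload
```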
