
Commit f9ee531

Docs: Versions the quickstart guide
Signed-off-by: Daneyon Hansen <[email protected]>
1 parent 5a5f552 commit f9ee531

13 files changed: +548 −155 lines

mkdocs.yml

Lines changed: 2 additions & 1 deletion
@@ -64,7 +64,8 @@ nav:
   - FAQ: faq.md
   - Guides:
     - User Guides:
-      - Getting started: guides/index.md
+      - Getting started (Released): guides/index.md
+      - Getting started (Latest/Main): guides/getting-started-latest.md
       - Use Cases:
         - Serve Multiple GenAI models: guides/serve-multiple-genai-models.md
       - Rollout:

site-src/_includes/bbr.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
### Deploy the Body Based Router Extension (Optional)

This guide has shown how to get started with serving a single base model per L7 URL path. If you want to continue on to model-aware routing, where more than one base model is served at the same L7 URL path, you will need the optional Body Based Routing (BBR) extension, which is described in the [`Serving Multiple GenAI Models`](serve-multiple-genai-models.md) section of the documentation. To try it, retain the setup you have deployed so far and continue with the additional steps in [that guide](serve-multiple-genai-models.md); otherwise, move on to the following section to clean up your setup.
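For context, BBR makes its routing decision from the model name carried in the request body rather than from the URL path. A minimal sketch of the kind of request this enables, assuming the Gateway is reachable at `$GW_IP` and using an illustrative model name:

```bash
# Hypothetical request: different base models can share the same /v1/completions
# path because BBR reads the "model" field from the JSON body.
curl -s http://${GW_IP}/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello", "max_tokens": 10}'
```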

site-src/_includes/epp.md

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
=== "GKE"

    ```bash
    export GATEWAY_PROVIDER=gke
    helm install vllm-llama3-8b-instruct \
      --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
      --set provider.name=$GATEWAY_PROVIDER \
      --version $IGW_CHART_VERSION \
      oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool
    ```

=== "Istio"

    ```bash
    export GATEWAY_PROVIDER=istio
    helm install vllm-llama3-8b-instruct \
      --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
      --set provider.name=$GATEWAY_PROVIDER \
      --version $IGW_CHART_VERSION \
      oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool
    ```

=== "Kgateway"

    ```bash
    export GATEWAY_PROVIDER=none
    helm install vllm-llama3-8b-instruct \
      --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
      --set provider.name=$GATEWAY_PROVIDER \
      --version $IGW_CHART_VERSION \
      oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool
    ```

=== "Agentgateway"

    ```bash
    export GATEWAY_PROVIDER=none
    helm install vllm-llama3-8b-instruct \
      --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
      --set provider.name=$GATEWAY_PROVIDER \
      --version $IGW_CHART_VERSION \
      oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool
    ```
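After the install, one way to confirm the release and the InferencePool it creates came up is a quick Helm and kubectl check; a minimal sketch, assuming the release name used in the commands above:

```bash
# Show the status of the Helm release installed above.
helm status vllm-llama3-8b-instruct

# The chart creates an InferencePool; listing the CRD instances confirms it exists
# (an InferencePool named after the release is an assumption).
kubectl get inferencepools
```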

site-src/_includes/infobj.md

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
??? example "Experimental"

    This project is still in an alpha state and breaking changes may occur in the future.

This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!

site-src/_includes/intro.md

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
??? example "Experimental"

    This project is still in an alpha state and breaking changes may occur in the future.

This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
=== "CPU-Based Model Server"

    ???+ warning

        CPU deployment can be unreliable, i.e. the pods may crash or restart because of resource constraints.

    This setup uses the official `vllm-cpu` image, which according to the documentation can run vLLM on the x86 CPU platform.
    For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.

    While it is possible to deploy the model server with fewer resources, this is not recommended. For example, in our tests, loading the model with 8GB of memory and 1 CPU was possible but took almost 3.5 minutes, and inference requests took an unreasonable amount of time. In general, there is a tradeoff between the memory and CPU allocated to the pods and the performance: the more memory and CPU we allocate, the better performance we can get.

    After running multiple configurations of these values, we settled in this sample on 9.5GB of memory and 12 CPUs per replica, which gives reasonable response times. You can increase those numbers and potentially get even better response times. To modify the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.

    Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
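A minimal sketch of that deployment step, applying the CPU manifest referenced above:

```bash
# Apply the sample CPU-based vLLM deployment referenced in this tab.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
```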
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
=== "GPU-Based Model Server"

    For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas as needed.
    Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
    Ensure that the token grants access to this model.

    Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
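A minimal sketch of those two steps, assuming `HF_TOKEN` holds your Hugging Face token; the secret name, key, and manifest path are assumptions and should match the deployment you apply:

```bash
# Create the Hugging Face token secret the sample deployment is assumed to mount.
kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN

# Apply the sample GPU-based vLLM deployment (path is an assumption based on the repo layout).
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml
```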
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
=== "vLLM Simulator Model Server"

    This option uses the [vLLM simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) to simulate a backend model server.
    This setup uses the least amount of compute resources, does not require GPUs, and is ideal for test/dev environments.

    To deploy the vLLM simulator, run the following command.
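A minimal sketch of that command, assuming the simulator manifest lives alongside the other vLLM manifests in the repo (the path is an assumption):

```bash
# Apply the sample vLLM simulator deployment (manifest path is an assumption).
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml
```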

site-src/_includes/model-server.md

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
Three options are supported for running the model server:

1. GPU-based model server.
    Requirements: a Hugging Face access token that grants access to the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).

1. CPU-based model server (not using GPUs).
    The sample uses the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).

1. [vLLM Simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) model server (not using GPUs).
    The sample is configured to simulate the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model.

Choose one of these options and follow the steps below. Please do not deploy more than one, as the deployments share the same name and will overwrite each other.

=== "GPU-Based Model Server"

    For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
    Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.

    Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
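Whichever option you choose, it is worth confirming the model server pods are ready before moving on. A minimal sketch, assuming the pods carry the `app=vllm-llama3-8b-instruct` label used as the InferencePool `matchLabels` value elsewhere in this commit:

```bash
# Wait for the model server pods to become ready
# (the label selector is an assumption based on the matchLabels value above).
kubectl wait --for=condition=Ready pod -l app=vllm-llama3-8b-instruct --timeout=300s
```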

site-src/_includes/prereqs.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
A cluster with:

- Support for services of type `LoadBalancer`. For kind clusters, follow [this guide](https://kind.sigs.k8s.io/docs/user/loadbalancer)
  to get services of type LoadBalancer working.
- Support for [sidecar containers](https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/) (enabled by default since Kubernetes v1.29)
  to run the model server deployment.

Tooling:

- [Helm](https://helm.sh/docs/intro/install/) installed.
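A quick way to sanity-check these prerequisites before starting; a minimal sketch (the guide does not prescribe exact version numbers):

```bash
# Confirm the cluster is reachable and check the server version
# (sidecar containers are enabled by default from v1.29).
kubectl version

# Confirm Helm is installed.
helm version
```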
