---
title: Gateway API Inference Extension
weight: 800
toc: true
type: how-to
product: NGF
nd-docs: DOCS-0000
---

Learn how to use NGINX Gateway Fabric with the Gateway API Inference Extension to optimize traffic routing to self-hosted Generative AI Models on Kubernetes.

## Overview

The [Gateway API Inference Extension](https://gateway-api-inference-extension.sigs.k8s.io/) is an official Kubernetes project that aims to provide optimized load balancing for self-hosted Generative AI Models on Kubernetes.
The project's goal is to improve and standardize routing to inference workloads across the ecosystem.

Coupled with the provided Endpoint Picker Service, NGINX Gateway Fabric becomes an [Inference Gateway](https://gateway-api-inference-extension.sigs.k8s.io/#concepts-and-definitions), with additional AI-specific traffic management features such as model-aware routing, serving priority for models, model rollouts, and more.

{{< call-out "warning" >}} The Gateway API Inference Extension is still in alpha status and should not be used in production yet.{{< /call-out >}}

## Setup

- Install the Gateway API Inference Extension CRDs:

  ```shell
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/latest/download/manifests.yaml
  ```

- To enable the Gateway API Inference Extension, [install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric with one of these modifications:
  - Using Helm: set the `nginxGateway.gwAPIInferenceExtension.enable=true` Helm value.
  - Using Kubernetes manifests: add the `--gateway-api-inference-extension` flag to the nginx-gateway container arguments, and update the ClusterRole RBAC rules to cover the `inferencepools` resources:

    ```yaml
    - apiGroups:
      - inference.networking.k8s.io
      resources:
      - inferencepools
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - inference.networking.k8s.io
      resources:
      - inferencepools/status
      verbs:
      - update
    ```

See this [example manifest](https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/main/deploy/inference/deploy.yaml) for clarification.
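
When enabling the extension through manifests, the Deployment change might look like the following sketch. Only the added flag comes from this guide; the surrounding fields are placeholders for whatever your existing NGINX Gateway Fabric Deployment already sets:

```yaml
# Sketch: the nginx-gateway container with the inference extension enabled.
# The other container fields and arguments are elided placeholders, not
# verbatim content from the example manifest.
containers:
- name: nginx-gateway
  args:
  # ...existing arguments...
  - --gateway-api-inference-extension
```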


## Deploy a sample model server

The [vLLM simulator](https://github.com/llm-d/llm-d-inference-sim/tree/main) model server does not use GPUs and is ideal for test and development environments. This sample is configured to simulate the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model. To deploy the vLLM simulator, run the following command:

```shell
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml
```
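
The labels on the simulator's pods are what an InferencePool selects on. As an illustration (the label value is assumed to mirror the selector used later in this guide, not quoted from the manifest), the relevant part of the pod template looks roughly like:

```yaml
# Illustrative sketch: pod template metadata that an InferencePool
# selector can match. Assumed label, not the manifest's verbatim content.
template:
  metadata:
    labels:
      app: vllm-llama3-8b-instruct
```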

## Deploy the InferencePool and Endpoint Picker Extension

The InferencePool is a Gateway API Inference Extension resource that represents a set of inference-focused Pods. With InferencePool, you can configure a routing extension as well as inference-specific routing optimizations. For more information on this resource, refer to the Gateway API Inference Extension [InferencePool documentation](https://gateway-api-inference-extension.sigs.k8s.io/api-types/inferencepool/).

Install an InferencePool named `vllm-llama3-8b-instruct` that selects endpoints with the label `app: vllm-llama3-8b-instruct` listening on port 8000. The Helm install command automatically installs both the Endpoint Picker Extension and the InferencePool.

NGINX queries the Endpoint Picker Extension to determine the appropriate pod endpoint to route traffic to. These pods are selected from the pool of ready pods designated by the assigned InferencePool's Selector field. For more information, see the [Endpoint Picker documentation](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/pkg/epp/README.md).

{{< call-out "warning" >}} The Endpoint Picker Extension is a third-party application written and provided by the Gateway API Inference Extension project. The communication between NGINX and the Endpoint Picker Extension does not currently have TLS support, so it is an insecure connection. The Gateway API Inference Extension is still in alpha status and should not be used in production yet. NGINX Gateway Fabric is not responsible for any threats or risks associated with using this third-party Endpoint Picker Extension application. {{< /call-out >}}

```shell
export IGW_CHART_VERSION=v1.0.1
helm install vllm-llama3-8b-instruct \
  --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
  --version $IGW_CHART_VERSION \
  oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool
```
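
The chart renders an InferencePool resource along with the Endpoint Picker Deployment and Service. As a rough illustration of that resource's shape only: the field names below follow the `inference.networking.k8s.io` API but may differ from the chart's exact output, and the `endpointPickerRef` name is an assumption. Verify the authoritative resource with `helm template`:

```yaml
# Illustrative sketch only: an InferencePool selecting the sample model
# server pods on port 8000 and referencing an Endpoint Picker service.
# The endpointPickerRef name is assumed, not taken from the chart.
apiVersion: inference.networking.k8s.io/v1
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  selector:
    matchLabels:
      app: vllm-llama3-8b-instruct
  targetPorts:
  - number: 8000
  endpointPickerRef:
    name: vllm-llama3-8b-instruct-epp
```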

## Deploy an Inference Gateway

1. Deploy the Inference Gateway:

   ```shell
   kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/inference/gateway.yaml
   ```

   Confirm that the Gateway was assigned an IP address and reports a `Programmed=True` status:

   ```shell
   kubectl describe gateway inference-gateway
   ```

   Save the public IP address and port of the NGINX Service into shell variables:

   ```text
   GW_IP=XXX.YYY.ZZZ.III
   GW_PORT=<port number>
   ```

2. Deploy the HTTPRoute:

   ```shell
   kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/snippets-filter/httproute.yaml
   ```

   Confirm that the HTTPRoute status conditions include `Accepted=True` and `ResolvedRefs=True`:

   ```shell
   kubectl describe httproute llm-route
   ```
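
For orientation, the manifests applied above contain a Gateway and an HTTPRoute whose backend is the InferencePool rather than a regular Service. A rough sketch follows; the resource names come from this guide, but the listener details and route match are illustrative assumptions, not the manifests' verbatim content:

```yaml
# Illustrative sketch of the applied resources. The InferencePool
# backendRef group/kind is what makes this an inference route; listener
# and match details are assumed for illustration.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: inference-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - name: inference-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: inference.networking.k8s.io
      kind: InferencePool
      name: vllm-llama3-8b-instruct
```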

## Try it out

Send traffic to the Gateway:

```shell
curl -i $GW_IP:$GW_PORT/v1/completions -H 'Content-Type: application/json' -d '{
  "model": "food-review-1",
  "prompt": "Write as if you were a critic: San Francisco",
  "max_tokens": 100,
  "temperature": 0
}'
```

## Cleanup

1. Uninstall the InferencePool, InferenceObjective, and model server resources:

   ```shell
   helm uninstall vllm-llama3-8b-instruct
   kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferenceobjective.yaml --ignore-not-found
   kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml --ignore-not-found
   kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml --ignore-not-found
   kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/sim-deployment.yaml --ignore-not-found
   ```

2. Uninstall the Gateway API Inference Extension CRDs:

   ```shell
   kubectl delete -k https://github.com/kubernetes-sigs/gateway-api-inference-extension/config/crd --ignore-not-found
   ```

3. Uninstall the Inference Gateway and HTTPRoute:

   ```shell
   kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/inference/gateway.yaml
   kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/snippets-filter/httproute.yaml
   ```

4. Uninstall NGINX Gateway Fabric:

   ```shell
   helm uninstall ngf -n nginx-gateway
   ```

   If needed, replace `ngf` with your chosen release name.

5. Remove the namespace and NGINX Gateway Fabric CRDs:

   ```shell
   kubectl delete ns nginx-gateway
   kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/crds.yaml
   ```

6. Remove the Gateway API CRDs:

   {{< include "/ngf/installation/uninstall-gateway-api-resources.md" >}}

## See also

- [Gateway API Inference Extension Introduction](https://gateway-api-inference-extension.sigs.k8s.io/): introductory details about the project.
- [Gateway API Inference Extension API Overview](https://gateway-api-inference-extension.sigs.k8s.io/concepts/api-overview/): an overview of the API resources.
- [Gateway API Inference Extension User Guides](https://gateway-api-inference-extension.sigs.k8s.io/guides/): additional use cases and guides.
