### Deploy the InferencePool and Endpoint Picker Extension
Install an InferencePool named `vllm-llama3-8b-instruct` that selects endpoints with the label `app: vllm-llama3-8b-instruct` listening on port 8000. The Helm install command automatically installs the endpoint picker and the InferencePool, along with provider-specific resources.
Choose one of the following options to deploy an Inference Gateway.
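For orientation, the pool the chart creates can be pictured as a manifest roughly like the following sketch. This is illustrative, not chart output: the field names assume the `inference.networking.x-k8s.io/v1alpha2` `InferencePool` API, and the `extensionRef` name is hypothetical.

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  # Endpoints are selected by label, matching the model server pods.
  selector:
    app: vllm-llama3-8b-instruct
  # Port the selected model server endpoints listen on.
  targetPortNumber: 8000
  # Endpoint Picker (EPP) service the gateway consults when routing;
  # the name here is illustrative.
  extensionRef:
    name: vllm-llama3-8b-instruct-epp
```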
kubectl get httproute llm-route -o yaml
```
To install an InferencePool named `vllm-llama3-8b-instruct` that selects endpoints with the label `app: vllm-llama3-8b-instruct` listening on port 8000, run the following command:
```bash
export GATEWAY_PROVIDER=none # See [README](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/config/charts/inferencepool/README.md#configuration) for valid configurations