site-src/concepts/api-overview.md (14 additions & 2 deletions)
@@ -1,12 +1,24 @@
# API Overview

## Background
-The Gateway API Inference Extension project is an extension of the Kubernetes Gateway API for serving Generative AI models on Kubernetes. Gateway API Inference Extension facilitates standardization of APIs for Kubernetes cluster operators and developers running generative AI inference, while allowing flexibility for underlying gateway implementations (such as Envoy Proxy) to iterate on mechanisms for optimized serving of models.
+Gateway API Inference Extension optimizes self-hosting Generative AI Models on Kubernetes.
+It provides optimized load-balancing for self-hosted Generative AI Models on Kubernetes.
+The project’s goal is to improve and standardize routing to inference workloads across the ecosystem.

-<img src="/images/inference-overview.svg" alt="Overview of API integration" class="center" width="1000" />
+This is achieved by leveraging Envoy's [External Processing](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/ext_proc_filter) to extend any gateway that supports both ext-proc and [Gateway API](https://github.com/kubernetes-sigs/gateway-api) into an [inference gateway](../index.md#concepts-and-definitions).
+This extends popular gateways like Envoy Gateway, kgateway, and GKE Gateway into an [Inference Gateway](../index.md#concepts-and-definitions), supporting inference platform teams that self-host generative models (with a current focus on large language models) on Kubernetes.
+This integration makes it easy to expose and control access to your local [OpenAI-compatible chat completion endpoints](https://platform.openai.com/docs/api-reference/chat) to other workloads on or off the cluster, or to integrate your self-hosted models alongside model-as-a-service providers in higher-level **AI Gateways** like [LiteLLM](https://www.litellm.ai/), [Gloo AI Gateway](https://www.solo.io/products/gloo-ai-gateway), or [Apigee](https://cloud.google.com/apigee).

## API Resources
+Gateway API Inference Extension introduces two inference-focused API resources with distinct responsibilities, each aligning with a specific user persona in the Generative AI serving workflow.

+<img src="/images/inference-overview.svg" alt="Overview of API integration" class="center" width="1000" />
### InferencePool

InferencePool represents a set of Inference-focused Pods and an extension that will be used to route to them. Within the broader Gateway API resource model, this resource is considered a "backend". In practice, that means that you'd replace a Kubernetes Service with an InferencePool. This resource has some similarities to Service (a way to select Pods and specify a port), but has some unique capabilities. With InferencePool, you can configure a routing extension as well as inference-specific routing optimizations. For more information on this resource, refer to our [InferencePool documentation](/api-types/inferencepool) or go directly to the [InferencePool spec](/reference/spec/#inferencepool).
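
For illustration, a minimal InferencePool manifest might look like the sketch below. This is an assumption-laden example rather than text from the spec: the `inference.networking.x-k8s.io/v1alpha2` API version and the `vllm-llama3-8b-instruct`/`-epp` names are placeholders, and the exact field names should be checked against the [InferencePool spec](/reference/spec/#inferencepool).

```yaml
# Sketch of an InferencePool that selects model-server Pods and names the
# endpoint picker extension used to route to them (illustrative values).
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  # Pods backing this pool, analogous to a Service selector.
  selector:
    app: vllm-llama3-8b-instruct
  # Port the model servers listen on.
  targetPortNumber: 8000
  # Endpoint picker (EPP) that performs inference-aware endpoint selection.
  extensionRef:
    name: vllm-llama3-8b-instruct-epp
```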
site-src/guides/serve-multiple-lora-adapters.md (2 additions & 2 deletions)
@@ -1,11 +1,11 @@
# Serve LoRA adapters on a shared pool

A company wants to serve LLMs for document analysis and focuses on audiences in multiple languages, such as English and Spanish.
They have a fine-tuned LoRA adapter for each language, but need to efficiently use their GPU and TPU capacity.
-You can use Gateway API Inference Extension to deploy dynamic LoRA fine-tuned adapters for each language (for example, `english-bot` and `spanish-bot`) on a common base model and accelerator.
+You can use an Inference Gateway to deploy dynamic LoRA fine-tuned adapters for each language (for example, `english-bot` and `spanish-bot`) on a common base model and accelerator.
This lets you reduce the number of required accelerators by densely packing multiple models in a shared pool.

## How
-The following diagram illustrates how Gateway API Inference Extension serves multiple LoRA adapters on a shared pool.
+The following diagram illustrates how an Inference Gateway serves multiple LoRA adapters on a shared pool.


This example illustrates how you can densely serve multiple LoRA adapters with distinct workload performance objectives on a common InferencePool.
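
As a rough sketch of what that can look like, each adapter can be registered against the same pool with its own objective. The sketch below assumes the v1alpha2 `InferenceModel` API and a placeholder pool name (`shared-llm-pool`); field names and values are illustrative, not authoritative.

```yaml
# Two LoRA adapters served from one shared pool of accelerators (illustrative).
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: english-bot
spec:
  modelName: english-bot    # model name clients send in OpenAI-style requests
  criticality: Critical     # latency-sensitive workload
  poolRef:
    name: shared-llm-pool   # the common InferencePool
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: spanish-bot
spec:
  modelName: spanish-bot
  criticality: Standard
  poolRef:
    name: shared-llm-pool
```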
workloads. It simplifies the deployment, management, and observability of AI
+inference workloads.
+- **Inference Scheduler**: An extendable component that makes decisions about which endpoint is optimal (best cost / best performance) for an inference request based on `Metrics and Capabilities` from [Model Serving](/docs/proposals/003-model-server-protocol/README.md).
+- **Metrics and Capabilities**: Data provided by model serving platforms about performance, availability, and capabilities to optimize routing. Includes things like [Prefix Cache] status or [LoRA Adapters] availability.
+- **Endpoint Picker (EPP)**: An implementation of an `Inference Scheduler` with additional Routing, Flow, and Request Control layers to allow for sophisticated routing strategies. Additional info on the architecture of the EPP is available [here](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/docs/proposals/0683-epp-architecture-proposal).

+[Inference Gateway]: #concepts-and-definitions

## Key Features
-Gateway API Inference Extension, along with a reference implementation in Envoy Proxy, provides the following key features:
+Gateway API Inference Extension optimizes self-hosting Generative AI Models on Kubernetes.
+It provides optimized load-balancing for self-hosted Generative AI Models on Kubernetes.
+The project’s goal is to improve and standardize routing to inference workloads across the ecosystem.

+This is achieved by leveraging Envoy's [External Processing](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/ext_proc_filter) to extend any gateway that supports both ext-proc and [Gateway API](https://github.com/kubernetes-sigs/gateway-api) into an [inference gateway](../index.md#concepts-and-definitions).
+This extends popular gateways like Envoy Gateway, kgateway, and GKE Gateway into an [Inference Gateway](../index.md#concepts-and-definitions), supporting inference platform teams that self-host generative models (with a current focus on large language models) on Kubernetes.
+This integration makes it easy to expose and control access to your local [OpenAI-compatible chat completion endpoints](https://platform.openai.com/docs/api-reference/chat) to other workloads on or off the cluster, or to integrate your self-hosted models alongside model-as-a-service providers in higher-level **AI Gateways** like [LiteLLM](https://www.litellm.ai/), [Gloo AI Gateway](https://www.solo.io/products/gloo-ai-gateway), or [Apigee](https://cloud.google.com/apigee).

-- **Model-aware routing**: Instead of simply routing based on the path of the request, Gateway API Inference Extension allows you to route to models based on the model names. This is enabled by support for GenAI Inference API specifications (such as OpenAI API) in the gateway implementations such as in Envoy Proxy. This model-aware routing also extends to Low-Rank Adaptation (LoRA) fine-tuned models.

-- **Serving priority**: Gateway API Inference Extension allows you to specify the serving priority of your models. For example, you can specify that your models for online inference of chat tasks (which is more latency sensitive) have a higher [*Criticality*](/reference/spec/#criticality) than a model for latency tolerant tasks such as a summarization.
+- **Model-aware routing**: Instead of simply routing based on the path of the request, an **[inference gateway]** allows you to route to models based on the model names. This is enabled by support for GenAI Inference API specifications (such as the OpenAI API) in gateway implementations such as Envoy Proxy. This model-aware routing also extends to Low-Rank Adaptation (LoRA) fine-tuned models.

-- **Model rollouts**: Gateway API Inference Extension allows you to incrementally roll out new model versions by traffic splitting definitions based on the model names.
+- **Serving priority**: An **[inference gateway]** allows you to specify the serving priority of your models. For example, you can specify that your models for online inference of chat tasks (which is more latency sensitive) have a higher [*Criticality*](/reference/spec/#criticality) than a model for latency-tolerant tasks such as summarization.

-- **Extensibility for Inference Services**: Gateway API Inference Extension defines extensibility pattern for additional Inference services to create bespoke routing capabilities should out of the box solutions not fit your needs.
+- **Model rollouts**: An **[inference gateway]** allows you to incrementally roll out new model versions by traffic splitting definitions based on the model names (see the sketch after this list).

+- **Extensibility for Inference Services**: An **[inference gateway]** defines an extensibility pattern for additional inference services to create bespoke routing capabilities should out-of-the-box solutions not fit your needs.

-- **Customizable Load Balancing for Inference**: Gateway API Inference Extension defines a pattern for customizable load balancing and request routing that is optimized for Inference. Gateway API Inference Extension provides a reference implementation of model endpoint picking leveraging metrics emitted from the model servers. This endpoint picking mechanism can be used in lieu of traditional load balancing mechanisms. Model Server-aware load balancing ("smart" load balancing as its sometimes referred to in this repo) has been proven to reduce the serving latency and improve utilization of accelerators in your clusters.
+- **Customizable Load Balancing for Inference**: An **[inference gateway]** defines a pattern for customizable load balancing and request routing that is optimized for inference. An **[inference gateway]** provides a reference implementation of model endpoint picking that leverages metrics emitted from the model servers. This endpoint picking mechanism can be used in lieu of traditional load balancing mechanisms. Model-server-aware load balancing ("smart" load balancing, as it's sometimes referred to in this repo) has been proven to reduce serving latency and improve utilization of accelerators in your clusters.
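
To make the serving priority and model rollout bullets concrete, here is a hedged sketch: a latency-sensitive model is marked `Critical`, and a new version receives a small slice of traffic by model-name-based splitting. The `food-review` name, the target model names, the weights, and the `shared-llm-pool` pool are hypothetical, and the fields assume the v1alpha2 `InferenceModel` API.

```yaml
# Hypothetical rollout: 90% of "food-review" traffic stays on v1, 10% canaries to v2.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: food-review
spec:
  modelName: food-review
  criticality: Critical    # served ahead of latency-tolerant models under load
  poolRef:
    name: shared-llm-pool
  targetModels:
  - name: food-review-v1
    weight: 90
  - name: food-review-v2
    weight: 10
```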

+By achieving these, the project aims to reduce latency and improve accelerator (GPU) utilization for AI workloads.

## API Resources

@@ -42,7 +69,7 @@ that are relevant to this project:
Gateway API has [more than 25 implementations](https://gateway-api.sigs.k8s.io/implementations/). As this pattern stabilizes, we expect a wide set of these implementations to support
-this project.
+this project and become an **[inference gateway]**.

### Endpoint Picker

@@ -71,16 +98,16 @@ to any Gateway API users or implementers.
2. If the request should be routed to an InferencePool, the Gateway will forward the request information to the endpoint selection extension for that pool.

-3. The extension will fetch metrics from whichever portion of the InferencePool
+3. The inference gateway will fetch metrics from whichever portion of the InferencePool
endpoints can best achieve the configured objectives. Note that this kind of
-metrics probing may happen asynchronously, depending on the extension.
+metrics probing may happen asynchronously, depending on the inference gateway.

-4. The extension will instruct the Gateway which endpoint the request should be
+4. The inference gateway will instruct the Gateway which endpoint the request should be
routed to.

5. The Gateway will route the request to the desired endpoint.

-<img src="/images/request-flow.png" alt="Gateway API Inference Extension Request Flow" class="center" />
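
As a sketch of steps 1 and 2 (wiring a Gateway to an InferencePool so that matching requests are handed to the endpoint selection extension), an HTTPRoute can reference the pool as its backend. The `inference-gateway` and `vllm-llama3-8b-instruct` names are placeholders, and the `group`/`kind` values assume the project's extension API group.

```yaml
# Illustrative HTTPRoute: traffic accepted by the Gateway is routed to an
# InferencePool backend instead of a regular Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - name: inference-gateway            # the Gateway acting as an inference gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool
      name: vllm-llama3-8b-instruct    # pool whose extension picks the endpoint
```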