**Description**
The configurability for "/v1" was introduced in envoyproxy#1020. However, it is
an unnecessary configuration that no one has asked for so far, given that we
keep `rootPrefix` to address the separation concern between AIGatewayRoutes
and plain HTTPRoutes.

This partially reverts envoyproxy#1020 and removes the config so that we have
a simpler configuration overall. We can revisit this if anyone asks for it
later; if so, that will be the time to think about a Gateway-level CRD.
**Related Issues/PRs (if applicable)**
Follow-up on envoyproxy#1020
---------
Signed-off-by: Takeshi Yoneda <[email protected]>
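
After the revert, the endpoint configuration surface reduces to a single knob. A minimal sketch of the resulting Helm values, using only the names that appear in this PR's diff:

```yaml
# endpointConfig after this change: only rootPrefix remains configurable.
endpointConfig:
  # The prefix for all the routes served by the AI Gateway. Defaults to "/".
  # With the default, clients use base_url "http://<gateway-hostname>/v1".
  rootPrefix: "/"
```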
**`manifests/charts/ai-gateway-helm/values.yaml`** (5 additions, 13 deletions)
```diff
@@ -6,22 +6,14 @@
 # Default values for ai-gateway-helm.

 # Global configuration for the endpoints supported by the AI Gateway.
-#
-# By default, the AI Gateway will assume that the downstream client's OpenAI SDK will talk to the Gateway using the base_url
-# set to "http://<gateway-hostname>/v1" which has the default "/v1" prefix in the base_url.
-#
-# By using this configuration, you can change the prefix for the OpenAI endpoints as well as the future non-OpenAI endpoints.
-# For example, when you can configure the rootPrefix to "/ai" and the openAIPrefix to "/openai/v1" which will result in the
-# OpenAI endpoints being served at "http://<gateway-hostname>/ai/openai/v1". This *will* become useful when you add support for
-# other input schemas like Anthropic, Google Gemini, etc. and you want to serve them under a different prefix to avoid conflicts.
-# Follow the issues https://github.com/envoyproxy/ai-gateway/issues/847 as well as https://github.com/envoyproxy/ai-gateway/issues/948 for detail.
 endpointConfig:
   # The prefix for all the routes served by the AI Gateway. Defaulting to "/". All the generated routes will have this prefix.
+  #
+  # With the default "/", the AI Gateway will assume that the downstream client's OpenAI SDK will talk to the Gateway using the base_url
+  # set to "http://<gateway-hostname>/v1" which has the default "/v1" prefix in the base_url.
+  #
+  # This can be used for providing a separation between AIGatewayRoutes and normal HTTPRoutes when the top level "/v1/" is not desired.
   rootPrefix: "/"
-  # The prefix for the OpenAI endpoints. Defaulting to "/v1". This comes **after** the rootPrefix. E.g. if the rootPrefix is "/ai" and the openAIPrefix is "/v1",
-  # the OpenAI endpoints will be served at "/ai/v1" which requires the base_url set to "http://<gateway-hostname>/ai/v1".
-  openAIPrefix: "/v1"
-  # TODO: addr more input schemas. E.g. Anthropic https://github.com/envoyproxy/ai-gateway/issues/847
```
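
The comments in the diff above describe how a client's `base_url` is derived from `rootPrefix`. A small illustrative sketch of that composition (this helper is hypothetical and not part of the chart; it assumes the "/v1" OpenAI suffix is fixed after this change):

```python
def openai_base_url(gateway_host: str, root_prefix: str = "/") -> str:
    """Compose the base_url an OpenAI SDK client would point at the Gateway.

    Illustrative only: root_prefix mirrors the Helm chart's
    endpointConfig.rootPrefix, and "/v1" is the fixed OpenAI suffix.
    """
    # Normalize so the default "/" contributes nothing and "/ai" stays "/ai".
    prefix = root_prefix.rstrip("/")
    return f"http://{gateway_host}{prefix}/v1"


# Default rootPrefix "/": clients use the plain "/v1" base_url.
assert openai_base_url("my-gateway") == "http://my-gateway/v1"
# A custom rootPrefix separates AIGatewayRoutes from plain HTTPRoutes.
assert openai_base_url("my-gateway", "/ai") == "http://my-gateway/ai/v1"
```

With the default `rootPrefix` of `/`, this reproduces the `http://<gateway-hostname>/v1` base_url described in the comments.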
**`site/docs/capabilities/llm-integrations/supported-endpoints.md`** (0 additions, 5 deletions)
```diff
@@ -10,11 +10,6 @@ The Envoy AI Gateway provides OpenAI-compatible API endpoints for routing and ma

 The Envoy AI Gateway acts as a proxy that accepts OpenAI-compatible requests and routes them to various AI providers. While it maintains compatibility with the OpenAI API specification, it currently supports a subset of the full OpenAI API.

-:::tip
-`/v1` prefix on OpenAI API endpoints is configurable via Envoy AI Gateway installation options. The default is `/v1` unless specified otherwise.
-Please refer to the `endpointConfig` option in the [helm values file](https://github.com/envoyproxy/ai-gateway/blob/main/manifests/charts/ai-gateway-helm/values.yaml) for details.
-:::
```