8 changes: 8 additions & 0 deletions cmd/epp/runner/runner.go
@@ -106,6 +106,7 @@ var (
modelServerMetricsScheme = flag.String("model-server-metrics-scheme", "http", "Scheme to scrape metrics from pods")
modelServerMetricsHttpsInsecureSkipVerify = flag.Bool("model-server-metrics-https-insecure-skip-verify", true, "When using 'https' scheme for 'model-server-metrics-scheme', configure 'InsecureSkipVerify' (default to true)")
haEnableLeaderElection = flag.Bool("ha-enable-leader-election", false, "Enables leader election for high availability. When enabled, readiness probes will only pass on the leader.")
tracing = flag.Bool("tracing", true, "Enables emitting traces")

setupLog = ctrl.Log.WithName("setup")
)
@@ -141,6 +142,13 @@ func (r *Runner) Run(ctx context.Context) error {
flag.Parse()
initLogging(&opts)

if *tracing {
err := common.InitTracing(ctx, setupLog)
if err != nil {
return err
}
}

setupLog.Info("GIE build", "commit-sha", version.CommitSHA, "build-ref", version.BuildRef)

// Validate flags
67 changes: 43 additions & 24 deletions config/charts/inferencepool/README.md
@@ -166,30 +166,34 @@ $ helm uninstall pool-1

The following table lists the configurable parameters of the chart.

| **Parameter Name** | **Description** |
|----------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `inferencePool.apiVersion` | The API version of the InferencePool resource. Defaults to `inference.networking.k8s.io/v1`. This can be changed to `inference.networking.x-k8s.io/v1alpha2` to support older API versions. |
| `inferencePool.targetPortNumber`                   | Target port number for the vllm backends; used by the inference extension to scrape metrics. Defaults to `8000`.                                                                                                                                                                                                                      |
| `inferencePool.modelServerType`                    | Type of the model servers in the pool. Valid options: `vllm`, `triton-tensorrt-llm`. Defaults to `vllm`.                                                                                                                                                                                                                              |
| `inferencePool.modelServers.matchLabels` | Label selector to match vllm backends managed by the inference pool. |
| `inferenceExtension.replicas`                      | Number of replicas for the endpoint picker extension service. If more than one replica is used, EPP runs in HA active-passive mode. Defaults to `1`.                                                                                                                                                                                  |
| `inferenceExtension.image.name` | Name of the container image used for the endpoint picker. |
| `inferenceExtension.image.hub` | Registry URL where the endpoint picker image is hosted. |
| `inferenceExtension.image.tag` | Image tag of the endpoint picker. |
| `inferenceExtension.image.pullPolicy` | Image pull policy for the container. Possible values: `Always`, `IfNotPresent`, or `Never`. Defaults to `Always`. |
| `inferenceExtension.env` | List of environment variables to set in the endpoint picker container as free-form YAML. Defaults to `[]`. |
| `inferenceExtension.extraContainerPorts` | List of additional container ports to expose. Defaults to `[]`. |
| `inferenceExtension.extraServicePorts` | List of additional service ports to expose. Defaults to `[]`. |
| `inferenceExtension.flags`                         | List of flags passed through to the endpoint picker, e.g. `enable-pprof`, `grpc-port`. See [runner.go](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/cmd/epp/runner/runner.go) for the complete list.                                                                                                   |
| `inferenceExtension.affinity` | Affinity for the endpoint picker. Defaults to `{}`. |
| `inferenceExtension.tolerations` | Tolerations for the endpoint picker. Defaults to `[]`. |
| `inferenceExtension.monitoring.interval` | Metrics scraping interval for monitoring. Defaults to `10s`. |
| `inferenceExtension.monitoring.secret.name` | Name of the service account token secret for metrics authentication. Defaults to `inference-gateway-sa-metrics-reader-secret`. |
| `inferenceExtension.monitoring.prometheus.enabled` | Enable Prometheus ServiceMonitor creation for EPP metrics collection. Defaults to `false`. |
| `inferenceExtension.monitoring.gke.enabled` | Enable GKE monitoring resources (`PodMonitoring` and RBAC). Defaults to `false`. |
| `inferenceExtension.pluginsCustomConfig`           | Custom config passed to EPP as inline YAML.                                                                                                                                                                                                                                                                                           |
| `inferenceExtension.tracing.enabled` | Enables or disables OpenTelemetry tracing globally for the EndpointPicker. |
| `inferenceExtension.tracing.otelExporterEndpoint` | OpenTelemetry collector endpoint. |
| `inferenceExtension.tracing.sampling.sampler` | The trace sampler to use. Currently, only `parentbased_traceidratio` is supported. This sampler respects the parent span’s sampling decision when present, and applies the configured ratio for root spans. |
| `inferenceExtension.tracing.sampling.samplerArg` | Sampler-specific argument. For `parentbased_traceidratio`, this defines the base sampling rate for new traces (root spans), as a float string in the range [0.0, 1.0]. For example, "0.1" enables 10% sampling. |
| `provider.name` | Name of the Inference Gateway implementation being used. Possible values: [`none`, `gke`, or `istio`]. Defaults to `none`. |
| `provider.gke.autopilot` | Set to `true` if the cluster is a GKE Autopilot cluster. This is only used if `provider.name` is `gke`. Defaults to `false`. |

### Provider Specific Configuration

@@ -214,6 +218,21 @@ These are the options available to you with `provider.name` set to `istio`:
| `istio.destinationRule.host` | Custom host value for the destination rule. If not set, the default is derived from the EPP service name and release namespace to generate a valid service address. |
| `istio.destinationRule.trafficPolicy.connectionPool` | Configures the `connectionPool` settings of the traffic policy. |

#### OpenTelemetry

The EndpointPicker supports OpenTelemetry-based tracing. To enable trace collection, use the following configuration:
```yaml
inferenceExtension:
tracing:
enabled: true
otelExporterEndpoint: "http://localhost:4317"
sampling:
sampler: "parentbased_traceidratio"
samplerArg: "0.1"
```
Make sure that `otelExporterEndpoint` points to your OpenTelemetry collector endpoint.
Currently only the `parentbased_traceidratio` sampler is supported. You can adjust the base sampling ratio via `samplerArg` (e.g., `"0.1"` means 10% of traces will be sampled).
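When tracing is enabled, the chart's deployment template translates these values into the standard `OTEL_*` environment variables on the EPP container. An illustrative rendering with the chart defaults (the endpoint value is whatever you configure):

```yaml
# Sketch of the env vars the template injects when
# inferenceExtension.tracing.enabled is true; names match the template,
# values shown are the chart defaults.
env:
  - name: OTEL_SERVICE_NAME
    value: "gateway-api-inference-extension"
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://localhost:4317"
  - name: OTEL_TRACES_EXPORTER
    value: "otlp"
  - name: OTEL_TRACES_SAMPLER
    value: "parentbased_traceidratio"
  - name: OTEL_TRACES_SAMPLER_ARG
    value: "0.1"
```

These are the conventional OpenTelemetry SDK environment variables, so the EPP picks them up without any extra flags beyond `--tracing`.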

## Notes

This chart will only deploy an InferencePool and its corresponding EndpointPicker extension. Before installing the chart, please make sure that the inference extension CRDs are installed in the cluster. For more details, please refer to the [getting started guide](https://gateway-api-inference-extension.sigs.k8s.io/guides/).
30 changes: 30 additions & 0 deletions config/charts/inferencepool/templates/epp-deployment.yaml
@@ -62,6 +62,12 @@ spec:
- "--{{ .name }}"
- "{{ .value }}"
{{- end }}
- "--tracing"
{{- if .Values.inferenceExtension.tracing.enabled }}
- "true"
{{- else }}
- "false"
{{- end }}
ports:
- name: grpc
containerPort: 9002
@@ -101,6 +107,30 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if .Values.inferenceExtension.tracing.enabled }}
- name: OTEL_SERVICE_NAME
value: "gateway-api-inference-extension"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: {{ .Values.inferenceExtension.tracing.otelExporterEndpoint | default "http://localhost:4317" | quote }}
- name: OTEL_TRACES_EXPORTER
value: "otlp"
- name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_RESOURCE_ATTRIBUTES
value: 'k8s.namespace.name=$(NAMESPACE),k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME)'
- name: OTEL_TRACES_SAMPLER
value: {{ .Values.inferenceExtension.tracing.sampling.sampler | default "parentbased_traceidratio" | quote }}
- name: OTEL_TRACES_SAMPLER_ARG
value: {{ .Values.inferenceExtension.tracing.sampling.samplerArg | default "0.1" | quote }}
{{- end }}
{{- if .Values.inferenceExtension.env }}
{{- toYaml .Values.inferenceExtension.env | nindent 8 }}
{{- end }}
8 changes: 7 additions & 1 deletion config/charts/inferencepool/values.yaml
@@ -53,6 +53,12 @@ inferenceExtension:

gke:
enabled: false
tracing:
enabled: false
otelExporterEndpoint: "http://localhost:4317"
sampling:
sampler: "parentbased_traceidratio"
samplerArg: "0.1"

inferencePool:
targetPorts:
@@ -85,4 +91,4 @@ istio:
trafficPolicy: {}
# connectionPool:
# http:
# maxRequestsPerConnection: 256000
# maxRequestsPerConnection: 256000