
Commit 2e57ea2

Force user to specify modelServers etc. and update README
1 parent: 68a2272

3 files changed: 19 additions & 9 deletions


config/charts/inferencepool/README.md

Lines changed: 15 additions & 3 deletions
@@ -103,9 +103,21 @@ The following table list the configurable parameters of the chart.
 | `inferenceExtension.image.pullPolicy` | Image pull policy for the container. Possible values: `Always`, `IfNotPresent`, or `Never`. Defaults to `Always`. |
 | `inferenceExtension.extProcPort` | Port where the endpoint picker service is served for external processing. Defaults to `9002`. |
 | `inferenceExtension.env` | List of environment variables to set in the endpoint picker container as free-form YAML. Defaults to `[]`. |
-| `inferenceExtension.extraContainerPorts` | List of additional container ports to expose. Defaults to `[]`. |
-| `inferenceExtension.extraServicePorts` | List of additional service ports to expose. Defaults to `[]`. |
-| `inferenceExtension.logVerbosity` | Logging verbosity level for the endpoint picker. Defaults to `"3"`. |
+| `inferenceExtension.enablePprof` | Enables pprof endpoints for profiling and debugging. |
+| `inferenceExtension.modelServerMetricsPath` | Path to scrape model server metrics from. |
+| `inferenceExtension.modelServerMetricsScheme` | Scheme (`http` or `https`) used to scrape model server metrics. |
+| `inferenceExtension.modelServerMetricsPort` | Port to scrape model server metrics from. |
+| `inferenceExtension.modelServerMetricsHttpsInsecureSkipVerify` | When the `https` scheme is used for `model-server-metrics-scheme`, whether to skip TLS certificate verification (`InsecureSkipVerify`). Defaults to `true`. |
+| `inferenceExtension.secureServing` | Enables secure serving. Defaults to `true`. |
+| `inferenceExtension.healthChecking` | Enables health checking. |
+| `inferenceExtension.certPath` | The path to the certificate for secure serving. The certificate and private key files are assumed to be named `tls.crt` and `tls.key`, respectively. If not set, and `secureServing` is enabled, then a self-signed certificate is used. |
+| `inferenceExtension.refreshMetricsInterval` | Interval at which model server metrics are refreshed. |
+| `inferenceExtension.refreshPrometheusMetricsInterval` | Interval at which Prometheus metrics are flushed. |
+| `inferenceExtension.metricsStalenessThreshold` | Duration after which metrics are considered stale. This is used to determine if a pod's metrics are fresh enough. |
+| `inferenceExtension.totalQueuedRequestsMetric` | Prometheus metric for the number of queued requests. |
+| `inferenceExtension.extraContainerPorts` | List of additional container ports to expose. Defaults to `[]`. |
+| `inferenceExtension.extraServicePorts` | List of additional service ports to expose. Defaults to `[]`. |
+| `inferenceExtension.logVerbosity` | Logging verbosity level for the endpoint picker. Defaults to `"3"`. |
 | `provider.name` | Name of the Inference Gateway implementation being used. Possible values: `gke`. Defaults to `none`. |
 
 ## Notes
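
For illustration, a values override exercising several of the newly documented parameters might look like the sketch below; every value shown is illustrative, not a recommended default:

```yaml
# my-values.yaml — illustrative overrides for the new endpoint-picker knobs
inferenceExtension:
  enablePprof: true                          # expose pprof endpoints for profiling
  secureServing: true
  modelServerMetricsScheme: "https"          # scrape model server metrics over TLS
  modelServerMetricsPath: "/metrics"
  modelServerMetricsPort: 8000
  modelServerMetricsHttpsInsecureSkipVerify: true
  metricsStalenessThreshold: "2s"
```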

config/charts/inferencepool/templates/epp-deployment.yaml

Lines changed: 0 additions & 1 deletion
@@ -54,7 +54,6 @@ spec:
         - "--refresh-metrics-interval={{ .Values.inferenceExtension.refreshMetricsInterval }}"
         - "--refresh-prometheus-metrics-interval={{ .Values.inferenceExtension.refreshPrometheusMetricsInterval }}"
         - "--metrics-staleness-threshold={{ .Values.inferenceExtension.metricsStalenessThreshold }}"
-        - "--config-text={{ .Values.inferenceExtension.configText }}"
         {{- if eq (.Values.inferencePool.modelServerType | default "vllm") "triton-tensorrt-llm" }}
         - --total-queued-requests-metric
         - "nv_trt_llm_request_metrics{request_type=waiting}"

config/charts/inferencepool/values.yaml

Lines changed: 4 additions & 5 deletions
@@ -1,4 +1,5 @@
 inferenceExtension:
+  # Number of replicas
   replicas: 1
   image:
     name: epp
@@ -24,8 +25,6 @@ inferenceExtension:
   kvCacheUsagePercentageMetric: "vllm:gpu_cache_usage_perc"
   loraInfoMetric: "vllm:lora_requests_info"
   certPath: ""
-  configFile: ""
-  configText: ""
   metricsStalenessThreshold: "2s"
 
   pluginsConfigFile: "default-plugins.yaml"
@@ -62,9 +61,9 @@ inferenceExtension:
 inferencePool:
   targetPortNumber: 8000
   modelServerType: vllm # vllm, triton-tensorrt-llm
-  modelServers:
-    matchLabels:
-      app: vllm-llama3-8b-instruct
+  # modelServers:
+  #   matchLabels:
+  #     app: vllm-llama3-8b-instruct
 
 provider:
   name: none
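
Since `modelServers` is now commented out of the defaults, the chart no longer assumes a model server selector; as the commit message says, users are forced to specify one themselves, for example via a values override (the label value below is illustrative):

```yaml
# my-values.yaml — modelServers must now be specified explicitly
inferencePool:
  modelServers:
    matchLabels:
      app: vllm-llama3-8b-instruct   # label selecting your model server Pods
```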
