:_module-type: PROCEDURE

[id="configuring-metric-based-autoscaling_{context}"]
= Configuring metric-based autoscaling

[role="_abstract"]
While Knative-based autoscaling features are not available in standard deployment mode, you can enable metric-based autoscaling for an inference service in these deployments. This capability helps you efficiently manage accelerator resources, lower operational costs, and ensure that your inference services meet performance requirements.

To set up autoscaling for your inference service in standard deployments, you must install and configure the OpenShift Custom Metrics Autoscaler (CMA), which is based on Kubernetes Event-driven Autoscaling (KEDA). You can then use various model runtime metrics available in OpenShift Monitoring, such as KV cache utilization, Time to First Token (TTFT), and concurrency, to trigger autoscaling of your inference service.
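
For reference, vLLM-based runtimes typically expose these signals as Prometheus metrics such as `vllm:gpu_cache_usage_perc` (KV cache utilization), `vllm:time_to_first_token_seconds` (TTFT), and `vllm:num_requests_running` (concurrency). The exact metric names available to you depend on your model-serving runtime and its version.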

.Prerequisites
* You have cluster administrator privileges for your {openshift-platform} cluster.
* You have installed the CMA operator on your cluster. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-latest-version}/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator#nodes-cma-autoscaling-custom-install[Installing the custom metrics autoscaler].
+
[NOTE]
====
The `odh-controller` automatically creates the `TriggerAuthentication`, `ServiceAccount`, `Role`, `RoleBinding`, and `Secret` resources to allow CMA access to OpenShift Monitoring metrics. A sketch of the generated `TriggerAuthentication` resource is shown after this list.
====
* You have enabled User Workload Monitoring (UWM) for your cluster. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-latest-version}/html/monitoring/configuring-user-workload-monitoring[Configuring user workload monitoring].
* You have deployed a model on the single-model serving platform in standard deployment mode.
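
The following is a minimal sketch of the kind of `TriggerAuthentication` resource that the `odh-controller` generates, assuming token-based authentication against OpenShift Monitoring. The resource name matches the `authenticationRef` used later in this procedure; the secret name is a placeholder, and the exact fields that the controller creates may differ:

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: openshift-monitoring-metrics-auth
spec:
  secretTargetRef:
  # Placeholder: the token Secret created for the controller's ServiceAccount
  - parameter: bearerToken
    name: <metrics-reader-token-secret>
    key: token
----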

.Procedure

. Log in to the OpenShift console as a cluster administrator.
. In the *Administrator* perspective, click *Home* -> *Search*.
. Select the project where you have deployed your model.
. From the *Resources* dropdown menu, select *InferenceService*.
. Click the `InferenceService` for your deployed model and then click *YAML*.
. Under `spec.predictor`, define a metric-based autoscaling policy similar to the following example:
+
[source,yaml]
----
spec:
  predictor:
    # …
    minReplicas: 1
    maxReplicas: 5
    autoScaling:
      metrics:
      - type: External
        external:
          metric:
            backend: "prometheus"
            serverAddress: "http://<thanos-service>.<monitoring-namespace>.svc.cluster.local:9092"
            query: vllm:num_requests_waiting
          authenticationRef:
            name: openshift-monitoring-metrics-auth
          target:
            type: Value
            value: "2"
----
+
This example configures the inference service to autoscale between 1 and 5 replicas, based on the number of requests waiting to be processed as reported by the `vllm:num_requests_waiting` metric.
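+
You can scale on other runtime metrics by changing the query and target. For example, the following sketch scales on KV cache utilization instead of queue depth, assuming that your runtime exposes the `vllm:gpu_cache_usage_perc` metric as a ratio between 0 and 1:
+
[source,yaml]
----
          metric:
            backend: "prometheus"
            serverAddress: "http://<thanos-service>.<monitoring-namespace>.svc.cluster.local:9092"
            query: vllm:gpu_cache_usage_perc
          authenticationRef:
            name: openshift-monitoring-metrics-auth
          target:
            type: Value
            value: "0.8"
----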
. Click *Save*.
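
.Verification
* Confirm that CMA created the scaling resources for your inference service. A generated `ScaledObject` and its underlying `HorizontalPodAutoscaler` should appear in your project (the resource names may vary):
+
[source,terminal]
----
$ oc get scaledobject,hpa -n <project-name>
----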

//[role="_additional-resources"]
//.Additional resources
// link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/monitoring/index[Monitoring]