
Adds information about cooldown periods for trained model autoscaling in Serverless #2498

Open · wants to merge 5 commits into base: main
4 changes: 4 additions & 0 deletions in deploy-manage/autoscaling.md

@@ -41,6 +41,10 @@ The available resources of self-managed deployments are static, so trained model

Trained model autoscaling automatically adjusts the resources allocated to trained model deployments based on demand. This feature is available on all cloud deployments (ECE, ECK, ECH) and {{serverless-short}}. See [Trained model autoscaling](/deploy-manage/autoscaling/trained-model-autoscaling.md) for details.

::::{note}
{applies_to}`serverless: ga` In {{serverless-short}}, trained model deployments remain active for 24 hours after the last inference request. After that, they scale down to zero. When scaled up again, they stay active for 5 minutes before they can scale down. These cooldown periods prevent unnecessary scaling and ensure models are available when needed.
::::

Trained model autoscaling supports:
* Scaling trained model deployments

2 changes: 1 addition & 1 deletion in deploy-manage/autoscaling/trained-model-autoscaling.md

@@ -22,7 +22,7 @@ There are two ways to enable autoscaling:
* through APIs by enabling adaptive allocations
* in {{kib}} by enabling adaptive resources

- For {{serverless-short}} projects, trained model autoscaling is automatically enabled and cannot be disabled.
+ {applies_to}`serverless: ga` For {{serverless-short}} projects, trained model autoscaling is always enabled and cannot be turned off. Trained model deployments remain active for 24 hours after the last inference request before scaling down to zero. When scaled up again, they stay active for 5 minutes before they can scale down. These cooldown periods prevent unnecessary scaling and ensure models are available when needed.
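As a sketch of the API-based path mentioned above (the model ID and allocation bounds here are illustrative, not part of this PR), adaptive allocations can be enabled on an existing deployment with the update trained model deployment API:

```console
POST _ml/trained_models/my-elser-model/deployment/_update
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 4
  }
}
```

With `min_number_of_allocations` unset or `0`, the deployment is allowed to scale down to zero allocations when idle, which is the behavior the serverless cooldown periods described above apply to.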

::::{important}
To fully leverage model autoscaling in {{ech}}, {{ece}}, and {{eck}}, it is highly recommended to enable [{{es}} deployment autoscaling](../../deploy-manage/autoscaling.md).
::::
@@ -47,6 +47,10 @@ You can control costs using the following strategies:

* When starting or updating a trained model deployment, [Enable adaptive resources](../../autoscaling/trained-model-autoscaling.md#enabling-autoscaling-in-kibana-adaptive-resources) and set the VCU usage level to **Low**.
* When using the inference API for {{es}} or ELSER, [enable `adaptive_allocations`](../../autoscaling/trained-model-autoscaling.md#enabling-autoscaling-through-apis-adaptive-allocations).

::::{note}
{applies_to}`serverless: ga` In {{serverless-short}}, trained model deployments scale down to zero only after 24 hours without any inference requests. After scaling up, they remain active for 5 minutes before they can scale down again. During these cooldown periods, you will continue to be billed for the active resources.
::::
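The inference API route from the bullet above can be sketched as follows (the endpoint name and allocation bounds are illustrative): creating an ELSER endpoint with `adaptive_allocations` in its service settings lets it scale with demand rather than holding a fixed allocation count.

```console
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 4
    },
    "num_threads": 1
  }
}
```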

* **Indexing Strategies:** Consider your indexing strategies and how they might impact overall VCU usage and costs:
