Merged
Commits
63 commits
d37dccd
autoscaling overview edit
charlotte-hoblik Feb 21, 2025
7763ba7
autoscaling overview edit 2
charlotte-hoblik Feb 24, 2025
6f82354
fix links
charlotte-hoblik Feb 24, 2025
30ba95a
clean up files
charlotte-hoblik Feb 25, 2025
b1196dc
refine trained modell autoscaling
charlotte-hoblik Feb 25, 2025
8eae6ef
refine autoscaling-deciders
charlotte-hoblik Feb 25, 2025
b6b985b
add autoscaling deciders overview
charlotte-hoblik Feb 26, 2025
a84415b
cleanup
charlotte-hoblik Feb 26, 2025
d352aef
clean up files
charlotte-hoblik Feb 26, 2025
5a047cd
fix links
charlotte-hoblik Feb 26, 2025
cc453a7
fix screenshot
charlotte-hoblik Feb 26, 2025
fb910df
rename file
charlotte-hoblik Feb 26, 2025
ad4c634
fix broken link
charlotte-hoblik Feb 27, 2025
c009863
fix typos and adjust overview
charlotte-hoblik Feb 27, 2025
f9026ab
Update deploy-manage/autoscaling/autoscaling-deciders.md
charlotte-hoblik Feb 28, 2025
30c7a19
Update deploy-manage/autoscaling/autoscaling-deciders.md
charlotte-hoblik Feb 28, 2025
67aa2f5
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
7dc485d
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
0cdae4b
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
5d81059
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
97a14c7
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
a68df3b
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
340176e
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
6c1702a
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Feb 28, 2025
16e0535
Update deploy-manage/autoscaling.md
charlotte-hoblik Feb 28, 2025
a192525
Update deploy-manage/autoscaling.md
charlotte-hoblik Feb 28, 2025
5e22b6c
Update deploy-manage/autoscaling.md
charlotte-hoblik Feb 28, 2025
5ec4f46
Update deploy-manage/autoscaling.md
charlotte-hoblik Feb 28, 2025
42bfa5f
Update deploy-manage/autoscaling.md
charlotte-hoblik Feb 28, 2025
4018d51
remove anchors
charlotte-hoblik Feb 28, 2025
b028338
fix steplist
charlotte-hoblik Feb 28, 2025
dd85fc7
merge ech and ece api example
charlotte-hoblik Feb 28, 2025
90876be
fix link
charlotte-hoblik Feb 28, 2025
e2ab397
resize image
charlotte-hoblik Feb 28, 2025
3bb4f87
trained model autoscaling restructure
charlotte-hoblik Feb 28, 2025
5a4e3e7
change admonition
charlotte-hoblik Feb 28, 2025
fa506ae
restructure ece, ech autoscaling page
charlotte-hoblik Feb 28, 2025
10ead1b
fix mapped pages
charlotte-hoblik Feb 28, 2025
2886a68
moving autoscale examples
charlotte-hoblik Feb 28, 2025
d195fb7
restructure autoscaling
charlotte-hoblik Feb 28, 2025
aec6275
restructuring autoscaling in eck
charlotte-hoblik Feb 28, 2025
4845ad9
fix link
charlotte-hoblik Feb 28, 2025
0c3c781
Update deploy-manage/autoscaling/autoscaling-deciders.md
charlotte-hoblik Mar 3, 2025
4366b42
rename tables
charlotte-hoblik Mar 3, 2025
ae55053
include TM autoscale and serverless
charlotte-hoblik Mar 3, 2025
3e56145
cleanup trained model auto
charlotte-hoblik Mar 3, 2025
fbba685
TM autoscale on-prem
charlotte-hoblik Mar 3, 2025
82afd62
Update deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md
charlotte-hoblik Mar 5, 2025
4f36634
Update deploy-manage/autoscaling/autoscaling-in-eck.md
charlotte-hoblik Mar 5, 2025
38c49da
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
3138e2b
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
6a85950
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
c0dac5e
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
40e2cda
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
2ec6154
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
036afd9
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 5, 2025
d376b3b
restructuring autoscaling overview
charlotte-hoblik Mar 5, 2025
1ff1776
cleanup
charlotte-hoblik Mar 5, 2025
0151e80
set redirects
charlotte-hoblik Mar 5, 2025
40f8243
remove ECK from cloude console setting
charlotte-hoblik Mar 6, 2025
6bef87c
adaptive resources with ECK
charlotte-hoblik Mar 6, 2025
e11a51d
add note to TM autoscale
charlotte-hoblik Mar 6, 2025
bc59177
Update deploy-manage/autoscaling/trained-model-autoscaling.md
charlotte-hoblik Mar 6, 2025
84 changes: 34 additions & 50 deletions deploy-manage/autoscaling.md
@@ -1,70 +1,54 @@
---
mapped_urls:
- https://www.elastic.co/guide/en/cloud-heroku/current/ech-autoscaling.html
- https://www.elastic.co/guide/en/cloud/current/ec-autoscaling.html
- https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling.html
- https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-autoscaling.html
applies_to:
deployment:
ece: ga
ess: ga
eck: ga
serverless: all
---

# Autoscaling

% What needs to be done: Refine
The autoscaling feature adjusts resources based on demand. A deployment can use autoscaling to scale resources as needed, ensuring sufficient capacity to meet workload requirements. In {{ece}}, {{eck}}, and {{ech}} deployments, autoscaling follows predefined policies, while in {{serverless-short}}, it is fully managed and automatic.

% GitHub issue: https://github.com/elastic/docs-projects/issues/344
:::{tip} Serverless handles autoscaling for you
By default, {{serverless-full}} automatically scales your {{es}} resources based on your usage. You don't need to enable autoscaling.
:::

% Scope notes: Creating a new landing page and subheadings/pages for different deployment types. Merge content when appropriate
## Cluster autoscaling

% Use migrated content from existing pages that map to this page:
::::{admonition} Indirect use only
This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
::::

% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md
% Notes: 1 child
% - [ ] ./raw-migrated-files/cloud/cloud/ec-autoscaling.md
% Notes: 2 children
% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md
% Notes: 2 children
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-autoscaling.md
Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An Elasticsearch cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. Elasticsearch monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires.

% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
A policy is composed of a list of roles and a list of deciders. The policy governs the nodes matching the roles. The deciders provide independent estimates of the capacity required. See [Autoscaling deciders](../deploy-manage/autoscaling/autoscaling-deciders.md) for details on available deciders.
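For illustration, the warm-tier scenario above could be captured in a policy like the following sketch. The policy name `warm_tier_policy` is hypothetical, and the `reactive_storage` decider takes no settings, so it is referenced with an empty object:

```console
PUT /_autoscaling/policy/warm_tier_policy
{
  "roles": [ "data_warm" ],
  "deciders": {
    "reactive_storage": {}
  }
}
```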

$$$ec-autoscaling-intro$$$
Cluster autoscaling supports:
* Scaling machine learning nodes up and down.
* Scaling data nodes up based on storage.
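When scaling is needed, the capacity calculated by the deciders is surfaced through the autoscaling capacity API, which an operator or orchestrator can poll:

```console
GET /_autoscaling/capacity
```

The response lists, per policy, the current and required capacity, so the operator can decide which resources to add.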

$$$ec-autoscaling-factors$$$
## Trained model autoscaling

$$$ec-autoscaling-notifications$$$
:::{admonition} Trained model autoscaling for self-managed deployments
The available resources of self-managed deployments are static, so trained model autoscaling is not applicable. However, available resources are still segmented based on the settings described in this section.
:::

$$$ec-autoscaling-restrictions$$$
Trained model autoscaling automatically adjusts the resources allocated to trained model deployments based on demand. This feature is available on all cloud deployments (ECE, ECK, ECH) and {{serverless-short}}. See [Trained model autoscaling](/deploy-manage/autoscaling/trained-model-autoscaling.md) for details.

$$$ec-autoscaling-enable$$$
Trained model autoscaling supports:
* Scaling trained model deployments.

$$$ec-autoscaling-update$$$
::::{note}
Autoscaling is not supported on Debian 8.
::::

$$$ece-autoscaling-intro$$$
Find instructions on setting up and managing autoscaling, including supported environments, configuration options, and examples:

$$$ece-autoscaling-factors$$$

$$$ece-autoscaling-notifications$$$

$$$ece-autoscaling-restrictions$$$

$$$ece-autoscaling-enable$$$

$$$ece-autoscaling-update$$$

$$$ech-autoscaling-intro$$$

$$$ech-autoscaling-factors$$$

$$$ech-autoscaling-notifications$$$

$$$ech-autoscaling-restrictions$$$

$$$ech-autoscaling-enable$$$

$$$ech-autoscaling-update$$$

**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages:

* [/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md](/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md)
* [/raw-migrated-files/cloud/cloud/ec-autoscaling.md](/raw-migrated-files/cloud/cloud/ec-autoscaling.md)
* [/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md](/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-autoscaling.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-autoscaling.md)
* [Autoscaling in {{ece}} and {{ech}}](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md)
* [Autoscaling in {{eck}}](/deploy-manage/autoscaling/autoscaling-in-eck.md)
* [Autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md)
* [Trained model autoscaling](/deploy-manage/autoscaling/trained-model-autoscaling.md)
226 changes: 203 additions & 23 deletions deploy-manage/autoscaling/autoscaling-deciders.md
@@ -8,36 +8,216 @@ mapped_urls:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-frozen-existence-decider.html
- https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-machine-learning-decider.html
- https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-fixed-decider.html
applies_to:
ece:
eck:
ess:
---

# Autoscaling deciders
# Autoscaling deciders [autoscaling-deciders]

% What needs to be done: Refine
[Autoscaling](/deploy-manage/autoscaling.md) in Elasticsearch enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention.

% GitHub issue: https://github.com/elastic/docs-projects/issues/344
::::{admonition} Indirect use only
This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
::::

% Scope notes: Collapse to a single page, explain what deciders are
[Reactive storage decider](#autoscaling-reactive-storage-decider)
: Estimates required storage capacity of current data set. Available for policies governing data nodes.

% Use migrated content from existing pages that map to this page:
[Proactive storage decider](#autoscaling-proactive-storage-decider)
: Estimates required storage capacity based on current ingestion into hot nodes. Available for policies governing hot data nodes.

% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-deciders.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-reactive-storage-decider.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-proactive-storage-decider.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-shards-decider.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-storage-decider.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-existence-decider.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-machine-learning-decider.md
% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-fixed-decider.md
[Frozen shards decider](#autoscaling-frozen-shards-decider)
: Estimates required memory capacity based on the number of partially mounted shards. Available for policies governing frozen data nodes.

⚠️ **This page is a work in progress.** ⚠️
[Frozen storage decider](#autoscaling-frozen-storage-decider)
: Estimates required storage capacity as a percentage of the total data set of partially mounted indices. Available for policies governing frozen data nodes.

The documentation team is working to combine content pulled from the following pages:
[Frozen existence decider](#autoscaling-frozen-existence-decider)
: Estimates a minimum required frozen memory and storage capacity when any index is in the frozen [ILM](../../manage-data/lifecycle/index-lifecycle-management.md) phase.

* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-deciders.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-deciders.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-reactive-storage-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-reactive-storage-decider.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-proactive-storage-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-proactive-storage-decider.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-shards-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-shards-decider.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-storage-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-storage-decider.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-existence-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-existence-decider.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-machine-learning-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-machine-learning-decider.md)
* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-fixed-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-fixed-decider.md)
[Machine learning decider](#autoscaling-machine-learning-decider)
: Estimates required memory capacity based on machine learning jobs. Available for policies governing machine learning nodes.

[Fixed decider](#autoscaling-fixed-decider)
: Responds with a fixed required capacity. This decider is intended for testing only.

## Reactive storage decider [autoscaling-reactive-storage-decider]

The [autoscaling](../../deploy-manage/autoscaling.md) reactive storage decider (`reactive_storage`) calculates the storage required to contain the current data set. It signals that additional storage capacity is necessary when existing capacity has been exceeded (reactively).

The reactive storage decider is enabled for all policies governing data nodes and has no configuration options.

The decider relies partially on using [data tier preference](../../manage-data/lifecycle/data-tiers.md#data-tier-allocation) allocation rather than node attributes. In particular, scaling a data tier into existence (starting the first node in a tier) will result in starting a node in any data tier that is empty if not using allocation based on data tier preference. Using the [ILM migrate](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-lifecycle-actions/ilm-migrate.md) action to migrate between tiers is the preferred way of allocating to tiers and fully supports scaling a tier into existence.

## Proactive storage decider [autoscaling-proactive-storage-decider]

The [autoscaling](../../deploy-manage/autoscaling.md) proactive storage decider (`proactive_storage`) calculates the storage required to contain the current data set plus an estimated amount of expected additional data.

The proactive storage decider is enabled for all policies governing nodes with the `data_hot` role.

The estimation of expected additional data is based on past indexing that occurred within the `forecast_window`. Only indexing into data streams contributes to the estimate.

### Configuration settings [autoscaling-proactive-storage-decider-settings]

`forecast_window`
: (Optional, [time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) The window of time to use for forecasting. Defaults to 30 minutes.


### {{api-examples-title}} [autoscaling-proactive-storage-decider-examples]

This example puts an autoscaling policy named `my_autoscaling_policy`, overriding the proactive decider’s `forecast_window` to be 10 minutes.

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles" : [ "data_hot" ],
  "deciders": {
    "proactive_storage": {
      "forecast_window": "10m"
    }
  }
}
```

The API returns the following result:

```console-result
{
  "acknowledged": true
}
```

## Frozen shards decider [autoscaling-frozen-shards-decider]

The [autoscaling](../../deploy-manage/autoscaling.md) frozen shards decider (`frozen_shards`) calculates the memory required to search the current set of partially mounted indices in the frozen tier. Based on a required memory amount per shard, it calculates the necessary memory in the frozen tier.

### Configuration settings [autoscaling-frozen-shards-decider-settings]

`memory_per_shard`
: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
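
As a sketch, the default can be overridden in a policy governing frozen data nodes; the policy name and the `64mb` value are only illustrative:

```console
PUT /_autoscaling/policy/frozen_tier_policy
{
  "roles" : [ "data_frozen" ],
  "deciders": {
    "frozen_shards": {
      "memory_per_shard": "64mb"
    }
  }
}
```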

## Frozen storage decider [autoscaling-frozen-storage-decider]

The [autoscaling](../../deploy-manage/autoscaling.md) frozen storage decider (`frozen_storage`) calculates the local storage required to search the current set of partially mounted indices based on a percentage of the total data set size of such indices. It signals that additional storage capacity is necessary when existing capacity is less than the percentage multiplied by total data set size.

The frozen storage decider is enabled for all policies governing frozen data nodes.

### Configuration settings [autoscaling-frozen-storage-decider-settings]

`percentage`
: (Optional, number value) Percentage of local storage relative to the data set size. Defaults to 5.
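
As with the other frozen-tier deciders, the default can be overridden in a policy governing frozen data nodes. This sketch uses a hypothetical policy name and an illustrative value of 10 percent:

```console
PUT /_autoscaling/policy/frozen_tier_policy
{
  "roles" : [ "data_frozen" ],
  "deciders": {
    "frozen_storage": {
      "percentage": 10
    }
  }
}
```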

## Frozen existence decider [autoscaling-frozen-existence-decider]

The [autoscaling](../../deploy-manage/autoscaling.md) frozen existence decider (`frozen_existence`) ensures that once the first index enters the frozen ILM phase, the frozen tier is scaled into existence.

The frozen existence decider is enabled for all policies governing frozen data nodes and has no configuration options.

## Machine learning decider [autoscaling-machine-learning-decider]

The [autoscaling](../../deploy-manage/autoscaling.md) {{ml}} decider (`ml`) calculates the memory and CPU requirements to run {{ml}} jobs and trained models.

The {{ml}} decider is enabled for policies governing `ml` nodes.

::::{note}
For {{ml}} jobs to open when the cluster is not appropriately scaled, set `xpack.ml.max_lazy_ml_nodes` to the largest number of possible {{ml}} nodes (refer to [Advanced machine learning settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/machine-learning-settings.md#advanced-ml-settings) for more information). In {{ess}}, this is automatically set.
::::
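
On self-managed clusters the setting is dynamic, so it can be updated through the cluster settings API; in this sketch the value `3` is only an example:

```console
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.max_lazy_ml_nodes": 3
  }
}
```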


### Configuration settings [autoscaling-machine-learning-decider-settings]

Both `num_anomaly_jobs_in_queue` and `num_analytics_jobs_in_queue` are designed to delay a scale-up event. If the cluster is too small, these settings indicate how many jobs of each type can be unassigned from a node. Both settings are only considered for jobs that can be opened given the current scale. If a job is too large for any node size or if a job can’t be assigned without user intervention (for example, a user calling `_stop` against a real-time {{anomaly-job}}), the numbers are ignored for that particular job.

`num_anomaly_jobs_in_queue`
: (Optional, integer) Specifies the number of queued {{anomaly-jobs}} to allow. Defaults to `0`.

`num_analytics_jobs_in_queue`
: (Optional, integer) Specifies the number of queued {{dfanalytics-jobs}} to allow. Defaults to `0`.

`down_scale_delay`
: (Optional, [time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Specifies the time to delay before scaling down. Defaults to 1 hour. If a scale down is possible for the entire time window, then a scale down is requested. If the cluster requires a scale up during the window, the window is reset.


### {{api-examples-title}} [autoscaling-machine-learning-decider-examples]

This example creates an autoscaling policy named `my_autoscaling_policy` that overrides the default configuration of the {{ml}} decider.

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles" : [ "ml" ],
  "deciders": {
    "ml": {
      "num_anomaly_jobs_in_queue": 5,
      "num_analytics_jobs_in_queue": 3,
      "down_scale_delay": "30m"
    }
  }
}
```

The API returns the following result:

```console-result
{
  "acknowledged": true
}
```

## Fixed decider [autoscaling-fixed-decider]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::


::::{warning}
The fixed decider is intended for testing only. Do not use this decider in production.
::::


The [autoscaling](../../deploy-manage/autoscaling.md) `fixed` decider responds with a fixed required capacity. It is not enabled by default but can be enabled for any policy by explicitly configuring it.

### Configuration settings [_configuration_settings]

`storage`
: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) Required amount of node-level storage. Defaults to `-1` (disabled).

`memory`
: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) Required amount of node-level memory. Defaults to `-1` (disabled).

`processors`
: (Optional, float) Required number of processors. Defaults to disabled.

`nodes`
: (Optional, integer) Number of nodes to use when calculating capacity. Defaults to `1`.


### {{api-examples-title}} [autoscaling-fixed-decider-examples]

This example puts an autoscaling policy named `my_autoscaling_policy`, enabling and configuring the fixed decider.

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles" : [ "data_hot" ],
  "deciders": {
    "fixed": {
      "storage": "1tb",
      "memory": "32gb",
      "processors": 2.3,
      "nodes": 8
    }
  }
}
```

The API returns the following result:

```console-result
{
  "acknowledged": true
}
```