# Introduction to Auto Scaling
Auto scaling is an additional capability of Service Fabric that dynamically scales your services based on the load that they report, or based on their usage of resources. Auto scaling provides elasticity and enables provisioning of extra instances or partitions of your service on demand. The entire auto scaling process is automated and transparent: once you set up scaling policies on a service, there's no need for manual scaling operations at the service level. Auto scaling can be turned on either at service creation time, or at any time by updating the service.

A common scenario where auto scaling is useful is when the load on a particular service varies over time. For example, a service such as a gateway can scale based on the amount of resources necessary to handle incoming requests. Let's take a look at an example of what those scaling rules could look like:
* If all instances of my gateway are using more than two cores on average, then scale out the gateway service by adding one more instance. Do this addition every hour, but never have more than seven instances in total.

* If all instances of my gateway are using less than 0.5 cores on average, then scale the service in by removing one instance. Do this removal every hour, but never have fewer than three instances in total.

Auto scaling is supported for both containers and regular Service Fabric services. To use auto scaling, you need to be running version 6.2 or later of the Service Fabric runtime.

The rest of this article describes the scaling policies and the ways to enable or disable auto scaling, and gives examples of how to use this feature.
## Describing auto scaling
Auto scaling policies can be defined for each service in a Service Fabric cluster. Each scaling policy consists of two parts:
* **Scaling trigger** describes when scaling of the service is performed. Conditions that are defined in the trigger are checked periodically to determine if a service should be scaled or not.
* **Scaling mechanism** describes how scaling is performed when it's triggered. The mechanism is only applied when the conditions from the trigger are met.

All triggers that are currently supported work either with [logical load metrics](service-fabric-cluster-resource-manager-metrics.md), or with physical metrics like CPU or memory usage. Either way, Service Fabric monitors the reported load for the metric and evaluates the trigger periodically to determine if scaling is needed.

There are two mechanisms that are currently supported for auto scaling. The first one is meant for stateless services or for containers where auto scaling is performed by adding or removing [instances](service-fabric-concepts-replica-lifecycle.md). For both stateful and stateless services, auto scaling can also be performed by adding or removing named [partitions](service-fabric-concepts-partitioning.md) of the service.
> [!NOTE]
> Currently there is support for only one scaling policy per service, and only one scaling trigger per scaling policy.
## Average partition load trigger with instance based scaling
The first type of trigger is based on the load of instances in a stateless service partition. Metric loads are first smoothed to obtain the load for every instance of a partition, and then these values are averaged across all instances of the partition. There are three factors that determine when the service is scaled:
* _Lower load threshold_ is a value that determines when the service is **scaled in**. If the average load of all instances of the partition is lower than this value, then the service is scaled in.
* _Upper load threshold_ is a value that determines when the service is **scaled out**. If the average load of all instances of the partition is higher than this value, then the service is scaled out.
* _Scaling interval_ determines how often the trigger is checked. Once the trigger is checked, if scaling is needed the mechanism is applied. If scaling isn't needed, then no action is taken. In both cases, the trigger isn't checked again before the scaling interval expires again.

This trigger can be used only with stateless services (either stateless containers or Service Fabric services). When a service has multiple partitions, the trigger is evaluated for each partition separately, and each partition has the specified mechanism applied to it independently. Hence, the scaling behavior of the service's partitions can vary based on their load: at the same time, some partitions of the service might be scaled out, some might be scaled in, and some might not be scaled at all.

The only mechanism that can be used with this trigger is PartitionInstanceCountScaleMechanism. There are three factors that determine how this mechanism is applied:
* _Scale Increment_ determines how many instances are added or removed when the mechanism is triggered.
* _Maximum Instance Count_ defines the upper limit for scaling. If the number of instances of the partition reaches this limit, the service isn't scaled out, regardless of the load. It's possible to omit this limit by specifying a value of -1; in that case, the service is scaled out as much as possible (the limit is the number of nodes that are available in the cluster).
* _Minimum Instance Count_ defines the lower limit for scaling. If the number of instances of the partition reaches this limit, the service isn't scaled in, regardless of the load.
## Setting auto scaling policy for instance based scaling
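A scaling policy is attached to the service description when the service is created, or added later by updating the service. As a rough illustration, the following C# sketch sets up the gateway policy from the introduction: check average CPU usage every hour, scale out above two cores, scale in below 0.5 cores, and stay between three and seven instances. The sketch assumes the System.Fabric.Description type and property names shown here (AveragePartitionLoadScalingTrigger, PartitionInstanceCountScaleMechanism, ScalingPolicyDescription) and the built-in servicefabric:/_CpuCores resource metric; the application and service names are placeholders, so verify the exact API surface against the SDK reference for your runtime version.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;

// Describe the stateless service as usual; the names used here are placeholders.
var serviceDescription = new StatelessServiceDescription
{
    ApplicationName = new Uri("fabric:/MyApp"),
    ServiceName = new Uri("fabric:/MyApp/Gateway"),
    ServiceTypeName = "GatewayType",
    InstanceCount = 3,
    PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
};

// Trigger: evaluate the average CPU usage of the partition's instances once per hour.
var trigger = new AveragePartitionLoadScalingTrigger
{
    MetricName = "servicefabric:/_CpuCores", // built-in CPU metric; requires the resource monitor service
    LowerLoadThreshold = 0.5,                // scale in when the average drops below 0.5 cores
    UpperLoadThreshold = 2.0,                // scale out when the average exceeds two cores
    ScaleInterval = TimeSpan.FromHours(1)
};

// Mechanism: add or remove one instance at a time, staying between three and seven instances.
var mechanism = new PartitionInstanceCountScaleMechanism
{
    MinInstanceCount = 3,
    MaxInstanceCount = 7,
    ScaleIncrement = 1
};

serviceDescription.ScalingPolicies.Add(new ScalingPolicyDescription(mechanism, trigger));

// Scaling on resource metrics requires the service to run in its own (exclusive) process.
serviceDescription.ServicePackageActivationMode = ServicePackageActivationMode.ExclusiveProcess;

var fabricClient = new FabricClient();
await fabricClient.ServiceManager.CreateServiceAsync(serviceDescription);
```

The same kind of policy can also be declared in the application manifest or applied later through a service update; the thresholds above simply mirror the example gateway rules, so pick values that match your own metric.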
## Average service load trigger with partition based scaling
The second trigger is based on the load of all partitions of one service. Metric loads are first smoothed to obtain the load for every replica or instance of a partition. For stateful services, the load of the partition is considered to be the load of the primary replica, while for stateless services the load of the partition is the average load of all instances of the partition. These values are averaged across all partitions of the service, and that average is used to trigger the auto scaling. As with the previous trigger, there are three factors that determine when the service is scaled:
* _Lower load threshold_ is a value that determines when the service is **scaled in**. If the average load of all partitions of the service is lower than this value, then the service is scaled in.
* _Upper load threshold_ is a value that determines when the service is **scaled out**. If the average load of all partitions of the service is higher than this value, then the service is scaled out.
* _Scaling interval_ determines how often the trigger is checked. Once the trigger is checked, if scaling is needed the mechanism is applied. If scaling isn't needed, then no action is taken. In both cases, the trigger isn't checked again before the scaling interval expires again.

This trigger can be used with both stateful and stateless services. The only mechanism that can be used with this trigger is AddRemoveIncrementalNamedPartitionScalingMechanism. When the service is scaled out, a new partition is added, and when the service is scaled in, one of the existing partitions is removed. There are restrictions that are checked when the service is created or updated, and service creation or update fails if these conditions aren't met:
* Named partition scheme must be used for the service.
* Partition names must be consecutive integer numbers, like "0", "1", ...
* The first partition name must be "0".

For example, if a service is initially created with three partitions, the only valid possibility for partition names is "0", "1", and "2".

The actual auto scaling operation that is performed respects this naming scheme as well:
* If the current partitions of the service are named "0", "1", and "2", then the partition added for scaling out is named "3".
* If the current partitions of the service are named "0", "1", and "2", then the partition removed for scaling in is the partition named "2".

As with the mechanism that scales by adding or removing instances, there are three parameters that determine how this mechanism is applied:
* _Scale Increment_ determines how many partitions are added or removed when the mechanism is triggered.
* _Maximum Partition Count_ defines the upper limit for scaling. If the number of partitions of the service reaches this limit, the service isn't scaled out, regardless of the load. It's possible to omit this limit by specifying a value of -1; in that case, the service is scaled out as much as possible (the limit is the actual capacity of the cluster).
* _Minimum Partition Count_ defines the lower limit for scaling. If the number of partitions of the service reaches this limit, the service isn't scaled in, regardless of the load.

> [!WARNING]
> When AddRemoveIncrementalNamedPartitionScalingMechanism is used with stateful services, Service Fabric will add or remove partitions **without notification or warning**. Repartitioning of data isn't performed when the scaling mechanism is triggered. In a scale-out operation, new partitions are empty, and in a scale-in operation, **the partition is deleted together with all the data that it contains**.
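
As a rough companion to the earlier instance-based sketch, the following C# fragment attaches a partition-based policy to a service description that uses the named partition scheme. It relies on the same assumptions about the System.Fabric.Description type and property names (AverageServiceLoadScalingTrigger, AddRemoveIncrementalNamedPartitionScalingMechanism), and MetricA is a hypothetical custom metric reported by the service itself; adjust the names and thresholds to your own service.

```csharp
// Trigger: evaluate the average load of the hypothetical custom metric "MetricA"
// across all partitions of the service every 20 minutes.
var trigger = new AverageServiceLoadScalingTrigger
{
    MetricName = "MetricA",
    LowerLoadThreshold = 300.0,              // remove a partition when the average drops below this value
    UpperLoadThreshold = 600.0,              // add a partition when the average exceeds this value
    ScaleInterval = TimeSpan.FromMinutes(20)
};

// Mechanism: add or remove one named partition at a time, staying between one and four partitions.
// The service must use the named partition scheme with partitions "0", "1", ... as described above.
var mechanism = new AddRemoveIncrementalNamedPartitionScalingMechanism
{
    MinPartitionCount = 1,
    MaxPartitionCount = 4,
    ScaleIncrement = 1
};

// serviceDescription is the stateful or stateless service description being created,
// set up the same way as in the previous sketch but with a named partition scheme.
serviceDescription.ScalingPolicies.Add(new ScalingPolicyDescription(mechanism, trigger));
```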
0 commit comments