- 100% [Flow](https://flowtype.org/) static type checking coverage
## Disclaimer
## Getting started
Note: dynamodb-lambda-autoscale uses [Flow](https://flowtype.org/) extensively for static type
checking; we highly recommend using [Nuclide](https://nuclide.io/) when making modifications to the
code / configuration. Please see the respective websites for the advantages of each.
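For readers unfamiliar with Flow, here is a minimal illustration of the kind of annotation used
throughout the code base (the function itself is hypothetical, not taken from this project):

~~~~
// @flow
// Flow statically checks the parameter and return types below.
// 'utilisationPercent' is an illustrative example only.
function utilisationPercent(consumed: number, provisioned: number): number {
  return provisioned === 0 ? 0 : (consumed / provisioned) * 100;
}
~~~~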
1. Build and package the code
   1. Fork the repo
   2. Clone your fork
   3. Create a new file in the root folder called 'config.env.production'
   4. Put your AWS credentials into the file in the following format, only if you want to run a local test (this is not needed for lambda)

      ~~~~
      AWS_ACCESS_KEY_ID="###################"
      AWS_SECRET_ACCESS_KEY="###############"
      ~~~~

   5. Update [Region.json](./src/configuration/Region.json) to match the region of your DynamoDB instance
   6. Run 'npm install'
   7. Run 'npm run build'
   8. Verify this has created a 'dist.zip' file
   9. Optionally, run a local test by running 'npm run start'
## Running on AWS Lambda
1. Follow the steps in 'Getting started'
2. Create an AWS Policy and Role
   1. Create a policy called 'DynamoDBLambdaAutoscale'
   2. Use the following content to give access to DynamoDB, CloudWatch and Lambda logging

      ~~~~
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Action": [
              "dynamodb:ListTables",
              "dynamodb:DescribeTable",
              "dynamodb:UpdateTable",
              "cloudwatch:GetMetricStatistics",
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "*"
          }
        ]
      }
      ~~~~
   3. Create a role called 'DynamoDBLambdaAutoscale' (see the trust relationship sketch after this list)
   4. Attach the newly created policy to the role
3. Create an AWS Lambda function
   6. Set the Role to 'DynamoDBLambdaAutoscale'
   7. Set the Memory to the lowest value initially, but test different values at a later date to see how they affect performance
   8. Set the Timeout to approximately 5 seconds (higher or lower depending on the number of tables you have and the selected memory setting)
   9. Once the function is created, attach a 'scheduled event' event source and make it run every minute: Event Sources > Add Event Source > Event Type = CloudWatch Events - Schedule. Set the name to 'DynamoDBLambdaAutoscale' and the schedule expression to 'rate(1 minute)'
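When creating the role, note that it must trust the Lambda service to assume it. The standard trust
relationship document for this is AWS boilerplate (not part of this repo) and looks like:

~~~~
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
~~~~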
## Configuration
The default setup in [Provisioner.js](./src/Provisioner.js) allows for a quick, no touch setup.
A breakdown of the configuration behaviour is as follows:

- AWS region is set to 'us-east-1' via the [Region.json](./src/configuration/Region.json) configuration
- Autoscales all tables and indexes
- Autoscaling 'Strategy' settings are defined in [DefaultProvisioner.json](./src/configuration/DefaultProvisioner.json) and are as follows
  - Separate 'Read' and 'Write' capacity adjustment strategies
  - Separate asymmetric 'Increment' and 'Decrement' capacity adjustment strategies
  - Read/Write provisioned capacity is increased
    - when capacity utilisation > 90%
    - by 3 units or to 110% of the current consumed capacity, whichever is greater
    - with hard min/max limits of 1 and 10 respectively
  - Read/Write provisioned capacity is decreased
    - when capacity utilisation < 30% AND
    - when at least 60 minutes have passed since the last increment AND
    - when at least 60 minutes have passed since the last decrement AND
    - when the adjustment will be at least 5 units AND
    - when we are allowed to utilise 1 of our 4 AWS enforced decrements
    - to the consumed throughput value
    - with hard min/max limits of 1 and 10 respectively
## Strategy Settings
The strategy settings described above use a simple schema which applies to both Read/Write and to
both Increment/Decrement. Using the options below, many different strategies can be constructed
(a worked example follows the list):

- ReadCapacity.Max : (Optional) Define a maximum allowed capacity, otherwise unlimited
- ReadCapacity.Increment : (Optional) Define an increment strategy
- ReadCapacity.Increment.When : (Required) Define when capacity should be incremented
- ReadCapacity.Increment.When.UtilisationIsAbovePercent : (Optional) Define a percentage utilisation upper threshold at which capacity is subject to recalculation
- ReadCapacity.Increment.When.UtilisationIsBelowPercent : (Optional) Define a percentage utilisation lower threshold at which capacity is subject to recalculation (possible, but nonsensical, for increments)
- ReadCapacity.Increment.When.AfterLastIncrementMinutes : (Optional) Define a grace period after the previous increment in which capacity adjustments should not occur
- ReadCapacity.Increment.When.AfterLastDecrementMinutes : (Optional) Define a grace period after the previous decrement in which capacity adjustments should not occur
- ReadCapacity.Increment.When.UnitAdjustmentGreaterThan : (Optional) Define a minimum unit adjustment so that only capacity adjustments of a certain size are allowed
- ReadCapacity.Increment.By : (Optional) Define a 'relative' value to change the capacity by
- ReadCapacity.Increment.By.ConsumedPercent : (Optional) Define a 'relative' percentage adjustment based on the current ConsumedCapacity
- ReadCapacity.Increment.By.ProvisionedPercent : (Optional) Define a 'relative' percentage adjustment based on the current ProvisionedCapacity
- ReadCapacity.Increment.By.Units : (Optional) Define a 'relative' unit adjustment
- ReadCapacity.Increment.To : (Optional) Define an 'absolute' value to change the capacity to
- ReadCapacity.Increment.To.ConsumedPercent : (Optional) Define an 'absolute' percentage adjustment based on the current ConsumedCapacity
- ReadCapacity.Increment.To.ProvisionedPercent : (Optional) Define an 'absolute' percentage adjustment based on the current ProvisionedCapacity
- ReadCapacity.Increment.To.Units : (Optional) Define an 'absolute' unit adjustment
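
To make the schema concrete, the sketch below assembles these options into the default behaviour
described under 'Configuration'. It is illustrative only: the shipped
[DefaultProvisioner.json](./src/configuration/DefaultProvisioner.json) is the authoritative source,
and the 'Min' key and exact nesting here are assumptions based on the option names above:

~~~~
{
  "ReadCapacity": {
    "Min": 1,
    "Max": 10,
    "Increment": {
      "When": { "UtilisationIsAbovePercent": 90 },
      "By": { "Units": 3 },
      "To": { "ConsumedPercent": 110 }
    },
    "Decrement": {
      "When": {
        "UtilisationIsBelowPercent": 30,
        "AfterLastIncrementMinutes": 60,
        "AfterLastDecrementMinutes": 60,
        "UnitAdjustmentGreaterThan": 5
      },
      "To": { "ConsumedPercent": 100 }
    }
  }
}
~~~~

A 'WriteCapacity' section with the same schema would sit alongside 'ReadCapacity'; specifying both
'By' and 'To' for the increment matches the 'by 3 units or to 110% of the current consumed
capacity, whichever is greater' behaviour described above.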
Flexibility is great, but implementing all the logic required for a robust autoscaling
strategy isn't something everyone wants to do. Hence, the default 'Provisioner' builds upon the base
class in a layered approach. The layers are as follows, with a usage sketch after the list:

- [Provisioner.js](./src/Provisioner.js) : concrete implementation which provides very robust autoscaling logic that can be manipulated with a 'strategy' settings json object
- [ProvisionerConfigurableBase.js](./src/provisioning/ProvisionerConfigurableBase.js) : abstract base class which breaks out the 'getTableUpdateAsync' function into more manageable abstract methods
- [ProvisionerBase.js](./src/provisioning/ProvisionerBase.js) : the root abstract base class which defines the minimum contract
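
For example, a custom provisioner only has to honour the 'getTableUpdateAsync' contract. The sketch
below is hypothetical: the linked files exist, but the method signature, parameter shape and return
shape shown here are assumptions for illustration, not the project's exact API:

~~~~
// @flow
// Hypothetical sketch only: every detail other than the class files
// linked above and 'getTableUpdateAsync' is an assumption.
import ProvisionerBase from './provisioning/ProvisionerBase';

export default class FixedCapacityProvisioner extends ProvisionerBase {
  // Fulfil the minimum contract: return the update to apply to the
  // given table, here pinning throughput to a fixed 5/5.
  async getTableUpdateAsync(tableDescription: Object): Promise<?Object> {
    return {
      TableName: tableDescription.TableName,
      ProvisionedThroughput: {
        ReadCapacityUnits: 5,
        WriteCapacityUnits: 5,
      },
    };
  }
}
~~~~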
## Rate Limited Decrement
226
+
227
+
AWS only allows 4 table decrements in a calendar day. To account for this we have an included
228
+
an algorithm which segments the remaining time to midnight by the amount of decrements we have left.
229
+
This logic allows us to utilise each 4 decrements as efficiently as possible. The increments on the
230
+
other hand are unlimited, so the algorithm follows a unique 'sawtooth' profile, dropping the
231
+
provisioned capacity all the way down to the consumed throughput rather than gradually. Please see
232
+
[RateLimitedDecrement.js](./src/utils/RateLimitedDecrement.js) for full implementation.
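
The following sketch illustrates the segmentation idea; it is illustrative only, and the function
name, inputs and use of UTC midnight are assumptions rather than the code in the linked file:

~~~~
// @flow
// Illustrative only: with N decrements left in the calendar day, carve
// the time remaining until midnight into N equal segments and spend at
// most one decrement per segment.
function isDecrementAllowed(
  now: Date, lastDecrement: Date, decrementsLeft: number): boolean {
  if (decrementsLeft <= 0) return false;
  const midnight = new Date(now);
  midnight.setUTCHours(24, 0, 0, 0); // rolls over to the coming midnight
  const segmentMs = (midnight.getTime() - now.getTime()) / decrementsLeft;
  // Allow a decrement once a full segment has elapsed since the last one.
  return now.getTime() - lastDecrement.getTime() >= segmentMs;
}
~~~~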
## Capacity Calculation
235
+
236
+
As well as implementing the correct Provisioning logic it is also important to calculate the
237
+
ConsumedCapacity for the current point in time. We have provided a default algorithm in
238
+
[CapacityCalculator.js](./src/CapacityCalculator.js) which should be good enough for most purposes
239
+
but it could be swapped out with perhaps an improved version. The newer version could potentially
240
+
take a series of data points and plot a linear regression line through them for example.
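
As a sketch of that suggested improvement (an illustration, not the shipped algorithm), a least
squares fit over recent CloudWatch samples might look like this:

~~~~
// @flow
// Illustrative replacement idea for CapacityCalculator: fit a least
// squares line through (timestamp, consumed units) samples and
// extrapolate the consumed capacity to the current time.
type Sample = { timestampMs: number, units: number };

function estimateConsumedCapacity(points: Array<Sample>, nowMs: number): number {
  const n = points.length;
  if (n === 0) return 0;
  const meanX = points.reduce((sum, p) => sum + p.timestampMs, 0) / n;
  const meanY = points.reduce((sum, p) => sum + p.units, 0) / n;
  let numerator = 0;
  let denominator = 0;
  for (const p of points) {
    numerator += (p.timestampMs - meanX) * (p.units - meanY);
    denominator += (p.timestampMs - meanX) ** 2;
  }
  const slope = denominator === 0 ? 0 : numerator / denominator;
  const intercept = meanY - slope * meanX;
  return Math.max(0, slope * nowMs + intercept); // capacity cannot be negative
}
~~~~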
## Dependencies
This project has the following main dependencies (n.b. all third party dependencies are compiled
into a single javascript file before being zipped and uploaded to lambda):

- aws-sdk - Access to AWS services
- dotenv - Environment variable configuration useful for lambda