This repository was archived by the owner on Dec 1, 2022. It is now read-only.

Commit 4e9734b

committed: Updated readme and simplified location of configuration settings for #17

1 parent 4a827b9 commit 4e9734b

File tree: 7 files changed, +212 -113 lines changed

README.md

Lines changed: 183 additions & 79 deletions
@@ -11,6 +11,7 @@

+ AWS credential configuration via 'dotenv'
+ Optimised lambda package via 'webpack'
+ ES7 code
+ 100% [Flow](https://flowtype.org/) static type checking coverage

## Disclaimer

@@ -24,46 +25,51 @@ connection with, the use of this code.

## Getting started

Note: dynamodb-lambda-autoscale uses [Flow](https://flowtype.org/) extensively for static type
checking; we highly recommend you use [Nuclide](https://nuclide.io/) when making modifications to the code /
configuration. Please see the respective websites for the advantages of each.

1. Build and package the code
   1. Fork the repo
   2. Clone your fork
   3. Create a new file in the root folder called 'config.env.production'
   4. Put your AWS credentials into the file in the following format, only if you want to run a local test (not needed for lambda)

      ~~~~
      AWS_ACCESS_KEY_ID="###################"
      AWS_SECRET_ACCESS_KEY="###############"
      ~~~~

   5. Update [Region.json](./src/configuration/Region.json) to match the region of your DynamoDB instance
   6. Run 'npm install'
   7. Run 'npm run build'
   8. Verify this has created a 'dist.zip' file
   9. Optionally, run a local test by running 'npm run start'

## Running on AWS Lambda

1. Follow the steps in 'Running locally'
2. Create an AWS Policy and Role
   1. Create a policy called 'DynamoDBLambdaAutoscale'
   2. Use the following content to give access to DynamoDB, CloudWatch and Lambda logging

      ~~~~
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Action": [
              "dynamodb:ListTables",
              "dynamodb:DescribeTable",
              "dynamodb:UpdateTable",
              "cloudwatch:GetMetricStatistics",
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "*"
          }
        ]
      }
      ~~~~

   3. Create a role called 'DynamoDBLambdaAutoscale'
   4. Attach the newly created policy to the role
3. Create an AWS Lambda function
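For reference, the 'dotenv' style loading of the credentials file above amounts to roughly the following. This helper is our own sketch for illustration only; the project itself simply uses the dotenv package.

```javascript
// Illustrative sketch of what 'dotenv' does with config.env.production:
// parse KEY="value" lines into an object that can seed process.env.
// This parser is NOT the dotenv package or the project's code.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split('\n')) {
    const match = line.match(/^\s*([A-Z0-9_]+)\s*=\s*"?([^"]*)"?\s*$/);
    if (match) {
      vars[match[1]] = match[2];
    }
  }
  return vars;
}

const sample = [
  'AWS_ACCESS_KEY_ID="###################"',
  'AWS_SECRET_ACCESS_KEY="###############"',
].join('\n');

const credentials = parseEnv(sample);
```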
@@ -75,70 +81,168 @@

   6. Set the Role to 'DynamoDBLambdaAutoscale'
   7. Set the Memory to the lowest value initially, but test different values at a later date to see how they affect performance
   8. Set the Timeout to approximately 5 seconds (higher or lower depending on the number of tables you have and the selected memory setting)
   9. Once the function is created, attach a 'scheduled event' event source and make it run every minute. Event Sources > Add Event Source > Event Type = CloudWatch Events - Schedule. Set the name to 'DynamoDBLambdaAutoscale' and the schedule expression to 'rate(1 minute)'

## Configuration

The default setup in [Provisioner.js](./src/Provisioner.js) allows for a quick, no touch setup.
A breakdown of the configuration behaviour is as follows:
- AWS region is set to 'us-east-1' via the [Region.json](./src/configuration/Region.json) configuration
- Autoscales all tables and indexes
- Autoscaling 'Strategy' settings are defined in [DefaultProvisioner.json](./src/configuration/DefaultProvisioner.json) and are as follows
  - Separate 'Read' and 'Write' capacity adjustment strategies
  - Separate, asymmetric 'Increment' and 'Decrement' capacity adjustment strategies
  - Read/Write provisioned capacity increased
    - when capacity utilisation > 90%
    - by 3 units or to 110% of the current consumed capacity, whichever is the greater
    - with hard min/max limits of 1 and 10 respectively
  - Read/Write provisioned capacity decreased
    - when capacity utilisation < 30% AND
    - when at least 60 minutes have passed since the last increment AND
    - when at least 60 minutes have passed since the last decrement AND
    - when the adjustment will be at least 5 units AND
    - when we are allowed to utilise 1 of our 4 AWS enforced decrements
    - to the consumed throughput value
    - with hard min/max limits of 1 and 10 respectively
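The increment rules in the breakdown above can be restated as a small function. The function name and clamping below are our own sketch of the described behaviour, not code from the project. For example, at 5 provisioned units with 4.8 consumed (96% utilisation), capacity rises to max(5 + 3, ceil(4.8 × 1.10)) = 8 units.

```javascript
// Sketch of the default increment rule described above: trigger when
// utilisation > 90%, raise by 3 units or to 110% of consumed capacity
// (whichever is greater), clamped to the hard [1, 10] limits.
function incrementedCapacity(provisioned, consumed) {
  const utilisationPercent = (consumed / provisioned) * 100;
  if (utilisationPercent <= 90) {
    return provisioned; // below the threshold, leave capacity unchanged
  }
  const byUnits = provisioned + 3;
  const toConsumedPercent = Math.ceil(consumed * 1.10);
  const target = Math.max(byUnits, toConsumedPercent);
  return Math.min(Math.max(target, 1), 10); // hard min/max limits
}
```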

## Strategy Settings

The strategy settings described above use a simple schema which applies to both Read/Write and to
both the Increment/Decrement. Using the options below, many different strategies can be constructed:
- ReadCapacity.Min : (Optional) Define a minimum allowed capacity, otherwise 1
- ReadCapacity.Max : (Optional) Define a maximum allowed capacity, otherwise unlimited
- ReadCapacity.Increment : (Optional) Define an increment strategy
- ReadCapacity.Increment.When : (Required) Define when capacity should be incremented
- ReadCapacity.Increment.When.UtilisationIsAbovePercent : (Optional) Define a percentage utilisation upper threshold at which capacity is subject to recalculation
- ReadCapacity.Increment.When.UtilisationIsBelowPercent : (Optional) Define a percentage utilisation lower threshold at which capacity is subject to recalculation; possible, but nonsensical, for increments
- ReadCapacity.Increment.When.AfterLastIncrementMinutes : (Optional) Define a grace period based on the previous increment in which capacity adjustments should not occur
- ReadCapacity.Increment.When.AfterLastDecrementMinutes : (Optional) Define a grace period based on the previous decrement in which capacity adjustments should not occur
- ReadCapacity.Increment.When.UnitAdjustmentGreaterThan : (Optional) Define a minimum unit adjustment so that only capacity adjustments of a certain size are allowed
- ReadCapacity.Increment.By : (Optional) Define a 'relative' value to change the capacity by
- ReadCapacity.Increment.By.ConsumedPercent : (Optional) Define a 'relative' percentage adjustment based on the current ConsumedCapacity
- ReadCapacity.Increment.By.ProvisionedPercent : (Optional) Define a 'relative' percentage adjustment based on the current ProvisionedCapacity
- ReadCapacity.Increment.By.Units : (Optional) Define a 'relative' unit adjustment
- ReadCapacity.Increment.To : (Optional) Define an 'absolute' value to change the capacity to
- ReadCapacity.Increment.To.ConsumedPercent : (Optional) Define an 'absolute' percentage adjustment based on the current ConsumedCapacity
- ReadCapacity.Increment.To.ProvisionedPercent : (Optional) Define an 'absolute' percentage adjustment based on the current ProvisionedCapacity
- ReadCapacity.Increment.To.Units : (Optional) Define an 'absolute' unit adjustment

A sample of the strategy settings json is...

```json
{
  "ReadCapacity": {
    "Min": 1,
    "Max": 10,
    "Increment": {
      "When": {
        "UtilisationIsAbovePercent": 90
      },
      "By": {
        "Units": 3
      },
      "To": {
        "ConsumedPercent": 110
      }
    },
    "Decrement": {
      "When": {
        "UtilisationIsBelowPercent": 30,
        "AfterLastIncrementMinutes": 60,
        "AfterLastDecrementMinutes": 60,
        "UnitAdjustmentGreaterThan": 5
      },
      "To": {
        "ConsumedPercent": 100
      }
    }
  },
  "WriteCapacity": {
    "Min": 1,
    "Max": 10,
    "Increment": {
      "When": {
        "UtilisationIsAbovePercent": 90
      },
      "By": {
        "Units": 3
      },
      "To": {
        "ConsumedPercent": 110
      }
    },
    "Decrement": {
      "When": {
        "UtilisationIsBelowPercent": 30,
        "AfterLastIncrementMinutes": 60,
        "AfterLastDecrementMinutes": 60,
        "UnitAdjustmentGreaterThan": 5
      },
      "To": {
        "ConsumedPercent": 100
      }
    }
  }
}
```
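To show how such a settings object drives a decision, here is a sketch of evaluating the 'Decrement.When' clauses. Only the JSON shape comes from the sample above; the evaluation function and state shape are our own illustration, and the real logic lives in Provisioner.js.

```javascript
// Evaluate the 'Decrement.When' clauses from the sample settings above.
// Illustrative only: every condition must hold before a decrement is allowed.
const decrementWhen = {
  UtilisationIsBelowPercent: 30,
  AfterLastIncrementMinutes: 60,
  AfterLastDecrementMinutes: 60,
  UnitAdjustmentGreaterThan: 5,
};

function shouldDecrement(when, state) {
  const utilisationPercent = (state.consumed / state.provisioned) * 100;
  // The decrement targets the consumed value, so the adjustment size is
  // the gap between provisioned and consumed capacity.
  const unitAdjustment = state.provisioned - state.consumed;
  return utilisationPercent < when.UtilisationIsBelowPercent
    && state.minutesSinceIncrement >= when.AfterLastIncrementMinutes
    && state.minutesSinceDecrement >= when.AfterLastDecrementMinutes
    && unitAdjustment >= when.UnitAdjustmentGreaterThan;
}
```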

## Advanced Configuration

This project takes a 'React' style, code first approach over the declarative configuration traditionally
used by other autoscaling community projects. Rather than being limited to a structured
configuration file, or even the 'strategy' settings above, you have the option to extend the [ProvisionerBase.js](./src/provisioning/ProvisionerBase.js)
abstract base class yourself and programmatically implement any desired logic.

The following three functions are all that is required to complete the provisioning functionality.
As per the 'React' style, only actual updates to the ProvisionedCapacity will be sent to AWS.

```javascript
getDynamoDBRegion(): string {
  // Return the AWS region as a string
}

async getTableNamesAsync(): Promise<string[]> {
  // Return the table names to apply autoscaling to as a string array promise
}

async getTableUpdateAsync(
  tableDescription: TableDescription,
  tableConsumedCapacityDescription: TableConsumedCapacityDescription):
  Promise<?UpdateTableRequest> {
  // Given an AWS DynamoDB TableDescription and AWS CloudWatch ConsumedCapacity metrics,
  // return an AWS DynamoDB UpdateTable request
}
```
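To make the contract concrete, a hypothetical minimal provisioner could look like the following. The Flow type annotations are dropped so it runs as plain JavaScript, and the class name, region and table names are invented for illustration; this is not code from the project.

```javascript
// Hypothetical provisioner implementing the three-function contract above.
// All concrete values here are invented placeholders.
class FixedTableProvisioner {
  getDynamoDBRegion() {
    return 'us-east-1'; // the region as a plain string
  }

  async getTableNamesAsync() {
    // A real implementation would call dynamodb:ListTables;
    // here we autoscale a fixed, invented set of tables.
    return ['Users', 'Orders'];
  }

  async getTableUpdateAsync(tableDescription, tableConsumedCapacityDescription) {
    // Returning null means 'no change'; only actual updates go to AWS.
    return null;
  }
}
```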

[DescribeTable.ResponseSyntax](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html#API_DescribeTable_ResponseSyntax)
[UpdateTable.RequestSyntax](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html#API_UpdateTable_RequestSyntax)

Flexibility is great, but implementing all the logic required for a robust autoscaling
strategy isn't something everyone wants to do. Hence, the default 'Provisioner' builds upon the base
class in a layered approach. The layers are as follows:
- [Provisioner.js](./src/Provisioner.js) - concrete implementation which provides very robust autoscaling logic that can be manipulated with a 'strategy' settings json object
- [ProvisionerConfigurableBase.js](./src/provisioning/ProvisionerConfigurableBase.js) - abstract base class which breaks the 'getTableUpdateAsync' function out into more manageable abstract methods
- [ProvisionerBase.js](./src/provisioning/ProvisionerBase.js) - the root abstract base class which defines the minimum contract

## Rate Limited Decrement

AWS only allows 4 table decrements in a calendar day. To account for this we have included
an algorithm which segments the remaining time to midnight by the number of decrements we have left.
This logic allows us to utilise each of the 4 decrements as efficiently as possible. The increments, on the
other hand, are unlimited, so the algorithm follows a unique 'sawtooth' profile, dropping the
provisioned capacity all the way down to the consumed throughput rather than gradually. Please see
[RateLimitedDecrement.js](./src/utils/RateLimitedDecrement.js) for the full implementation.
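The segmentation idea above can be sketched as follows. This is our own simplified restatement of the described scheduling, not the code from RateLimitedDecrement.js: with N decrements left, divide the time remaining until midnight into N segments and only allow the next decrement once a full segment has elapsed since the previous one.

```javascript
// Simplified sketch of rate limiting decrements against the AWS daily cap.
// With 600 minutes to midnight and 4 decrements left, each segment is
// 150 minutes, so a decrement is allowed at most every 150 minutes.
function isDecrementAllowed(minutesToMidnight, decrementsLeft, minutesSinceLastDecrement) {
  if (decrementsLeft <= 0) {
    return false; // the AWS daily decrement allowance is exhausted
  }
  const segmentMinutes = minutesToMidnight / decrementsLeft;
  return minutesSinceLastDecrement >= segmentMinutes;
}
```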

## Capacity Calculation

As well as implementing the correct provisioning logic, it is also important to calculate the
ConsumedCapacity for the current point in time. We have provided a default algorithm in
[CapacityCalculator.js](./src/CapacityCalculator.js) which should be good enough for most purposes,
but it could be swapped out with an improved version. A newer version could, for example,
take a series of data points and plot a linear regression line through them.
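The regression idea suggested above can be sketched with an ordinary least-squares fit. This is our own illustration of the suggestion, not the algorithm actually used in CapacityCalculator.js.

```javascript
// Fit a least-squares line through recent consumed-capacity data points
// and project it to the current time 'now'.
function projectConsumedCapacity(points, now) {
  // points: [{ time, value }], e.g. derived from CloudWatch metrics
  const n = points.length;
  let sumT = 0;
  let sumV = 0;
  let sumTT = 0;
  let sumTV = 0;
  for (const p of points) {
    sumT += p.time;
    sumV += p.value;
    sumTT += p.time * p.time;
    sumTV += p.time * p.value;
  }
  const slope = (n * sumTV - sumT * sumV) / (n * sumTT - sumT * sumT);
  const intercept = (sumV - slope * sumT) / n;
  return slope * now + intercept;
}

// e.g. consumption growing by 1 unit per minute, projected one minute ahead
const projected = projectConsumedCapacity(
  [{ time: 0, value: 1 }, { time: 1, value: 2 }, { time: 2, value: 3 }], 3);
```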

## Dependencies

This project has the following main dependencies (n.b. all third party dependencies are compiled
into a single javascript file before being zipped and uploaded to lambda):
+ aws-sdk - Access to AWS services
+ dotenv - Environment variable configuration useful for lambda
+ measured - Statistics gathering

package.json

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 {
   "name": "dynamodb-lambda-autoscale",
-  "version": "0.2.0",
+  "version": "0.3.0",
   "description": "Autoscale DynamoDB provisioned capacity using AWS Lambda",
   "contributors": [
     "Thomas Mitchell <[email protected]>"

src/CapacityCalculator.js

Lines changed: 2 additions & 1 deletion

@@ -1,5 +1,6 @@
 /* @flow */
 import { invariant } from './Global';
+import { Region } from './configuration/Region';
 import CapacityCalculatorBase from './capacity/CapacityCalculatorBase';
 import type { GetMetricStatisticsResponse } from 'aws-sdk-promise';
 import type { StatisticSettings } from './flow/FlowTypes';
@@ -8,7 +9,7 @@ export default class CapacityCalculator extends CapacityCalculatorBase {
 
   // Get the region
   getCloudWatchRegion() {
-    return 'us-east-1';
+    return Region;
   }
 
   getStatisticSettings(): StatisticSettings {

0 commit comments