Use DynamoDB autoscaling with Lambda!
It's a simple Lambda version of Dynamic DynamoDB.
- Configuration is now stored in a DynamoDB table
- Improves accuracy by computing consumed capacity from the Sum statistic instead of the Average statistic (see the explanation below)
- CloudFormation stack:
  - Lambda deployment
  - Rule to execute the function every 5 minutes
  - SNS topic creation to schedule tables on alarm
- Autoscales DynamoDB's provisioned read/write capacity
- Runs as a Lambda function on a scheduled event
- SNS topic creation to schedule tables on alarm (optional)
- `index.js` - main handler & flow control, using async
- `tasks.js` - detailed task implementations, using the AWS SDK
- `config.js` - capacity scaling rule configuration
- Clone the repository (or download the zip): `$ git clone https://github.com/RedbirdHQ/dynamic-dynamodb-lambda.git`
- Download the npm modules: `$ npm install`
- Create a "configuration" table in DynamoDB to store increase/decrease information.
  - The options are almost the same as Dynamic DynamoDB's options.
// Table JSON item sample
{
"app": "dynamicDynamoDB",
"conf": {
"region" : "us-east-1", // region
"timeframeMin" : 5, // evaluation timeframe (minute)
"tables" :
[
{
"tableName" : "testTable", // table name
"reads_upper_threshold" : 90, // read increase threshold (%)
"reads_lower_threshold" : 30, // read decrease threshold (%)
"increase_reads_with" : 90, // read increase amount (%)
"decrease_reads_with" : 30, // read decrease amount (%)
"base_reads" : 5, // minimum read capacity
"writes_upper_threshold" : 90, // write increase threshold (%)
"writes_lower_threshold" : 40, // write decrease threshold (%)
"increase_writes_with" : 90, // write increase amount (%)
"decrease_writes_with" : 30, // write decrease amount (%)
"base_writes" : 5 // minimum write capacity
}
,
{
"tableName" : "testTable2",
"reads_upper_threshold" : 90,
"reads_lower_threshold" : 30,
"increase_reads_with" : 0, // 0 disables read scale-up
"decrease_reads_with" : 0, // 0 disables read scale-down
"base_reads" : 3,
"writes_upper_threshold" : 90,
"writes_lower_threshold" : 40,
"increase_writes_with" : 0, // 0 disables write scale-up
"decrease_writes_with" : 0, // 0 disables write scale-down
"base_writes" : 3
}
]
}
}
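To illustrate how these options drive a scaling decision, here is a minimal sketch of the read-capacity rule (illustrative only, not the project's actual code; the function name and rounding behavior are assumptions):

```javascript
// Illustrative sketch of the per-table scaling rule (not the project's actual code).
// consumed: average consumed read capacity over the timeframe
// provisioned: currently provisioned read capacity
// conf: one entry of the "tables" array shown above
function nextReadCapacity(consumed, provisioned, conf) {
  var usagePct = (consumed / provisioned) * 100;

  // Above the upper threshold: grow by increase_reads_with percent.
  // A value of 0 disables scale-up entirely.
  if (usagePct > conf.reads_upper_threshold && conf.increase_reads_with > 0) {
    return Math.ceil(provisioned * (1 + conf.increase_reads_with / 100));
  }

  // Below the lower threshold: shrink by decrease_reads_with percent,
  // but never below base_reads. A value of 0 disables scale-down.
  if (usagePct < conf.reads_lower_threshold && conf.decrease_reads_with > 0) {
    return Math.max(conf.base_reads,
                    Math.floor(provisioned * (1 - conf.decrease_reads_with / 100)));
  }

  return provisioned; // within the band: leave capacity unchanged
}
```

For example, under the `testTable` settings above, consuming 95 of 100 provisioned read units (95% > the 90% threshold) would grow capacity by 90%, to 190 units.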
- Zip your Lambda and upload it to Amazon S3
- Check the CloudFormation stack, adjust your settings, and execute it
- Add an SNS alert on your table to call the SNS topic on alarm

OR

- Deploy the Lambda function with your favorite method (just zip it, or use a tool like node-lambda)
- Check the Lambda function's configuration
  - Memory: 128MB, timeout: 60sec
- Set a CloudWatch Events Rule to run your Lambda function (refer to the AWS documentation for details)
- Set & attach a role to the Lambda function - example role policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Action": [
"dynamodb:DescribeTable",
"dynamodb:UpdateTable",
"cloudwatch:GetMetricStatistics"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "",
"Resource": "*",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Effect": "Allow"
}
]
}

- Make it more stable
- Add an SNS notification when scaling succeeds/fails
- Update index capacity with the same scheme
Computing an average from the Sum statistic over the timeframe, instead of taking AWS's Average directly, is more accurate. Indeed, as the Amazon CloudWatch Developer Guide explains, "the Average statistic for the ThrottledRequests metric is simply 1". So if the Average statistic were used and there were ThrottledRequests, the average returned by AWS would be 1, and no capacity upscale would occur. To avoid this problem, we simply use the Sum statistic divided by the timeframe ("For the ThrottledRequests metric, use the listed Valid Statistics (either Sum or SampleCount) to see the trend of ThrottledRequests over a specified time period.").
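Concretely, the per-second average can be derived from the Sum statistic like this (a sketch; the function name is an assumption, and it assumes `timeframeMin` is given in minutes, matching the configuration above):

```javascript
// Derive average consumed capacity (units/second) from the CloudWatch Sum
// statistic over the evaluation timeframe, instead of trusting the Average
// statistic (which is always 1 for ThrottledRequests).
// sum: value of the Sum statistic, e.g. for ConsumedReadCapacityUnits
// timeframeMin: evaluation timeframe in minutes (e.g. conf.timeframeMin = 5)
function averageFromSum(sum, timeframeMin) {
  return sum / (timeframeMin * 60); // divide by the number of seconds in the timeframe
}
```

For example, 1500 consumed capacity units summed over a 5-minute timeframe corresponds to an average of 1500 / 300 = 5 units per second.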