Lambda cold start Practices #1

60 changes: 60 additions & 0 deletions lambda-optimizations/README.md
<h1>AWS Lambda Performance Optimizations</h1>
<p>
Among the biggest issues we – a 100% serverless company – have faced are AWS Lambda cold starts:
the extra time it takes for a function to start executing when it hasn’t recently been invoked.
</p>
<p>
<strong>"Keep Lambdas warm"</strong> is one of the most important techniques for optimizing Lambda performance.
</p>

<article>
<strong>Cold starts can be a killer for Lambda performance</strong>, especially if you’re developing a customer-facing application that needs to operate in real time.
They happen when your Lambda is not already running: AWS must deploy your code and spin up a new container before the request can be served.
This can make a request take much longer to execute, because your Lambda only starts running once the container is ready.
</article>
<hr/>
<img src="https://cdn-media-1.freecodecamp.org/images/1*HsUccdkDffywiUjM7AYdFw.png"/>
<hr/>
<article>
A serverless cold start happens the first time your code is executed by your cloud provider: the code has to be downloaded,
containerised, booted, and primed before it can run. This can add significant overhead, sometimes up to 1.5s of latency. <br/><br/>
<strong>But good news:</strong> cold starts are expected to be outliers, affecting only around 5% of executions.
So, while they don’t happen all the time, they are important to think about when designing your application.
</article>
<h2>Get Lambdas out of VPC</h2>
<article>
Unless it’s really necessary (for instance, to access resources that live inside a VPC), try to run your Lambdas outside of a VPC,
as attaching the ENI interfaces can have a huge impact on Lambda cold start times. <br/>
In experiments run by Alessandro Morandi, a Lambda with 3GB of memory took up to 30x longer to invoke from a cold start
when inside a VPC than outside one.
<br/><br/>
Having a Lambda function live inside a VPC introduces extra latency:
an Elastic Network Interface has to be created, and the Lambda then waits to be assigned an IP address on it.
Also be careful: each Lambda function consumes an IP address from the subnet, and you don’t want to run out.
<hr/>
<img src="https://cdn-media-1.freecodecamp.org/images/1*FCpFITtI7oxassyWOQdrKw.png" />
<hr/>
</article>
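<p>To put that warning in numbers: AWS reserves five addresses in every subnet, and in the ENI-per-container model each concurrent VPC-attached execution can consume one. A rough sizing sketch (the helper name is illustrative):</p>

```javascript
// Usable addresses in a subnet: 2^(32 - prefix) minus the 5 AWS reserves.
const usableIps = (prefixLength) => Math.pow(2, 32 - prefixLength) - 5;

console.log(usableIps(24)); // 251 usable addresses in a /24
console.log(usableIps(28)); // 11: easily exhausted by a busy function
```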
<h2>Ways to improve AWS Lambda cold start performance</h2>
<article>
While AWS Lambda cold start times are a real issue, the good news is that there are useful tools and approaches
that can help mitigate the problem, either by avoiding cold starts altogether or by reducing their duration.<br/>
<h3>Keep Lambdas warm</h3>
These tools invoke your Lambdas at a given interval so that the containers aren’t destroyed:
<ul>
<li><a href="https://www.npmjs.com/package/lambda-warmer">Lambda Warmer</a></li>
<li><a href="https://www.npmjs.com/package/serverless-plugin-warmup">Serverless WarmUp Plugin</a></li>
</ul>
<h3>Reduce the number of packages</h3>
We’ve seen that the biggest impact on AWS Lambda cold start times comes not from the size of the package but from the initialization time when the package is actually loaded for the first time.
Tools that can help you reduce the number of packages:
<ul>
<li><a href="http://browserify.org">Browserify</a></li>
<li><a href="https://www.npmjs.com/package/serverless-plugin-optimize">Serverless Optimize Plugin</a></li>
</ul>
<h3>Get Lambdas out of VPC</h3>
You should only put your Lambda functions inside a VPC when you absolutely need access to resources that can’t be exposed to the outside world.
Otherwise, you will pay for it in start-up times, and that matters:
as Yan Cui highlighted in his article ‘You’re thinking about cold starts wrong’, cold starts can happen at any time, and especially during peak service usage.
<img src="https://cdn-media-1.freecodecamp.org/images/1*4RwbY1CiZC2jzsD7jBtIlQ.png" />
</article>
6 changes: 6 additions & 0 deletions lambda-optimizations/examples/lambda-warmer/.npmignore
# package directories
node_modules
jspm_packages

# Serverless directories
.serverless
18 changes: 18 additions & 0 deletions lambda-optimizations/examples/lambda-warmer/handler.js
const warmer = require('lambda-warmer');

module.exports.hello = async (event) => {
  // Short-circuit scheduled warm-up pings so the real handler logic doesn't run
  if (await warmer(event)) return 'warmed';

  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};
13 changes: 13 additions & 0 deletions lambda-optimizations/examples/lambda-warmer/package-lock.json

14 changes: 14 additions & 0 deletions lambda-optimizations/examples/lambda-warmer/package.json
{
  "name": "lambda-warmer-example",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "lambda-warmer": "^1.2.1"
  }
}
28 changes: 28 additions & 0 deletions lambda-optimizations/examples/lambda-warmer/serverless.yml
service: lambda-warmer
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221

  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "lambda:InvokeFunction"
      Resource: "arn:aws:lambda:us-east-1:*"

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /hello
          method: get
      - schedule:
          name: warmer-schedule-name
          rate: rate(5 minutes)
          enabled: true
          input:
            warmer: true
            concurrency: 1
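The `schedule` event above delivers `{ warmer: true, concurrency: 1 }` every five minutes, which is the flag the lambda-warmer check in handler.js short-circuits on. A stubbed local sketch of that behaviour (this stand-in only mimics the flag check, not the real package):

```javascript
// Stub of the warmer check: the real lambda-warmer package inspects the
// event for the `warmer: true` flag and returns true for warm-up pings.
const isWarmerPing = (event) => Boolean(event && event.warmer === true);

const hello = async (event) => {
  if (isWarmerPing(event)) return 'warmed'; // scheduled ping: exit early
  return { statusCode: 200, body: JSON.stringify({ input: event }) };
};
```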
'use strict';

/** Generated by Serverless WarmUp Plugin at 2021-03-22T10:32:04.308Z */

const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
const lambda = new AWS.Lambda({
  apiVersion: '2015-03-31',
  region: 'us-east-1',
  httpOptions: {
    connectTimeout: 1000, // 1 second
  },
});
const functions = [
  {
    "name": "warmup-plugin-dev-hello",
    "config": {
      "enabled": "dev",
      "payload": "{\"source\":\"serverless-plugin-warmup\"}",
      "concurrency": 1
    }
  }
];

function getConcurrency(func, envVars) {
  // Function-specific override, e.g. WARMUP_CONCURRENCY_WARMUP_PLUGIN_DEV_HELLO
  const functionConcurrency = envVars[`WARMUP_CONCURRENCY_${func.name.toUpperCase().replace(/-/g, '_')}`];

  if (functionConcurrency) {
    const concurrency = parseInt(functionConcurrency, 10);
    console.log(`Warming up function: ${func.name} with concurrency: ${concurrency} (from function-specific environment variable)`);
    return concurrency;
  }

  if (envVars.WARMUP_CONCURRENCY) {
    const concurrency = parseInt(envVars.WARMUP_CONCURRENCY, 10);
    console.log(`Warming up function: ${func.name} with concurrency: ${concurrency} (from global environment variable)`);
    return concurrency;
  }

  const concurrency = parseInt(func.config.concurrency, 10);
  console.log(`Warming up function: ${func.name} with concurrency: ${concurrency}`);
  return concurrency;
}

module.exports.warmUp = async (event, context) => {
  console.log('Warm Up Start');

  const invokes = await Promise.all(functions.map(async (func) => {
    const concurrency = getConcurrency(func, process.env);

    const clientContext = func.config.clientContext !== undefined
      ? func.config.clientContext
      : func.config.payload;

    const params = {
      ClientContext: clientContext
        ? Buffer.from(`{"custom":${clientContext}}`).toString('base64')
        : undefined,
      FunctionName: func.name,
      InvocationType: 'RequestResponse',
      LogType: 'None',
      Qualifier: func.config.alias || process.env.SERVERLESS_ALIAS,
      Payload: func.config.payload
    };

    try {
      await Promise.all(Array(concurrency).fill(0).map(async () => await lambda.invoke(params).promise()));
      console.log(`Warm Up Invoke Success: ${func.name}`);
      return true;
    } catch (e) {
      console.log(`Warm Up Invoke Error: ${func.name}`, e);
      return false;
    }
  }));

  console.log(`Warm Up Finished with ${invokes.filter(r => !r).length} invoke errors`);
};