Commit 2493a57

Updates to README.md

1 parent bce0ae7 commit 2493a57

File tree: 1 file changed, +88 −18 lines

kinesisfirehose-logs-extension-demo/README.md (88 additions & 18 deletions)
@@ -1,40 +1,100 @@
# Centralize log collection with Kinesis Firehose using Lambda Extensions

## Introduction

This pattern walks through an approach to centralize log collection for Lambda functions with Kinesis Firehose, using an external extension. The provided code sample shows how to send logs directly to Kinesis Firehose without sending them to the AWS CloudWatch service.

> Note: This is a simple example extension to help you investigate an approach to centralizing log aggregation. This example code is not production ready. Use it at your own discretion after testing thoroughly.

This sample extension:

* Subscribes to receive `platform` and `function` logs.
* Runs with a main and a helper goroutine. The main goroutine registers with the `ExtensionAPI` and processes its `invoke` and `shutdown` events (see the `NextEvent` call). The helper goroutine:
  * starts a local HTTP server at the provided port (default 1234) that receives requests from the Logs API
  * puts the logs in a synchronized queue (Producer) to be processed by the main goroutine (Consumer)
* The main goroutine writes the received logs to AWS Kinesis Firehose, and they get stored in AWS S3.

## Amazon Kinesis Data Firehose
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk. Read more about it [here](https://aws.amazon.com/kinesis/data-firehose).

> Note: The code sample provided as part of this pattern delivers logs from Kinesis Firehose to Amazon S3.
## Lambda extensions
Lambda Extensions are a new way to easily integrate Lambda with your favorite monitoring, observability, security, and governance tools. Extensions allow tools to integrate deeply into the Lambda execution environment. There is no complex installation or configuration, and this simplified experience makes it easier for you to use your preferred tools across your application portfolio today. You can use extensions for use cases such as:

* capturing diagnostic information before, during, and after function invocation
* automatically instrumenting your code without needing code changes
* fetching configuration settings or secrets before the function invocation
* detecting and alerting on function activity through hardened security agents, which can run as separate processes from the function

Read more about extensions [here](https://aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-preview/).

> Note: The code sample provided as part of this pattern uses an **external** extension to listen to log events from the Lambda function.
## The need to centralize log collection
Having a centralized log collection mechanism using Kinesis Firehose provides the following benefits:
* It helps to collect logs from different sources in one place. Even though the sample provided sends logs from Lambda, log routers like `Fluentbit` and `Firelens` can be used to send logs directly to Kinesis Firehose from container orchestrators like `EKS` and `ECS`.
* It lets you define and standardize transformations before the logs get delivered to downstream systems like S3, Elasticsearch, Redshift, etc.
* It provides a secure storage area for log data before it gets written out to disk. In the event of a machine or application failure, we still have access to the logs emitted from the source machine or application.

## Architecture

### AWS Services

* AWS Lambda
* AWS Lambda extension
* AWS Kinesis Firehose
* AWS S3

### High level architecture

Here is a high-level view of all the components:

![architecture](images/centralized-logging.svg)

Once deployed, the overall flow looks like this:

* On start-up, the extension subscribes to receive logs for `Platform` and `Function` events.
* A local HTTP server started inside the external extension receives the logs.
* The extension also takes care of buffering the received log events in a synchronized queue and writing them to AWS Kinesis Firehose via direct `PUT` records.

> Note: The Firehose stream name is specified as an environment variable (`AWS_KINESIS_STREAM_NAME`).

* The Lambda function won't be able to send any log events to the AWS CloudWatch service due to the following explicit `DENY` policy:

```yaml
Sid: CloudWatchLogsDeny
Effect: Deny
Action:
  - logs:CreateLogGroup
  - logs:CreateLogStream
  - logs:PutLogEvents
Resource: arn:aws:logs:*:*:*
```

* The Kinesis Firehose stream configured as part of this sample delivers logs directly to `AWS S3` (gzip compressed).
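The producer/consumer buffering described above can be sketched as below. This is a simplified, assumption-laden illustration, not the extension's actual code: in the real extension the flush step would deliver the records to Kinesis Firehose (for example via the `PutRecordBatch` API, using the stream name from `AWS_KINESIS_STREAM_NAME`), whereas here the flush function is injected so the sketch stays self-contained:

```go
package main

import (
	"fmt"
	"sync"
)

// LogQueue is a minimal synchronized queue: the Logs API HTTP handler
// (producer) enqueues batches, the main goroutine (consumer) drains them.
type LogQueue struct {
	mu    sync.Mutex
	items []string
}

// Put enqueues one received log batch.
func (q *LogQueue) Put(batch string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, batch)
}

// Drain removes and returns everything currently queued.
func (q *LogQueue) Drain() []string {
	q.mu.Lock()
	defer q.mu.Unlock()
	out := q.items
	q.items = nil
	return out
}

// FlushFunc stands in for the real delivery step; the actual extension
// would write the records to Kinesis Firehose instead.
type FlushFunc func(records []string) error

// consume drains the queue and hands the records to flush.
func consume(q *LogQueue, flush FlushFunc) error {
	records := q.Drain()
	if len(records) == 0 {
		return nil
	}
	return flush(records)
}

func main() {
	q := &LogQueue{}
	q.Put(`{"type":"function","record":"hello"}`)
	q.Put(`{"type":"platform.report","record":"REPORT ..."}`)

	sent := 0
	err := consume(q, func(records []string) error {
		sent = len(records)
		return nil
	})
	// prints true 2
	fmt.Println(err == nil, sent)
}
```

Injecting the flush function keeps the queueing logic testable without AWS credentials; swapping in a real Firehose client changes only the `FlushFunc` implementation.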
## Build and Deploy

The AWS SAM template available in the root directory can be used to deploy the sample Lambda function together with this extension.

### Prerequisites

* The AWS SAM CLI needs to be installed; follow this [link](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html) to learn how to install it.

### Build

Check out the code by running the following commands:

```bash
mkdir kinesisfirehose-logs-extension-demo && cd kinesisfirehose-logs-extension-demo
git clone https://github.com/hariohmprasath/load-testing-serverless-apps.git .
```

Run the following command from the root directory:

```bash
@@ -66,15 +126,25 @@ Commands you can use next
[*] Deploy: sam deploy --guided
```

### Deployment

Run the following command to deploy the sample Lambda function with the extension:

```bash
sam deploy --guided
```

The following parameters can be customized as part of the deployment:
| Parameter | Description | Default |
| ------------- | ------------- | ---- |
| FirehoseStreamName | Firehose stream name | lambda-logs-direct-s3-no-cloudwatch |
| FirehoseS3Prefix | The S3 key prefix for Kinesis Firehose | lambda-logs-direct-s3-no-cloudwatch |
| FirehoseCompressionFormat | Compression format used by Kinesis Firehose; allowed values: `UNCOMPRESSED`, `GZIP`, `Snappy` | GZIP |
| FirehoseBufferingInterval | How long (in seconds) Firehose will wait before writing a new batch into S3 | 60 |
| FirehoseBufferingSize | Maximum batch size in MB | 10 |
> Note: We can either customize the parameters or leave them at the defaults to proceed with the deployment.

**Output**

@@ -89,7 +159,7 @@ Value arn:aws:lambda:us-east-1:xxx:layer:kinesisfirehose-logs-exte
Key                 BucketName
Description         The bucket where data will be stored
Value               sam-app-deliverybucket-xxxx

Key                 KinesisFireHoseIamRole
Description         Kinesis firehose IAM role
@@ -125,15 +195,15 @@ The function should return ```"StatusCode": 200```, with the below output
}
```

A few minutes after the successful invocation of the Lambda function, we should start seeing the log messages from the example extension written to an S3 bucket.

* Log in to the AWS console:
* Navigate to the S3 bucket mentioned under the parameter `BucketName` in the SAM output.
* We can see the logs successfully written to the S3 bucket, partitioned by date, in `GZIP` format.
![s3](images/S3.png)

* Navigate to the `"/aws/lambda/${functionname}"` log group inside the AWS CloudWatch service.
* We shouldn't see any logs created under this log group, as we have denied the Lambda function access to write any logs.
![cloudwatch](images/CloudWatch.png)

## Cleanup
@@ -152,4 +222,4 @@ aws cloudformation delete-stack --stack-name sam-app

## Conclusion

This extension provides an approach to streamline and centralize log collection using Kinesis Firehose.
