## Introduction
This example shows how to centralize log collection for a Lambda function using Amazon Kinesis Data Firehose. The provided code sample uses Lambda extensions to receive logs from Lambda and send them directly to Kinesis Data Firehose, without sending them to Amazon CloudWatch.
> Note: This is a simple example extension to help you investigate an approach to centralized log aggregation. This example code is not production ready. Use it at your own discretion after testing it thoroughly.
This sample extension:
* Runs with a main goroutine and a helper goroutine: the main goroutine registers with the `ExtensionAPI` and processes its `invoke` and `shutdown` events. The helper goroutine:
  * starts a local HTTP server at the provided port (default 1234; the port can be overridden with the Lambda environment variable `HTTP_LOGS_LISTENER_PORT`) that receives requests from the Logs API with the `NextEvent` method call
  * puts the logs in a synchronized queue (Producer) to be processed by the main goroutine (Consumer)
* The main goroutine writes the received logs to Amazon Kinesis Data Firehose, which delivers them to Amazon S3
## Amazon Kinesis Data Firehose
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk. Read more about it [here](https://aws.amazon.com/kinesis/data-firehose).
> Note: The code sample provided as part of this pattern delivers logs from Kinesis Data Firehose to Amazon S3.
## Lambda extensions
## Need to centralize log collection
Having a centralized log collection mechanism using Kinesis Data Firehose provides the following benefits:
* Helps to collect logs from different sources in one place. Even though the sample provided sends logs from Lambda, log routers like `Fluentbit` and `Firelens` can send logs directly to Kinesis Data Firehose from container orchestrators like [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks) and [Amazon Elastic Container Service (ECS)](https://aws.amazon.com/ecs).
* Lets you define and standardize transformations before the logs get delivered to downstream systems like S3, Elasticsearch, Redshift, etc.
* Provides a secure storage area for log data before it gets written out to disk. In the event of a machine or application failure, we still have access to the logs emitted from the source machine or application.
## Architecture
### AWS Services
* AWS Lambda
* AWS Lambda extension
* Amazon Kinesis Data Firehose
* Amazon S3
Once deployed the overall flow looks like below:
* A local HTTP server is started inside the external extension, which receives the logs.
* The extension also takes care of buffering the received log events in a synchronized queue and writing them to Amazon Kinesis Data Firehose via direct `PUT` records.
> Note: The Kinesis Data Firehose stream name is specified as an environment variable (`AWS_KINESIS_STREAM_NAME`)
* The Lambda function won't be able to send any log events to Amazon CloudWatch due to the following explicit `DENY` policy:
```
Action:
...
Resource: arn:aws:logs:*:*:*
```
* The Kinesis Data Firehose stream configured as part of this sample sends logs directly to Amazon S3 (gzip compressed).
## Build and Deploy
The AWS SAM template available in the root directory can be used to deploy the sample Lambda function with this extension.
### Prerequisites
Check out the code by running the following command:
```bash
cd aws-lambda-extensions/kinesisfirehose-logs-extension-demo
```
Run the following command from the root directory
## Testing
You can invoke the Lambda function using the [Lambda Console](https://console.aws.amazon.com/lambda/home), or the following CLI command:
```bash
aws lambda invoke \
    ...
```

The function should return `"StatusCode": 200`, with the below output:

```
...
}
```
A few minutes after the successful invocation of the Lambda function, you should see the log messages from the example extension sent to Amazon Kinesis Data Firehose, which delivers the messages to an Amazon S3 bucket.
* Log in to the AWS console.
* Navigate to the S3 bucket mentioned under the parameter `BucketName` in the SAM output.
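Since the delivered objects are gzip compressed, you can also inspect them from the terminal with the AWS CLI. This is a sketch: replace the bucket and key placeholders with values from your own deployment.

```bash
# List the delivered log objects (bucket name comes from the BucketName SAM output)
aws s3 ls "s3://<bucket-name>/" --recursive

# Download one object and decompress it (Firehose delivered it gzip compressed)
aws s3 cp "s3://<bucket-name>/<object-key>" - | gunzip
```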