* [Demo: Logs to Amazon S3 extension: container image](s3-logs-extension-demo-container-image/): Demo logs extension to receive logs directly from Lambda and send them to S3. This example packages the extension and function as separate container images. The demo is deployed using AWS SAM.
* [Demo: Logs to Kinesis Data Firehose Logs API extension](kinesisfirehose-logs-extension-demo/): A basic Logs API extension for Amazon Kinesis Data Firehose, written in Go. The extension shows an approach to streamline and centralize log collection using Amazon Kinesis Data Firehose. It runs a local HTTP listener and subscribes to a stream of function and platform logs using the Logs API. It buffers them and sends them to Amazon Kinesis Data Firehose periodically, which streams the logs to Amazon S3. The demo is deployed using AWS SAM.
# Centralize log collection with Amazon Kinesis Data Firehose using Lambda Extensions
## Introduction
This pattern walks through an approach to centralizing log collection for a Lambda function with Amazon Kinesis Data Firehose using external extensions. The provided code sample shows how to send logs directly to Kinesis Data Firehose without sending them to Amazon CloudWatch Logs.
> Note: This is a simple example extension to help you investigate an approach to centralizing log aggregation. This example code is not production ready. Use it at your own discretion after testing it thoroughly.
This sample extension:
* Subscribes to receive `platform` and `function` logs.
* Runs with a main goroutine and a helper goroutine. The main goroutine registers with the `ExtensionAPI` and processes its `invoke` and `shutdown` events. The helper goroutine:
  * starts a local HTTP server at the provided port (default 1234; the port can be overridden with the Lambda environment variable `HTTP_LOGS_LISTENER_PORT`) that receives requests from the Logs API via the `NextEvent` method call
  * puts the logs in a synchronized queue (Producer) to be processed by the main goroutine (Consumer)
* The main goroutine writes the received logs to Amazon Kinesis Data Firehose, which delivers them to Amazon S3.
## Amazon Kinesis Data Firehose
## AWS Lambda extensions

Lambda Extensions are a new way to easily integrate Lambda with your favorite monitoring, observability, security, and governance tools.
Read more about them [here](https://aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-preview/).
> Note: The code sample provided as part of this pattern uses an **external** extension to listen to log events from the Lambda function.
## The need to centralize log collection
Having a centralized log collection mechanism using Kinesis Data Firehose provides the following benefits:
* Helps collect logs from different sources in one place. Even though the provided sample sends logs from Lambda, log routers like `Fluentbit` and `Firelens` can send logs directly to Kinesis Data Firehose from container orchestrators like `EKS` and `ECS`.
* Defines and standardizes transformations before the logs get delivered to downstream systems like Amazon S3, Elasticsearch, Amazon Redshift, etc.
* Provides a secure storage area for log data before it gets written out to disk. In the event of a machine or application failure, we still have access to the logs emitted from the source machine or application.
## Architecture
### AWS Services
* AWS Lambda
* AWS Lambda extension
* Amazon Kinesis Data Firehose
* Amazon S3
### High-level architecture
Once deployed, the overall flow looks like the following:
> Note: The Firehose delivery stream name is specified as an environment variable (`AWS_KINESIS_STREAM_NAME`).
* The Lambda function won't be able to send any log events to Amazon CloudWatch Logs due to the following explicit `DENY` policy:
```yaml
Sid: CloudWatchLogsDeny
Effect: Deny
Action:
  # The exact action list is elided in this excerpt; the standard
  # CloudWatch Logs write actions are shown here as an assumption.
  - logs:CreateLogGroup
  - logs:CreateLogStream
  - logs:PutLogEvents
Resource: "*"
```

The AWS SAM template available in the root directory can be used for deploying the sample.
Check out the code by running the following command:
```bash
mkdir kinesisfirehose-logs-extension-demo && cd kinesisfirehose-logs-extension-demo
```

Run the following command to deploy the sample Lambda function with the extension:
```bash
sam deploy --guided
```

Invoke the function:

```bash
# Replace <function-name> with the deployed function's name (see the
# note below); out.json will receive the invocation response payload.
aws lambda invoke \
    --function-name <function-name> \
    --log-type Tail out.json
```
> Note: Make sure to replace `function-name` with the actual Lambda function name.
The function should return `"StatusCode": 200`, with output like the following:
```json
{
    "StatusCode": 200
}
```
A few minutes after the successful invocation of the Lambda function, we should start seeing the log messages from the example extension sent to Amazon Kinesis Data Firehose, which delivers them to an Amazon S3 bucket.
* Log in to the AWS console.
* Navigate to the S3 bucket mentioned under the parameter `BucketName` in the SAM output.
* We can see the logs successfully written to the S3 bucket, partitioned by date, in `GZIP` format.

* Navigate to the `/aws/lambda/${functionname}` log group inside Amazon CloudWatch Logs.
* We shouldn't see any logs created under this log group, as we have denied the Lambda function access to write logs.