
Commit 35d9f01

Incorporating code review comments
1 parent 54b52af commit 35d9f01

1 file changed

kinesisfirehose-logs-extension-demo/README.md

Lines changed: 16 additions & 17 deletions
@@ -2,7 +2,7 @@

## Introduction

-This pattern walks through an approach to centralize log collection for Lambda function with Kinesis firehose using external extensions. The provided code sample shows how to get send logs directly to Kinesis firehose without sending them to AWS CloudWatch service.
+This example show how to centralize log collection for a Lambda function using Kinesis Data Firehose. The provided code sample uses Lambda extensions to receive logs from Lambda and send them directly to Kinesis Data Firehose without sending them to Amazon CloudWatch service.

> Note: This is a simple example extension to help you investigate an approach to centralize the log aggregation. This example code is not production ready. Use it with your own discretion after testing thoroughly.
@@ -12,13 +12,13 @@ This sample extension:
* Runs with a main, and a helper goroutine: The main goroutine registers to `ExtensionAPI` and process its `invoke` and `shutdown` events. The helper goroutine:
  * starts a local HTTP server at the provided port (default 1234, the port can be overridden with Lambda environment variable `HTTP_LOGS_LISTENER_PORT` ) that receives requests from Logs API with `NextEvent` method call
  * puts the logs in a synchronized queue (Producer) to be processed by the main goroutine (Consumer)
-* The main goroutine writes the received logs to Amazon Kinesis firehose, which gets stored in Amazon S3
+* The main goroutine writes the received logs to Amazon Kinesis Data Firehose, which gets stored in Amazon S3

-## Amazon Kinesis Data firehose
+## Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk, read more about it [here](https://aws.amazon.com/kinesis/data-firehose)

-> Note: The code sample provided part of this pattern delivers logs from Kinesis firehose to Amazon S3
+> Note: The code sample provided part of this pattern delivers logs from Kinesis Data Firehose to Amazon S3

## Lambda extensions
@@ -35,18 +35,18 @@ read more about it [here](https://aws.amazon.com/blogs/compute/introducing-aws-l

## Need to centralize log collection

-Having a centralized log collecting mechanism using Kinesis firehose provides the following benefits:
+Having a centralized log collecting mechanism using Kinesis Data Firehose provides the following benefits:

-* Helps to collect logs from different sources in one place. Even though the sample provided sends logs from Lambda, log routers like `Fluentbit` and `Firelens` can send logs directly to Kinesis Data firehose from container orchestrators like `EKS` and `ECS`.
+* Helps to collect logs from different sources in one place. Even though the sample provided sends logs from Lambda, log routers like `Fluentbit` and `Firelens` can send logs directly to Kinesis Data Firehose from container orchestrators like ["Amazon Elastic Kubernetes Service (EKS)"](https://aws.amazon.com/eks) and ["Amazon Elastic Container Service (ECS)"](https://aws.amazon.com/ecs)
* Define and standardize the transformations before the log gets delivered to downstream systems like S3, elastic search, redshift, etc
* Provides a secure storage area for log data, before it gets written out to the disk. In the event of machine/application failure, we still have access to the logs emitted from the source machine/application

## Architecture

### AWS Services

-* Amazon Lambda
-* Amazon Lambda extension
+* AWS Lambda
+* AWS Lambda extension
* Amazon Kinesis Data Firehose
* Amazon S3

@@ -62,7 +62,7 @@ Once deployed the overall flow looks like below:
* A local HTTP server is started inside the external extension which receives the logs.
* The extension also takes care of buffering the recieved log events in a synchronized queue and writing it to AWS Kinesis Firehose via direct `PUT` records

-> Note: Firehose stream name gets specified as an environment variable (`AWS_KINESIS_STREAM_NAME`)
+> Note: Kinesis Data Firehose stream name gets specified as an environment variable (`AWS_KINESIS_STREAM_NAME`)

* The Lambda function won't be able to send any logs events to Amazon CloudWatch service due to the following explicit `DENY` policy:
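
(The `DENY` policy referenced in the last bullet continues in the next hunk.) As a rough Go sketch of the direct `PUT` path described in the second bullet above (assuming the AWS SDK for Go v2; the `flushToFirehose` helper and its batching are hypothetical, not the sample's code), the consumer side can batch queued events into a `PutRecordBatch` call against the stream named by `AWS_KINESIS_STREAM_NAME`:

```go
// Sketch only: drains queued log batches and writes them to the Firehose
// delivery stream named in AWS_KINESIS_STREAM_NAME. Assumes AWS SDK for Go v2;
// the real extension's batching and retry logic may differ.
package main

import (
	"context"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/firehose"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

func flushToFirehose(ctx context.Context, batches [][]byte) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	client := firehose.NewFromConfig(cfg)

	records := make([]types.Record, 0, len(batches))
	for _, b := range batches {
		records = append(records, types.Record{Data: b})
	}

	// Direct PUT into the delivery stream; Firehose then delivers the data
	// to Amazon S3 (gzip compressed) as configured by the SAM template.
	_, err = client.PutRecordBatch(ctx, &firehose.PutRecordBatchInput{
		DeliveryStreamName: aws.String(os.Getenv("AWS_KINESIS_STREAM_NAME")),
		Records:            records,
	})
	return err
}

func main() {
	if err := flushToFirehose(context.Background(), [][]byte{[]byte(`{"demo":"log line"}`)}); err != nil {
		log.Fatal(err)
	}
}
```

`PutRecordBatch` accepts up to 500 records per call, so a real flush loop would chunk the queue accordingly.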

@@ -76,11 +76,11 @@ Action:
Resource: arn:aws:logs:*:*:*
```

-* The Kinesis Firehose stream configured part of this sample sends log directly to `AWS S3` (gzip compressed).
+* The Kinesis Data Firehose stream configured part of this sample sends log directly to `AWS S3` (gzip compressed).

## Build and Deploy

-AWS SAM template available part of the root directory can be used for deploying the sample lambda function with this extension
+AWS SAM template available part of the root directory can be used for deploying the sample Lambda function with this extension

### Pre-requistes
@@ -91,9 +91,8 @@ AWS SAM template available part of the root directory can be used for deploying
Check out the code by running the following command:

```bash
-mkdir aws-lambda-extensions && cd aws-lambda-extensions
-git clone https://github.com/aws-samples/aws-lambda-extensions.git .
-cd kinesisfirehose-logs-extension-demo
+git clone https://github.com/aws-samples/aws-lambda-extensions.git
+cd aws-lambda-extensions/kinesisfirehose-logs-extension-demo
```

Run the following command from the root directory
@@ -174,7 +173,7 @@ Value arn:aws:lambda:us-east-1:xxx:function:kinesisfirehose-logs-e

## Testing

-You can invoke the Lambda function using the following CLI command
+You can invoke the Lambda function using the [Lambda Console](https://console.aws.amazon.com/lambda/home), or the following CLI command

```bash
aws lambda invoke \
@@ -196,7 +195,7 @@ The function should return ```"StatusCode": 200```, with the below output
}
```

-In a few minutes after the successful invocation of the Lambda function, we should start seeing the log messages from the example extension sent to Amazon Data Firehose which sends the messages to a Amazon S3 bucket.
+In a few minutes after the successful invocation of the Lambda function, you should see the log messages from the example extension sent to Amazon Kinesis Data Firehose which sends the messages to a Amazon S3 bucket.

* Login to AWS console:
* Navigate to the S3 bucket mentioned under the parameter `BucketName` in the SAM output.
@@ -223,4 +222,4 @@ aws cloudformation delete-stack --stack-name sam-app

## Conclusion

-This extension provides an approach to streamline and centralize log collection using Kinesis firehose.
+This extension provides an approach to streamline and centralize log collection using Kinesis Data Firehose.
