
Commit 7a22db5

Markdown linting
1 parent f66b30d commit 7a22db5

5 files changed (+155, -30 lines changed)

content/en/ninja-workshops/6-lambda-kinesis/2-auto-instrumentation.md

Lines changed: 62 additions & 18 deletions
@@ -7,40 +7,49 @@ weight: 2
The first part of our workshop will demonstrate how auto-instrumentation with OpenTelemetry allows the OpenTelemetry Collector to auto-detect what language your function is written in, and start capturing traces for those applications.

### The Auto-Instrumentation Workshop Directory & Contents

First, let us take a look at the `o11y-lambda-workshop/auto` directory, and some of its files. This is where all the content for the auto-instrumentation portion of our workshop resides.

#### The `auto` Directory

- Run the following command to get into the `o11y-lambda-workshop/auto` directory:

```bash
cd ~/o11y-lambda-workshop/auto
```

- Inspect the contents of this directory:

```bash
ls
```

- _The output should include the following files and directories:_

```bash
get_logs.py main.tf send_message.py
handler outputs.tf terraform.tf
```

#### The `main.tf` file

- Take a closer look at the `main.tf` file:

```bash
cat main.tf
```

{{% notice title="Workshop Questions" style="tip" icon="question" %}}

- Can you identify which AWS resources are being created by this template?
- Can you identify where OpenTelemetry instrumentation is being set up?
  - _Hint: study the lambda function definitions_
- Can you determine which instrumentation information is being provided by the environment variables we set earlier?

{{% /notice %}}

You should see a section where the environment variables for each lambda function are being set.

```bash
environment {
  variables = {
@@ -55,44 +64,53 @@ You should see a section where the environment variables for each lambda functio
```

By using these environment variables, we are configuring our auto-instrumentation in a few ways:

- We are setting environment variables to inform the OpenTelemetry collector which Splunk Observability Cloud organization we would like our data exported to.

```bash
SPLUNK_ACCESS_TOKEN = var.o11y_access_token
SPLUNK_REALM = var.o11y_realm
```

- We are also setting variables that help OpenTelemetry identify our function/service, as well as the environment/application it is a part of.

```bash
OTEL_SERVICE_NAME = "producer-lambda" # consumer-lambda in the case of the consumer function
OTEL_RESOURCE_ATTRIBUTES = "deployment.environment=${var.prefix}-lambda-shop"
```

- We are setting an environment variable that tells OpenTelemetry which wrapper to apply to our function's handler, based on our code language, so that trace data is captured automatically.

```bash
AWS_LAMBDA_EXEC_WRAPPER = "/opt/nodejs-otel-handler"
```

- In the case of the `producer-lambda` function, we are setting an environment variable to let the function know which Kinesis Stream to put our record to.

```bash
KINESIS_STREAM = aws_kinesis_stream.lambda_streamer.name
```

- These values are sourced from the environment variables we set in the Prerequisites section, as well as resources that will be deployed as a part of this Terraform configuration file.

You should also see an argument for setting the Splunk OpenTelemetry Lambda layer on each function:

```bash
layers = var.otel_lambda_layer
```

- The OpenTelemetry Lambda layer is a package that contains the libraries and dependencies necessary to collect, process and export telemetry data for Lambda functions at the moment of invocation.

- While there is a general OTel Lambda layer that has all the libraries and dependencies for all OpenTelemetry-supported languages, there are also language-specific Lambda layers, to help make your function even more lightweight.

- _You can see the relevant Splunk OpenTelemetry Lambda layer ARNs (Amazon Resource Names) and latest versions for each AWS region [HERE](https://github.com/signalfx/lambda-layer-versions/blob/main/splunk-apm/splunk-apm.md)_

#### The `producer.mjs` file

Next, let's take a look at the `producer-lambda` function code:

- Run the following command to view the contents of the `producer.mjs` file:

```bash
cat ~/o11y-lambda-workshop/auto/handler/producer.mjs
```
@@ -101,26 +119,33 @@ Next, let's take a look at the `producer-lambda` function code:
- Essentially, this function receives a message and puts that message as a record to the targeted Kinesis Stream (see the sketch below).

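For illustration, here is a minimal sketch of that logic in Python. The workshop's actual handler is the Node.js `producer.mjs` file you just viewed, so the `boto3` client, the `lambda_handler` name, and the partition key below are assumptions made for this sketch, not the workshop code:

```python
# Hypothetical Python equivalent of what producer.mjs does (the real handler is Node.js).
import json
import os

import boto3

kinesis = boto3.client("kinesis")


def lambda_handler(event, context):
    # The incoming event body carries the caller's message as JSON.
    message = json.loads(event.get("body") or "{}")

    # KINESIS_STREAM is the environment variable set on the function in main.tf.
    stream_name = os.environ["KINESIS_STREAM"]

    # Put the message onto the Kinesis Stream as a single record.
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(message).encode("utf-8"),
        PartitionKey="workshop",  # assumed partition key, for illustration only
    )

    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Message placed in the Event Stream: {stream_name}"}),
    }
```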

### Deploying the Lambda Functions & Generating Trace Data

Now that we are familiar with the contents of our `auto` directory, we can deploy the resources for our workshop, and generate some trace data from our Lambda functions.

#### Initialize Terraform in the `auto` directory

In order to deploy the resources defined in the `main.tf` file, you first need to make sure that Terraform is initialized in the same folder as that file.

- Ensure you are in the `auto` directory:

```bash
pwd
```

- _The expected output would be **~/o11y-lambda-workshop/auto**_

- If you are not in the `auto` directory, run the following command:

```bash
cd ~/o11y-lambda-workshop/auto
```

- Run the following command to initialize Terraform in this directory:

```bash
terraform init
```

- This command will create a number of elements in the same folder:
  - `.terraform.lock.hcl` file: to record the providers it will use to provide resources
  - `.terraform` directory: to store the provider configurations
@@ -130,15 +155,19 @@ In order to deploy the resources defined in the `main.tf` file, you first need t
- This enables Terraform to manage the creation, state and destruction of resources, as defined within the `main.tf` file of the `auto` directory.

#### Deploy the Lambda functions and other AWS resources

Once we've initialized Terraform in this directory, we can go ahead and deploy our resources.

- Run the Terraform command to have the Lambda functions and other supporting resources deployed from the `main.tf` file:

```bash
terraform apply
```

- Respond `yes` when you see the `Enter a value:` prompt.

- This will result in the following outputs:

```bash
Outputs:
@@ -152,75 +181,90 @@ Once we've initialized Terraform in this directory, we can go ahead and deploy o
```

#### Send some traffic to the `producer-lambda` endpoint (`base_url`)

To start getting some traces from our deployed Lambda functions, we need to generate some traffic. We will send a message to our `producer-lambda` function's endpoint, which should be put as a record into our Kinesis Stream, and then pulled from the Stream by the `consumer-lambda` function.
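For reference, the sketch below shows roughly what the `consumer-lambda` side of that flow sees when Kinesis invokes it: a batch of base64-encoded records. The actual consumer in this workshop is a Node.js function, so this Python handler and its `lambda_handler` name are assumptions for illustration only:

```python
# Hypothetical Python sketch of a Kinesis-triggered consumer (the workshop's consumer-lambda is Node.js).
import base64
import json


def lambda_handler(event, context):
    # Kinesis delivers a batch of records; each record's payload is base64-encoded.
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        message = json.loads(payload)

        # Printing the decoded message makes it visible in the consumer-lambda
        # CloudWatch logs we inspect later in this section.
        print(f"Consumed record: {message}")
```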

- Ensure you are in the `auto` directory:

```bash
pwd
```

- _The expected output would be **~/o11y-lambda-workshop/auto**_

- If you are not in the `auto` directory, run the following command:

```bash
cd ~/o11y-lambda-workshop/auto
```

The `send_message.py` script is a Python script that takes input at the command line, adds it to a JSON dictionary, and sends it to your `producer-lambda` function's endpoint repeatedly, as part of a while loop.
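A rough sketch of what such a script might look like is shown below. It is an assumption based on the description above, not the actual `send_message.py` from the workshop repository; in particular, the `base_url` lookup via `terraform output` and the use of the `requests` library are illustrative choices:

```python
#!/usr/bin/env python3
# Illustrative sketch of a send_message.py-style traffic generator (not the workshop's actual script).
import argparse
import subprocess
import time

import requests


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--name", required=True)
    parser.add_argument("--superpower", required=True)
    args = parser.parse_args()

    # Assumed: read the producer-lambda endpoint from the Terraform output named base_url.
    base_url = subprocess.check_output(
        ["terraform", "output", "-raw", "base_url"], text=True
    ).strip()

    # Build the JSON dictionary from the command-line input.
    message = {"name": args.name, "superpower": args.superpower}

    # Send the message to the producer-lambda endpoint repeatedly, in a while loop.
    while True:
        response = requests.post(base_url, json=message)
        print(response.text)
        time.sleep(1)


if __name__ == "__main__":
    main()
```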

- Run the `send_message.py` script as a background process
  - _It requires the `--name` and `--superpower` arguments_

```bash
nohup ./send_message.py --name CHANGEME --superpower CHANGEME &
```

- You should see an output similar to the following if your message is successful:

```bash
[1] 79829
user@host manual % appending output to nohup.out
```

- _The two most important bits of information here are:_
  - _The process ID on the first line (`79829` in the case of my example), and_
  - _The `appending output to nohup.out` message_
- _The `nohup` command ensures the script will not hang up when sent to the background. It also captures any output from our command in a `nohup.out` file in the folder you ran it from._
- _The `&` tells our shell to run this process in the background, thus freeing our shell to run other commands._

- Next, check the contents of the `nohup.out` file to confirm that your requests to your `producer-lambda` endpoint are successful:

```bash
cat nohup.out
```

- You should see the following output among the lines printed to your screen if your message is successful:

```bash
{"message": "Message placed in the Event Stream: hostname-eventStream"}
```

- If unsuccessful, you will see:

```bash
{"message": "Internal server error"}
```

> [!IMPORTANT]
> If this occurs, ask one of the workshop facilitators for assistance.

#### View the Lambda Function Logs

Next, let's take a look at the logs for our Lambda functions.

- Run the following script to view your `producer-lambda` logs:

```bash
./get_logs.py --function producer
```

- Hit `[CONTROL-C]` to stop the live stream after some log events show up

- Run the following to view your `consumer-lambda` logs:

```bash
./get_logs.py --function consumer
```

- Hit `[CONTROL-C]` to stop the live stream after some log events show up
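The `get_logs.py` helper itself is not reproduced in this section. Purely as a sketch of how such a log tailer could work — assuming `boto3`, the default `/aws/lambda/<function>` log group naming, and that `--function producer` maps to the `producer-lambda` function — it might look something like this:

```python
#!/usr/bin/env python3
# Illustrative sketch of a get_logs.py-style log tailer (not the workshop's actual script).
import argparse
import time

import boto3

logs = boto3.client("logs")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--function", choices=["producer", "consumer"], required=True)
    args = parser.parse_args()

    # Assumed: Lambda log groups follow the default /aws/lambda/<function-name> convention.
    log_group = f"/aws/lambda/{args.function}-lambda"

    start_time = int(time.time() * 1000)
    while True:  # stream until the user hits CONTROL-C
        resp = logs.filter_log_events(logGroupName=log_group, startTime=start_time)
        for event in resp.get("events", []):
            print(event["message"].rstrip())
            start_time = max(start_time, event["timestamp"] + 1)
        time.sleep(2)


if __name__ == "__main__":
    main()
```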
Examine the logs carefully.

{{% notice title="Workshop Question" style="tip" icon="question" %}}

- Do you see OpenTelemetry being loaded? Look out for the lines with `splunk-extension-wrapper`

{{% /notice %}}

content/en/ninja-workshops/6-lambda-kinesis/3-lambdas-in-splunk.md

Lines changed: 14 additions & 0 deletions
@@ -7,9 +7,11 @@ weight: 3
The Lambda functions should be generating a sizeable amount of trace data, which we will now take a look at. Through the combination of environment variables and the OpenTelemetry Lambda layer configured in the resource definition for our Lambda functions, we should now be ready to view our functions and traces in Splunk APM.

#### View your Environment name in the Splunk APM Overview

Let's start by making sure that Splunk APM is aware of our `Environment` from the trace data it is receiving. This is the `deployment.environment` value we set as part of the `OTEL_RESOURCE_ATTRIBUTES` variable on our Lambda function definitions in `main.tf`.

In Splunk Observability Cloud:

- Click on the `APM` button from the Main Menu on the left. This will take you to the Splunk APM Overview.

- Select your APM Environment from the `Environment:` dropdown.
@@ -21,6 +23,7 @@ In Splunk Observability Cloud:
![Splunk APM, Environment Name](../images/02-Auto-APM-EnvironmentName.png)

#### View your Environment's Service Map

Once you've selected your Environment name from the Environment dropdown, you can take a look at the Service Map for your Lambda functions.

- Click the `Service Map` button on the right side of the APM Overview page. This will take you to your Service Map view.
@@ -58,35 +61,46 @@ Not yet, at least...
Let's see how we work around this in the next section of this workshop. But before that, let's clean up after ourselves!

### Clean Up

The resources we deployed as part of this auto-instrumentation exercise need to be cleaned up. Likewise, the script that was generating traffic against our `producer-lambda` endpoint needs to be stopped, if it's still running. Follow the steps below to clean up.

#### Kill the `send_message` script

- If the `send_message.py` script is still running, stop it with the following commands:

```bash
fg
```

- This brings your background process to the foreground.
- Next, hit `[CONTROL-C]` to kill the process.

#### Destroy all AWS resources

Terraform is great at managing the state of our resources individually, and as a deployment. It can even update deployed resources with any changes to their definitions. But to start afresh, we will destroy the resources and redeploy them as part of the manual instrumentation portion of this workshop.

Please follow these steps to destroy your resources:

- Ensure you are in the `auto` directory:

```bash
pwd
```

- _The expected output would be **~/o11y-lambda-workshop/auto**_

- If you are not in the `auto` directory, run the following command:

```bash
cd ~/o11y-lambda-workshop/auto
```

- Destroy the Lambda functions and other AWS resources you deployed earlier:

```bash
terraform destroy
```

- Respond `yes` when you see the `Enter a value:` prompt.
- This will result in the resources being destroyed, leaving you with a clean environment.