**content/en/ninja-workshops/6-lambda-kinesis/2-auto-instrumentation.md**
The first part of our workshop will demonstrate how auto-instrumentation with OpenTelemetry allows the OpenTelemetry Collector to auto-detect what language your function is written in, and start capturing traces for those applications.
### The Auto-Instrumentation Workshop Directory & Contents
First, let us take a look at the `o11y-lambda-workshop/auto` directory, and some of its files. This is where all the content for the auto-instrumentation portion of our workshop resides.
#### The `auto` Directory
14
+
13
15
- Run the following command to get into the `o11y-lambda-workshop/auto` directory:
```bash
cd ~/o11y-lambda-workshop/auto
```
- Inspect the contents of this directory:
```bash
ls
```
- Can you identify which AWS resources are being created by this template?

- Can you identify where OpenTelemetry instrumentation is being set up?
  - _Hint: study the lambda function definitions_

- Can you determine which instrumentation information is being provided by the environment variables we set earlier?

{{% /notice %}}
You should see a section where the environment variables for each lambda function are being set:
```bash
environment {
  variables = {
    # ... variable definitions elided ...
  }
}
```
By using these environment variables, we are configuring our auto-instrumentation in a few ways:
- We are setting environment variables to inform the OpenTelemetry collector of which Splunk Observability Cloud organization we would like to have our data exported to.
```bash
SPLUNK_ACCESS_TOKEN = var.o11y_access_token
SPLUNK_REALM = var.o11y_realm
```
- We are also setting variables that help OpenTelemetry identify our function/service, as well as the environment/application it is a part of.
```bash
OTEL_SERVICE_NAME = "producer-lambda" # consumer-lambda in the case of the consumer function
```
- We are setting an environment variable that lets OpenTelemetry know what wrappers it needs to apply to our function's handler so as to capture trace data automatically, based on our code language.
- In the case of the `producer-lambda` function, we are setting an environment variable to let the function know what Kinesis Stream to put our record to.
- These values are sourced from the environment variables we set in the Prerequisites section, as well as resources that will be deployed as a part of this Terraform configuration file.
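Taken together, each function's runtime environment might look like the following sketch (Python used purely for illustration; the `AWS_LAMBDA_EXEC_WRAPPER` key, its wrapper path, and the `KINESIS_STREAM` variable name are assumptions based on the description above — check `main.tf` for the exact keys):

```python
# Illustrative only: the kind of variable map the Terraform file wires into
# each function, as the running Lambda would observe it.
producer_env = {
    "SPLUNK_ACCESS_TOKEN": "<o11y_access_token>",  # export target: your org
    "SPLUNK_REALM": "<o11y_realm>",
    "OTEL_SERVICE_NAME": "producer-lambda",        # service identity in APM
    "OTEL_RESOURCE_ATTRIBUTES": "deployment.environment=<your-environment>",
    # Assumed names below -- verify against main.tf:
    "AWS_LAMBDA_EXEC_WRAPPER": "/opt/nodejs-otel-handler",  # handler wrapper
    "KINESIS_STREAM": "<hostname>-eventStream",    # producer's target stream
}

def service_name(env: dict) -> str:
    # The OpenTelemetry SDK reads OTEL_SERVICE_NAME to label spans.
    return env.get("OTEL_SERVICE_NAME", "unknown_service")

print(service_name(producer_env))  # → producer-lambda
```

The point of the sketch: everything the auto-instrumentation needs is injected as plain environment variables, so no code changes are required in the functions themselves.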
You should also see an argument for setting the Splunk OpenTelemetry Lambda layer on each function:
```bash
layers = var.otel_lambda_layer
```
- The OpenTelemetry Lambda layer is a package that contains the libraries and dependencies necessary to collect, process, and export telemetry data for Lambda functions at the moment of invocation.
- While there is a general OTel Lambda layer that has all the libraries and dependencies for all OpenTelemetry-supported languages, there are also language-specific Lambda layers, to help make your function even more lightweight.
- _You can see the relevant Splunk OpenTelemetry Lambda layer ARNs (Amazon Resource Names) and latest versions for each AWS region [HERE](https://github.com/signalfx/lambda-layer-versions/blob/main/splunk-apm/splunk-apm.md)_
#### The `producer.mjs` file
Next, let's take a look at the `producer-lambda` function code:
- Run the following command to view the contents of the `producer.mjs` file:
- Essentially, this function receives a message, and puts that message as a record to the targeted Kinesis Stream.
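The core of that logic can be sketched as follows (the workshop's actual handler is Node.js in `producer.mjs`; this Python sketch, the stream name, and the partition key are illustrative assumptions, not the real implementation):

```python
import json

def build_record(message: str, stream_name: str) -> dict:
    """Hypothetical sketch of what producer.mjs does: the incoming
    message becomes the record's Data blob, keyed by a partition key."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps({"message": message}).encode("utf-8"),
        "PartitionKey": "workshop",  # assumed partition key
    }

# In the real function, arguments like these would be passed to the
# Kinesis client's PutRecord call.
record = build_record("hello", "hostname-eventStream")
print(record["StreamName"])  # → hostname-eventStream
```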
### Deploying the Lambda Functions & Generating Trace Data
Now that we are familiar with the contents of our `auto` directory, we can deploy the resources for our workshop, and generate some trace data from our Lambda functions.
#### Initialize Terraform in the `auto` directory
In order to deploy the resources defined in the `main.tf` file, you first need to make sure that Terraform is initialized in the same folder as that file.
- Ensure you are in the `auto` directory:
```bash
pwd
```

- _The expected output would be **~/o11y-lambda-workshop/auto**_
- If you are not in the `auto` directory, run the following command:
```bash
cd ~/o11y-lambda-workshop/auto
```
- Run the following command to initialize Terraform in this directory:
```bash
terraform init
```
- This command will create a number of elements in the same folder:
- `.terraform.lock.hcl` file: to record the providers it will use to provide resources
- `.terraform` directory: to store the provider configurations
- This enables Terraform to manage the creation, state and destruction of resources, as defined within the `main.tf` file of the `auto` directory
#### Deploy the Lambda functions and other AWS resources
Once we've initialized Terraform in this directory, we can go ahead and deploy our resources.
- Run the Terraform command to have the Lambda functions and other supporting resources deployed from the `main.tf` file:
```bash
terraform apply
```
- Respond `yes` when you see the `Enter a value:` prompt.
- This will result in the following outputs:
```bash
Outputs:
...
```
#### Send some traffic to the `producer-lambda` endpoint (`base_url`)
To start getting some traces from our deployed Lambda functions, we need to generate some traffic. We will send a message to our `producer-lambda` function's endpoint, which should be put as a record into our Kinesis Stream, and then pulled from the Stream by the `consumer-lambda` function.
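The flow described above can be simulated with an in-memory stand-in for the Kinesis Stream (purely illustrative — the real flow goes through AWS, and the record shape here is assumed):

```python
# Illustrative simulation of the workshop's data flow: the producer puts a
# record on a stream; the consumer pulls records off it. A plain list
# stands in for the Kinesis Stream.
stream: list = []

def producer(message: str) -> None:
    # Mirrors producer-lambda: place the message on the stream as a record.
    stream.append({"Data": message})

def consumer() -> list:
    # Mirrors consumer-lambda: read all pending records off the stream.
    records = [r["Data"] for r in stream]
    stream.clear()
    return records

producer("hello from the workshop")
print(consumer())  # → ['hello from the workshop']
```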
- Ensure you are in the `auto` directory:
```bash
pwd
```

- _The expected output would be **~/o11y-lambda-workshop/auto**_
- If you are not in the `auto` directory, run the following command:
```bash
cd ~/o11y-lambda-workshop/auto
```
The `send_message.py` script is a Python script that will take input at the command line, add it to a JSON dictionary, and send it to your `producer-lambda` function's endpoint repeatedly, as part of a while loop.
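A minimal sketch of what such a script might look like (the `--name` and `--superpower` argument names come from the workshop; everything else — the payload shape, the loop, and the POST mechanics — is an assumption for illustration):

```python
import argparse
import json

def parse_args(argv: list) -> argparse.Namespace:
    # The real script requires these two arguments.
    parser = argparse.ArgumentParser()
    parser.add_argument("--name", required=True)
    parser.add_argument("--superpower", required=True)
    return parser.parse_args(argv)

def build_payload(name: str, superpower: str) -> bytes:
    # Combine the command-line inputs into the JSON body sent to base_url.
    return json.dumps({"name": name, "superpower": superpower}).encode("utf-8")

args = parse_args(["--name", "Buttercup", "--superpower", "memory"])
payload = build_payload(args.name, args.superpower)
print(payload)  # → b'{"name": "Buttercup", "superpower": "memory"}'
# The real script would then POST this payload to the base_url endpoint
# repeatedly inside a while loop.
```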
- Run the `send_message.py` script as a background process
- _It requires the `--name` and `--superpower` arguments_
- You should see an output similar to the following if your message is successful:
```bash
[1] 79829
user@host auto % appending output to nohup.out
```
- _The two most important bits of information here are:_
- _The process ID on the first line (`79829` in the case of my example), and_
- _The `appending output to nohup.out` message_
- _The `nohup` command ensures the script will not hang up when sent to the background. It also captures any output from our command in a `nohup.out` file in the folder you're currently in._
- _The `&` tells our shell process to run this process in the background, thus freeing our shell to run other commands._
- Next, check the contents of the `nohup.out` file, to ensure your output confirms your requests to your `producer-lambda` endpoint are successful:
```bash
cat nohup.out
```
- You should see the following output among the lines printed to your screen if your message is successful:
```bash
{"message": "Message placed in the Event Stream: hostname-eventStream"}
```
- If unsuccessful, you will see:
```bash
{"message": "Internal server error"}
```
> [!IMPORTANT]
> If this occurs, ask one of the workshop facilitators for assistance.
#### View the Lambda Function Logs
Next, let's take a look at the logs for our Lambda functions.
- Run the following script to view your `producer-lambda` logs:
```bash
./get_logs.py --function producer
```
- Hit `[CONTROL-C]` to stop the live stream after some log events show up
- Run the following to view your `consumer-lambda` logs:
```bash
./get_logs.py --function consumer
```
- Hit `[CONTROL-C]` to stop the live stream after some log events show up
---

**content/en/ninja-workshops/6-lambda-kinesis/3-lambdas-in-splunk.md**
The Lambda functions should be generating a sizeable amount of trace data, which we would need to take a look at. Through the combination of environment variables and the OpenTelemetry Lambda layer configured in the resource definition for our Lambda functions, we should now be ready to view our functions and traces in Splunk APM.
#### View your Environment name in the Splunk APM Overview
Let's start by making sure that Splunk APM is aware of our `Environment` from the trace data it is receiving. This is the `deployment.environment` value we set as part of the `OTEL_RESOURCE_ATTRIBUTES` variable on our Lambda function definitions in `main.tf`.
In Splunk Observability Cloud:
- Click on the `APM` Button from the Main Menu on the left. This will take you to the Splunk APM Overview.
- Select your APM Environment from the `Environment:` dropdown.
Once you've selected your Environment name from the `Environment:` dropdown, you can take a look at the Service Map for your Lambda functions.
- Click the `Service Map` Button on the right side of the APM Overview page. This will take you to your Service Map view.
Not yet, at least...
Let's see how we work around this in the next section of this workshop. But before that, let's clean up after ourselves!
### Clean Up
The resources we deployed as part of this auto-instrumentation exercise need to be cleaned up. Likewise, the script that was generating traffic against our `producer-lambda` endpoint needs to be stopped, if it's still running. Follow the steps below to clean up.
#### Kill the `send_message.py` script
- If the `send_message.py` script is still running, stop it with the following commands:
```bash
fg
```
- This brings your background process to the foreground.
- Next you can hit `[CONTROL-C]` to kill the process.
#### Destroy all AWS resources
Terraform is great at managing the state of our resources individually, and as a deployment. It can even update deployed resources with any changes to their definitions. But to start afresh, we will destroy the resources and redeploy them as part of the manual instrumentation portion of this workshop.
Please follow these steps to destroy your resources:
- Ensure you are in the `auto` directory:
```bash
pwd
```

- _The expected output would be **~/o11y-lambda-workshop/auto**_
- If you are not in the `auto` directory, run the following command:
```bash
cd ~/o11y-lambda-workshop/auto
```
- Destroy the Lambda functions and other AWS resources you deployed earlier:
```bash
terraform destroy
```
- Respond `yes` when you see the `Enter a value:` prompt.
- This will result in the resources being destroyed, leaving you with a clean environment.