The use of the **otlphttp** exporter is now the default method to send metrics and traces to Splunk Observability Cloud.

This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector to send traces and metrics to Splunk Observability Cloud when deploying in host monitoring (agent) mode.
The older *apm* and *signalfx* exporters you may be familiar with will be phased out over time.
The `headers:` key with its sub-key `X-SF-Token:` is the OpenTelemetry way to pass an access token.
To enable passthrough mode, we set `include_metadata:` to `true` on the `otlp` receiver in the gateway. This ensures that headers passed to the collector are forwarded along with the data through the collector's pipeline.
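Putting the two settings together, here is a minimal sketch of how they might appear in the agent and gateway configs. The endpoint, port, and token variable are illustrative placeholders, not the workshop's exact files:

```yaml
# agent.yaml (sketch): send via OTLP/HTTP and pass the token as a header
exporters:
  otlphttp:
    endpoint: http://localhost:5318            # placeholder gateway endpoint
    headers:
      X-SF-Token: ${env:SPLUNK_ACCESS_TOKEN}   # placeholder token variable
---
# gateway.yaml (sketch): keep incoming headers (metadata) available to the pipeline
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:5318                 # placeholder port
        include_metadata: true
```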
---

`content/en/ninja-workshops/10-advanced-otel/20-gateway-setup/2-test-agent-gateway.md`
Verify the gateway is running in its own shell and is ready to receive data, then start the agent:

```sh
[WORKSHOP]/otelcol --config=agent.yaml
```
The agent should start to send *cpu* metrics again, and both the agent and the gateway should reflect that in their debug output, similar to the following console snippet:
When you started the agent, it collected the above metrics and sent them along via the gateway.

The gateway should have created a file called `./gateway-metrics.out` as part of the exporting phase of the pipeline service.

Check to see if `gateway-metrics.out` has been created. It should contain at least the following metrics for `cpu1`:
{{% tabs %}}
{{% tab title="Compact JSON" %}}

```json
{"resourceMetrics":[{"resource":{"attributes":[{"key":"host.name","value":{"stringValue":"YOUR_HOST_NAME"}},{"key":"os.type","value":{"stringValue":"YOUR_OS"}},{"key":"otelcol.service.mode","value":{"stringValue":"agent"}}]},"scopeMetrics":[{"scope":{"name":"github.com/open-telemetry/opentelemetry-collector-contrib/receiver/hostmetricsreceiver/internal/scraper/cpuscraper","version":"v0.116.0"},"metrics":[{"name":"system.cpu.time","description":"Total seconds each logical CPU spent on each mode.","unit":"s","sum":{"dataPoints":[{"attributes":[{"key":"cpu","value":{"stringValue":"cpu0"}},{"key":"state","value":{"stringValue":"user"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":1168005.59},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu0"}},{"key":"state","value":{"stringValue":"system"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":504260.9},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu0"}},{"key":"state","value":{"stringValue":"idle"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":615648.75},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu0"}},{"key":"state","value":{"stringValue":"interrupt"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":0},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu1"}},{"key":"state","value":{"stringValue":"user"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":1168999.03},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu1"}},{"key":"state","value":{"stringValue":"system"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":497785.47},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu1"}},{"key":"state","value":{"stringValue":"idle"}}],"startTimeUnixNano":"1733753908000000000","timeUnixNano":"1737133726158376000","asDouble":621140.39},{"attributes":[{"key":"cpu","value":{"stringValue":"cpu1"}},{"key":"state","value":{"stringValue":"interrupt"}}]}]}}]}]}]}
```

{{% /tab %}}
{{% tab title="Formatted JSON" %}}

```json
{
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "host.name",
            "value": {
              "stringValue": "YOUR_HOST_NAME"
            }
          },
          {
            "key": "os.type",
            "value": {
              "stringValue": "YOUR_OS"
            }
          },
          {
            "key": "otelcol.service.mode",
            "value": {
              "stringValue": "agent"
            }
          }
        ]
      },
      "scopeMetrics": [
        {
          "scope": {
            "name": "github.com/open-telemetry/opentelemetry-collector-contrib/receiver/hostmetricsreceiver/internal/scraper/cpuscraper",
            "version": "v0.116.0"
          },
          "metrics": [
            {
              "name": "system.cpu.time",
              "description": "Total seconds each logical CPU spent on each mode.",
              "unit": "s",
              "sum": {
                "dataPoints": [
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu0"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "user"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 1168005.59
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu0"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "system"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 504260.9
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu0"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "idle"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 615648.75
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu0"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "interrupt"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 0
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu1"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "user"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 1168999.03
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu1"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "system"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 497785.47
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu1"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "idle"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1733753908000000000",
                    "timeUnixNano": "1737133726158376000",
                    "asDouble": 621140.39
                  },
                  {
                    "attributes": [
                      {
                        "key": "cpu",
                        "value": {
                          "stringValue": "cpu1"
                        }
                      },
                      {
                        "key": "state",
                        "value": {
                          "stringValue": "interrupt"
                        }
                      }
                    ]
                  }
                ]
              }
            }
          ]
        }
      ]
    }
  ]
}
```
{{% /tab %}}
{{% /tabs %}}
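If you want to confirm the file holds the cpu metric shown above without reading the raw JSON, a quick check is the following (this assumes `jq` is installed; the path and metric name come from the output above):

```sh
# Count how many system.cpu.time metrics the gateway wrote to gateway-metrics.out
jq '[.resourceMetrics[].scopeMetrics[].metrics[] | select(.name=="system.cpu.time")] | length' gateway-metrics.out
```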
Next, make sure both the gateway and the agent are running in their own shells, then in a third shell run the `curl` command to send a trace:

```sh
curl -X POST -i http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d @trace.json
```

Now the gateway should have generated a new file called `./gateway-traces.out`, and it should contain the following trace:
{{% tabs %}}
{{% tab title="Compact JSON" %}}
```json
{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"my.service"}},{"key":"deployment.environment","value":{"stringValue":"my.environment"}},{"key":"host.name","value":{"stringValue":"YOUR_HOST_NAME"}},{"key":"os.type","value":{"stringValue":"YOUR_OS"}},{"key":"otelcol.service.mode","value":{"stringValue":"agent"}}]},"scopeSpans":[{"scope":{"name":"my.library","version":"1.0.0","attributes":[{"key":"my.scope.attribute","value":{"stringValue":"some scope attribute"}}]},"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","parentSpanId":"eee19b7ec3c1b173","name":"I'm a server span","kind":2,"startTimeUnixNano":"1544712660000000000","endTimeUnixNano":"1544712661000000000","attributes":[{"value":{"stringValue":"some value"}}],"status":{}}]}],"schemaUrl":"https://opentelemetry.io/schemas/1.6.1"}]}
```

{{% /tab %}}
{{% /tabs %}}

Note that both `./gateway-metrics.out` and `./gateway-traces.out` contain a resource attribute with the key `"otelcol.service.mode"` and a value of `"agent"`.
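You can verify this from the command line with a sketch like the following (it assumes `jq` is installed and both output files exist in the current directory):

```sh
# Print the otelcol.service.mode resource attribute from each gateway output file
for f in gateway-metrics.out gateway-traces.out; do
  jq -r '.. | objects | select(.key? == "otelcol.service.mode") | .value.stringValue' "$f"
done
```

Each file should print `agent`.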