generated/attributes/all.md (+12 −6 lines changed)
@@ -4,7 +4,7 @@
 
 This page lists all available attributes across all categories.
 
-Total attributes: 415
+Total attributes: 421
 
 ## Stable Attributes
 
@@ -81,13 +81,13 @@ Total attributes: 415
 |[`gen_ai.cost.output_tokens`](./gen_ai.md#gen_aicostoutput_tokens)| The cost of tokens used for creating the AI output in USD (without reasoning tokens). |
 |[`gen_ai.cost.total_tokens`](./gen_ai.md#gen_aicosttotal_tokens)| The total cost for the tokens used. |
 |[`gen_ai.embeddings.input`](./gen_ai.md#gen_aiembeddingsinput)| The input to the embeddings model. |
+|[`gen_ai.input.messages`](./gen_ai.md#gen_aiinputmessages)| The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`. |
 |[`gen_ai.operation.name`](./gen_ai.md#gen_aioperationname)| The name of the operation being performed. |
 |[`gen_ai.operation.type`](./gen_ai.md#gen_aioperationtype)| The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier. |
+|[`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages)| The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls. |
 |[`gen_ai.pipeline.name`](./gen_ai.md#gen_aipipelinename)| Name of the AI pipeline or chain being executed. |
-|[`gen_ai.request.available_tools`](./gen_ai.md#gen_airequestavailable_tools)| The available tools for the model. It has to be a stringified version of an array of objects. |
 |[`gen_ai.request.frequency_penalty`](./gen_ai.md#gen_airequestfrequency_penalty)| Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. |
 |[`gen_ai.request.max_tokens`](./gen_ai.md#gen_airequestmax_tokens)| The maximum number of tokens to generate in the response. |
-|[`gen_ai.request.messages`](./gen_ai.md#gen_airequestmessages)| The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`. |
 |[`gen_ai.request.model`](./gen_ai.md#gen_airequestmodel)| The model identifier being used for the request. |
 |[`gen_ai.request.presence_penalty`](./gen_ai.md#gen_airequestpresence_penalty)| Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. |
 |[`gen_ai.request.seed`](./gen_ai.md#gen_airequestseed)| The seed, ideally models given the same seed and same other parameters will produce the exact same output. |
@@ -98,11 +98,12 @@ Total attributes: 415
 |[`gen_ai.response.id`](./gen_ai.md#gen_airesponseid)| Unique identifier for the completion. |
 |[`gen_ai.response.model`](./gen_ai.md#gen_airesponsemodel)| The vendor-specific ID of the model used. |
 |[`gen_ai.response.streaming`](./gen_ai.md#gen_airesponsestreaming)| Whether or not the AI model call's response was streamed back asynchronously |
-|[`gen_ai.response.text`](./gen_ai.md#gen_airesponsetext)| The model's response text messages. It has to be a stringified version of an array of response text messages. |
 |[`gen_ai.response.tokens_per_second`](./gen_ai.md#gen_airesponsetokens_per_second)| The total output tokens per seconds throughput |
-|[`gen_ai.response.tool_calls`](./gen_ai.md#gen_airesponsetool_calls)| The tool calls in the model's response. It has to be a stringified version of an array of objects. |
 |[`gen_ai.system`](./gen_ai.md#gen_aisystem)| The provider of the model. |
-|[`gen_ai.system.message`](./gen_ai.md#gen_aisystemmessage)| The system instructions passed to the model. |
+|[`gen_ai.system_instructions`](./gen_ai.md#gen_aisystem_instructions)| The system instructions passed to the model. |
+|[`gen_ai.tool.call.arguments`](./gen_ai.md#gen_aitoolcallarguments)| The arguments of the tool call. It has to be a stringified version of the arguments to the tool. |
+|[`gen_ai.tool.call.result`](./gen_ai.md#gen_aitoolcallresult)| The result of the tool call. It has to be a stringified version of the result of the tool. |
+|[`gen_ai.tool.definitions`](./gen_ai.md#gen_aitooldefinitions)| The list of source system tool definitions available to the GenAI agent or model. |
 |[`gen_ai.tool.description`](./gen_ai.md#gen_aitooldescription)| The description of the tool being used. |
 |[`gen_ai.tool.input`](./gen_ai.md#gen_aitoolinput)| The input of the tool being used. It has to be a stringified version of the input to the tool. |
 |[`gen_ai.tool.message`](./gen_ai.md#gen_aitoolmessage)| The response from a tool or function call passed to the model. |
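As an aside on one of the rows above: `gen_ai.response.tokens_per_second` is a plain throughput ratio (a `double`, e.g. `12345.67`). A minimal sketch of computing it — the helper name is ours, not part of the schema:

```python
def tokens_per_second(output_tokens: int, duration_seconds: float) -> float:
    """Throughput value for the gen_ai.response.tokens_per_second attribute."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return output_tokens / duration_seconds

# 1000 output tokens generated over 2 seconds -> 500.0 tokens/s
print(tokens_per_second(1000, 2.0))  # → 500.0
```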
@@ -129,6 +135,17 @@ The input to the embeddings model.
 | Exists in OpenTelemetry | No |
 | Example |`What's the weather in Paris?`|
 
+### gen_ai.input.messages
+
+The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
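The `gen_ai.input.messages` value described above can be produced by JSON-stringifying a list of role/content objects. A sketch under the stated constraints — the validation helper is ours, not part of any SDK:

```python
import json

# Roles permitted by the gen_ai.input.messages format described above.
ALLOWED_ROLES = {"user", "assistant", "tool", "system"}

def serialize_input_messages(messages: list) -> str:
    """Stringify messages for gen_ai.input.messages, checking the allowed roles."""
    for msg in messages:
        if msg.get("role") not in ALLOWED_ROLES:
            raise ValueError(f"invalid role: {msg.get('role')!r}")
    return json.dumps(messages)

value = serialize_input_messages([
    {"role": "system", "content": "You are a helpful assistant"},
    # content may also be a list of {type: "text", text: "..."} objects
    {"role": "user", "content": [{"type": "text", "text": "What's the weather in Paris?"}]},
])
# `value` is a string, suitable for span.set_attribute("gen_ai.input.messages", value)
```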
@@ -151,28 +168,28 @@ The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff'
 | Exists in OpenTelemetry | No |
 | Example |`tool`|
 
-### gen_ai.pipeline.name
+### gen_ai.output.messages
 
-Name of the AI pipeline or chain being executed.
+The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.
 
 | Property | Value |
 | --- | --- |
 | Type |`string`|
 | Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example |`Autofix Pipeline`|
-| Aliases |`ai.pipeline.name`|
+| Exists in OpenTelemetry | Yes |
+| Example |`[{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]`|
 
-### gen_ai.request.available_tools
+### gen_ai.pipeline.name
 
-The available tools for the model. It has to be a stringified version of an array of objects.
+Name of the AI pipeline or chain being executed.
 
 | Property | Value |
 | --- | --- |
 | Type |`string`|
 | Has PII | maybe |
 | Exists in OpenTelemetry | No |
-| Example |`[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]`|
+| Example |`Autofix Pipeline`|
+| Aliases |`ai.pipeline.name`|
 
 ### gen_ai.request.frequency_penalty
 
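The example value for `gen_ai.output.messages` above (an assistant message with `parts` and a `finish_reason`) can be assembled like this; the helper is illustrative only and covers just the single-text-message case:

```python
import json

def serialize_output_messages(text: str, finish_reason: str = "stop") -> str:
    """Build a one-message gen_ai.output.messages value in the documented shape."""
    message = {
        "role": "assistant",
        "parts": [{"type": "text", "content": text}],
        "finish_reason": finish_reason,
    }
    # The attribute is a stringified array of message objects.
    return json.dumps([message])

value = serialize_output_messages(
    "The weather in Paris is currently rainy with a temperature of 57°F."
)
```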
@@ -197,18 +214,6 @@ The maximum number of tokens to generate in the response.
 | Exists in OpenTelemetry | Yes |
 | Example |`2048`|
 
-### gen_ai.request.messages
-
-The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
-
-| Property | Value |
-| --- | --- |
-| Type |`string`|
-| Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example |`[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]`|
-| Aliases |`ai.input_messages`|
-
 ### gen_ai.request.model
 
 The model identifier being used for the request.
@@ -328,61 +333,72 @@ Whether or not the AI model call's response was streamed back asynchronously
 | Example |`true`|
 | Aliases |`ai.streaming`|
 
-### gen_ai.response.text
+### gen_ai.response.tokens_per_second
 
-The model's response text messages. It has to be a stringified version of an array of response text messages.
+The total output tokens per seconds throughput
+
+| Property | Value |
+| --- | --- |
+| Type |`double`|
+| Has PII | false |
+| Exists in OpenTelemetry | No |
+| Example |`12345.67`|
+
+### gen_ai.system
+
+The provider of the model.
 
 | Property | Value |
 | --- | --- |
 | Type |`string`|
 | Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example |`["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]`|
+| Exists in OpenTelemetry | Yes |
+| Example |`openai`|
+| Aliases |`ai.model.provider`|
 
-### gen_ai.response.tokens_per_second
+### gen_ai.system_instructions
 
-The total output tokens per seconds throughput
+The system instructions passed to the model.
 
 | Property | Value |
 | --- | --- |
-| Type |`double`|
-| Has PII |false|
-| Exists in OpenTelemetry |No|
-| Example |`12345.67`|
+| Type |`string`|
+| Has PII |maybe|
+| Exists in OpenTelemetry |Yes|
+| Example |`You are a helpful assistant`|
 
-### gen_ai.response.tool_calls
+### gen_ai.tool.call.arguments
 
-The tool calls in the model's response. It has to be a stringified version of an array of objects.
+The arguments of the tool call. It has to be a stringified version of the arguments to the tool.
 
 | Property | Value |
 | --- | --- |
 | Type |`string`|
 | Has PII | maybe |
-| Exists in OpenTelemetry |No|
-| Example |`[{"name": "get_weather", "arguments": {"location": "Paris"}}]`|
+| Exists in OpenTelemetry |Yes|
+| Example |`{"location": "Paris"}`|
 
-### gen_ai.system
+### gen_ai.tool.call.result
 
-The provider of the model.
+The result of the tool call. It has to be a stringified version of the result of the tool.
 
 | Property | Value |
 | --- | --- |
 | Type |`string`|
 | Has PII | maybe |
 | Exists in OpenTelemetry | Yes |
-| Example |`openai`|
-| Aliases |`ai.model.provider`|
+| Example |`rainy, 57°F`|
 
-### gen_ai.system.message
+### gen_ai.tool.definitions
 
-The system instructions passed to the model.
+The list of source system tool definitions available to the GenAI agent or model.
 
 | Property | Value |
 | --- | --- |
 | Type |`string`|
-| Has PII |true|
-| Exists in OpenTelemetry |No|
-| Example |`You are a helpful assistant`|
+| Has PII |maybe|
+| Exists in OpenTelemetry |Yes|
+| Example |`[{"type": "function", "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "unit"]}}]`|
 
 ### gen_ai.tool.description
 
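Per the sections above, `gen_ai.tool.call.arguments` and `gen_ai.tool.call.result` are stringified values and `gen_ai.tool.definitions` is a stringified array. A hedged sketch of collecting all three — the attribute names come from the doc, but the helper function and its stringification policy are our assumptions:

```python
import json

def tool_call_attributes(arguments: dict, result, definitions: list) -> dict:
    """Collect stringified gen_ai.tool.* attribute values as described above."""
    return {
        "gen_ai.tool.call.arguments": json.dumps(arguments),
        # A non-string result is stringified; a plain string is stored as-is.
        "gen_ai.tool.call.result": result if isinstance(result, str) else json.dumps(result),
        "gen_ai.tool.definitions": json.dumps(definitions),
    }

attrs = tool_call_attributes(
    {"location": "Paris"},
    "rainy, 57°F",
    [{"type": "function", "name": "get_current_weather",
      "description": "Get the current weather in a given location"}],
)
# Each value in `attrs` is a string, ready for span.set_attribute(key, value).
```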
@@ -548,6 +564,67 @@ The input messages sent to the model
 | Deprecated | Yes, no replacement at this time |
 | Deprecation Reason | Deprecated from OTEL, use gen_ai.input.messages with the new format instead. |
 
+### gen_ai.request.available_tools
+
+The available tools for the model. It has to be a stringified version of an array of objects.
+
+| Property | Value |
+| --- | --- |
+| Type |`string`|
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example |`[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]`|
+| Deprecated | Yes, use `gen_ai.tool.definitions` instead |
+
+### gen_ai.request.messages
+
+The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
+
+| Property | Value |
+| --- | --- |
+| Type |`string`|
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example |`[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]`|
+| Deprecated | Yes, use `gen_ai.input.messages` instead |
+| Aliases |`ai.input_messages`|
+
+### gen_ai.response.text
+
+The model's response text messages. It has to be a stringified version of an array of response text messages.
+
+| Property | Value |
+| --- | --- |
+| Type |`string`|
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example |`["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]`|
+| Deprecated | Yes, use `gen_ai.output.messages` instead |
+
+### gen_ai.response.tool_calls
+
+The tool calls in the model's response. It has to be a stringified version of an array of objects.
+
+| Property | Value |
+| --- | --- |
+| Type |`string`|
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example |`[{"name": "get_weather", "arguments": {"location": "Paris"}}]`|
+| Deprecated | Yes, use `gen_ai.output.messages` instead |
+
+### gen_ai.system.message
+
+The system instructions passed to the model.
+
+| Property | Value |
+| --- | --- |
+| Type |`string`|
+| Has PII | true |
+| Exists in OpenTelemetry | No |
+| Example |`You are a helpful assistant`|
+| Deprecated | Yes, use `gen_ai.system_instructions` instead |
+
 ### gen_ai.usage.completion_tokens
 
 The number of tokens used in the GenAI response (completion).
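The `Deprecated` rows in the last hunk imply a simple rename map from legacy attributes to their replacements. A sketch compiled from those rows — note that `gen_ai.response.text` and `gen_ai.response.tool_calls` both fold into `gen_ai.output.messages`, so a real migration must also convert the payload shape, which this key-rename sketch deliberately does not do:

```python
# Legacy attribute -> replacement, per the Deprecated rows above.
DEPRECATED_RENAMES = {
    "gen_ai.request.available_tools": "gen_ai.tool.definitions",
    "gen_ai.request.messages": "gen_ai.input.messages",
    "gen_ai.response.text": "gen_ai.output.messages",
    "gen_ai.response.tool_calls": "gen_ai.output.messages",
    "gen_ai.system.message": "gen_ai.system_instructions",
}

def rename_attributes(attributes: dict) -> dict:
    """Rename deprecated gen_ai.* keys; values pass through unchanged."""
    return {DEPRECATED_RENAMES.get(key, key): value
            for key, value in attributes.items()}

migrated = rename_attributes({
    "gen_ai.system.message": "You are a helpful assistant",
    "gen_ai.request.model": "some-model",  # non-deprecated keys are untouched
})
```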