Commit 5eac935

feat(gen_ai): add new Gen AI attributes (#221)

1 parent 9f146ef

17 files changed: +646 −61 lines

CHANGELOG.md (2 additions & 2 deletions)
```diff
@@ -9,6 +9,7 @@
 - feat(ai): Add gen_ai.usage.input_tokens.cache_write ([#217](https://github.com/getsentry/sentry-conventions/pull/217))
 - feat(attributes): Add sentry.normalized_db_query.hash ([#200](https://github.com/getsentry/sentry-conventions/pull/200))
 - feat(attributes): Add sentry.category attribute ([#218](https://github.com/getsentry/sentry-conventions/pull/218))
+- Add new Gen AI attributes ([#221](https://github.com/getsentry/sentry-conventions/pull/221))
 
 ## 0.3.1
 
```

```diff
@@ -55,12 +56,11 @@
 - feat(sentry): Add sentry.observed_timestamp_nanos ([#137](https://github.com/getsentry/sentry-conventions/pull/137))
 - dynamic-sampling: add field conventions for dynamic sampling context ([#128](https://github.com/getsentry/sentry-conventions/pull/128))
 - chore(ai): Clean up of `sentry._internal.segment.contains_gen_ai_spans` ([#155](https://github.com/getsentry/sentry-conventions/pull/155))
-- feat(attributes): Add sentry._internal.replay_is_buffering ([#159](https://github.com/getsentry/sentry-conventions/pull/159))
+- feat(attributes): Add sentry.\_internal.replay_is_buffering ([#159](https://github.com/getsentry/sentry-conventions/pull/159))
 - feat: Add vercel log drain attributes ([#163](https://github.com/getsentry/sentry-conventions/pull/163))
 - feat(attributes) add MCP related attributes ([#164](https://github.com/getsentry/sentry-conventions/pull/164))
 - feat(attributes): Add MDC log attributes ([#167](https://github.com/getsentry/sentry-conventions/pull/167))
 
-
 ### Fixes
 
 - fix(name): Remove duplicate GraphQL op ([#152](https://github.com/getsentry/sentry-conventions/pull/152))
```

generated/attributes/all.md (12 additions & 6 deletions)
```diff
@@ -4,7 +4,7 @@
 
 This page lists all available attributes across all categories.
 
-Total attributes: 415
+Total attributes: 421
 
 ## Stable Attributes
 
```
```diff
@@ -81,13 +81,13 @@ Total attributes: 415
 | [`gen_ai.cost.output_tokens`](./gen_ai.md#gen_aicostoutput_tokens) | The cost of tokens used for creating the AI output in USD (without reasoning tokens). |
 | [`gen_ai.cost.total_tokens`](./gen_ai.md#gen_aicosttotal_tokens) | The total cost for the tokens used. |
 | [`gen_ai.embeddings.input`](./gen_ai.md#gen_aiembeddingsinput) | The input to the embeddings model. |
+| [`gen_ai.input.messages`](./gen_ai.md#gen_aiinputmessages) | The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`. |
 | [`gen_ai.operation.name`](./gen_ai.md#gen_aioperationname) | The name of the operation being performed. |
 | [`gen_ai.operation.type`](./gen_ai.md#gen_aioperationtype) | The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier. |
+| [`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages) | The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls. |
 | [`gen_ai.pipeline.name`](./gen_ai.md#gen_aipipelinename) | Name of the AI pipeline or chain being executed. |
-| [`gen_ai.request.available_tools`](./gen_ai.md#gen_airequestavailable_tools) | The available tools for the model. It has to be a stringified version of an array of objects. |
 | [`gen_ai.request.frequency_penalty`](./gen_ai.md#gen_airequestfrequency_penalty) | Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. |
 | [`gen_ai.request.max_tokens`](./gen_ai.md#gen_airequestmax_tokens) | The maximum number of tokens to generate in the response. |
-| [`gen_ai.request.messages`](./gen_ai.md#gen_airequestmessages) | The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`. |
 | [`gen_ai.request.model`](./gen_ai.md#gen_airequestmodel) | The model identifier being used for the request. |
 | [`gen_ai.request.presence_penalty`](./gen_ai.md#gen_airequestpresence_penalty) | Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. |
 | [`gen_ai.request.seed`](./gen_ai.md#gen_airequestseed) | The seed, ideally models given the same seed and same other parameters will produce the exact same output. |
```
```diff
@@ -98,11 +98,12 @@ Total attributes: 415
 | [`gen_ai.response.id`](./gen_ai.md#gen_airesponseid) | Unique identifier for the completion. |
 | [`gen_ai.response.model`](./gen_ai.md#gen_airesponsemodel) | The vendor-specific ID of the model used. |
 | [`gen_ai.response.streaming`](./gen_ai.md#gen_airesponsestreaming) | Whether or not the AI model call's response was streamed back asynchronously |
-| [`gen_ai.response.text`](./gen_ai.md#gen_airesponsetext) | The model's response text messages. It has to be a stringified version of an array of response text messages. |
 | [`gen_ai.response.tokens_per_second`](./gen_ai.md#gen_airesponsetokens_per_second) | The total output tokens per seconds throughput |
-| [`gen_ai.response.tool_calls`](./gen_ai.md#gen_airesponsetool_calls) | The tool calls in the model's response. It has to be a stringified version of an array of objects. |
 | [`gen_ai.system`](./gen_ai.md#gen_aisystem) | The provider of the model. |
-| [`gen_ai.system.message`](./gen_ai.md#gen_aisystemmessage) | The system instructions passed to the model. |
+| [`gen_ai.system_instructions`](./gen_ai.md#gen_aisystem_instructions) | The system instructions passed to the model. |
+| [`gen_ai.tool.call.arguments`](./gen_ai.md#gen_aitoolcallarguments) | The arguments of the tool call. It has to be a stringified version of the arguments to the tool. |
+| [`gen_ai.tool.call.result`](./gen_ai.md#gen_aitoolcallresult) | The result of the tool call. It has to be a stringified version of the result of the tool. |
+| [`gen_ai.tool.definitions`](./gen_ai.md#gen_aitooldefinitions) | The list of source system tool definitions available to the GenAI agent or model. |
 | [`gen_ai.tool.description`](./gen_ai.md#gen_aitooldescription) | The description of the tool being used. |
 | [`gen_ai.tool.input`](./gen_ai.md#gen_aitoolinput) | The input of the tool being used. It has to be a stringified version of the input to the tool. |
 | [`gen_ai.tool.message`](./gen_ai.md#gen_aitoolmessage) | The response from a tool or function call passed to the model. |
```
```diff
@@ -389,6 +390,11 @@ Total attributes: 415
 | [`environment`](./general.md#environment) | [`sentry.environment`](./sentry.md#sentryenvironment) |
 | [`fs_error`](./general.md#fs_error) | [`error.type`](./error.md#errortype) |
 | [`gen_ai.prompt`](./gen_ai.md#gen_aiprompt) | No replacement |
+| [`gen_ai.request.available_tools`](./gen_ai.md#gen_airequestavailable_tools) | [`gen_ai.tool.definitions`](./gen_ai.md#gen_aitooldefinitions) |
+| [`gen_ai.request.messages`](./gen_ai.md#gen_airequestmessages) | [`gen_ai.input.messages`](./gen_ai.md#gen_aiinputmessages) |
+| [`gen_ai.response.text`](./gen_ai.md#gen_airesponsetext) | [`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages) |
+| [`gen_ai.response.tool_calls`](./gen_ai.md#gen_airesponsetool_calls) | [`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages) |
+| [`gen_ai.system.message`](./gen_ai.md#gen_aisystemmessage) | [`gen_ai.system_instructions`](./gen_ai.md#gen_aisystem_instructions) |
 | [`gen_ai.usage.completion_tokens`](./gen_ai.md#gen_aiusagecompletion_tokens) | [`gen_ai.usage.output_tokens`](./gen_ai.md#gen_aiusageoutput_tokens) |
 | [`gen_ai.usage.prompt_tokens`](./gen_ai.md#gen_aiusageprompt_tokens) | [`gen_ai.usage.input_tokens`](./gen_ai.md#gen_aiusageinput_tokens) |
 | [`http.client_ip`](./http.md#httpclient_ip) | [`client.address`](./client.md#clientaddress) |
```
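The deprecated-to-replacement rows in this table can be applied mechanically when upgrading instrumentation that still emits the old Gen AI keys. A minimal sketch, assuming a plain dict of span attributes; the `MIGRATIONS` table and `migrate_attributes` helper are hypothetical illustrations, not part of the conventions package:

```python
# Hypothetical helper: rename deprecated Gen AI attribute keys to their
# replacements, per the mapping table above. Values are passed through
# unchanged; actually converting e.g. a gen_ai.response.text payload into
# the gen_ai.output.messages format would need real transformation logic,
# and two deprecated keys (response.text, response.tool_calls) map to the
# same replacement, so a real migration must merge them deliberately.
MIGRATIONS = {
    "gen_ai.request.available_tools": "gen_ai.tool.definitions",
    "gen_ai.request.messages": "gen_ai.input.messages",
    "gen_ai.response.text": "gen_ai.output.messages",
    "gen_ai.response.tool_calls": "gen_ai.output.messages",
    "gen_ai.system.message": "gen_ai.system_instructions",
}

def migrate_attributes(attrs: dict) -> dict:
    """Return a copy of span attributes with deprecated keys renamed."""
    return {MIGRATIONS.get(key, key): value for key, value in attrs.items()}

migrated = migrate_attributes({
    "gen_ai.system.message": "You are a helpful assistant",
    "gen_ai.request.model": "gpt-4o",
})
```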

generated/attributes/gen_ai.md (125 additions & 48 deletions)
```diff
@@ -10,13 +10,13 @@
 - [gen_ai.cost.output_tokens](#gen_aicostoutput_tokens)
 - [gen_ai.cost.total_tokens](#gen_aicosttotal_tokens)
 - [gen_ai.embeddings.input](#gen_aiembeddingsinput)
+- [gen_ai.input.messages](#gen_aiinputmessages)
 - [gen_ai.operation.name](#gen_aioperationname)
 - [gen_ai.operation.type](#gen_aioperationtype)
+- [gen_ai.output.messages](#gen_aioutputmessages)
 - [gen_ai.pipeline.name](#gen_aipipelinename)
-- [gen_ai.request.available_tools](#gen_airequestavailable_tools)
 - [gen_ai.request.frequency_penalty](#gen_airequestfrequency_penalty)
 - [gen_ai.request.max_tokens](#gen_airequestmax_tokens)
-- [gen_ai.request.messages](#gen_airequestmessages)
 - [gen_ai.request.model](#gen_airequestmodel)
 - [gen_ai.request.presence_penalty](#gen_airequestpresence_penalty)
 - [gen_ai.request.seed](#gen_airequestseed)
```
```diff
@@ -27,11 +27,12 @@
 - [gen_ai.response.id](#gen_airesponseid)
 - [gen_ai.response.model](#gen_airesponsemodel)
 - [gen_ai.response.streaming](#gen_airesponsestreaming)
-- [gen_ai.response.text](#gen_airesponsetext)
 - [gen_ai.response.tokens_per_second](#gen_airesponsetokens_per_second)
-- [gen_ai.response.tool_calls](#gen_airesponsetool_calls)
 - [gen_ai.system](#gen_aisystem)
-- [gen_ai.system.message](#gen_aisystemmessage)
+- [gen_ai.system_instructions](#gen_aisystem_instructions)
+- [gen_ai.tool.call.arguments](#gen_aitoolcallarguments)
+- [gen_ai.tool.call.result](#gen_aitoolcallresult)
+- [gen_ai.tool.definitions](#gen_aitooldefinitions)
 - [gen_ai.tool.description](#gen_aitooldescription)
 - [gen_ai.tool.input](#gen_aitoolinput)
 - [gen_ai.tool.message](#gen_aitoolmessage)
@@ -47,6 +48,11 @@
 - [gen_ai.user.message](#gen_aiusermessage)
 - [Deprecated Attributes](#deprecated-attributes)
 - [gen_ai.prompt](#gen_aiprompt)
+- [gen_ai.request.available_tools](#gen_airequestavailable_tools)
+- [gen_ai.request.messages](#gen_airequestmessages)
+- [gen_ai.response.text](#gen_airesponsetext)
+- [gen_ai.response.tool_calls](#gen_airesponsetool_calls)
+- [gen_ai.system.message](#gen_aisystemmessage)
 - [gen_ai.usage.completion_tokens](#gen_aiusagecompletion_tokens)
 - [gen_ai.usage.prompt_tokens](#gen_aiusageprompt_tokens)
 
```

```diff
@@ -129,6 +135,17 @@ The input to the embeddings model.
 | Exists in OpenTelemetry | No |
 | Example | `What's the weather in Paris?` |
 
+### gen_ai.input.messages
+
+The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | Yes |
+| Example | `[{"role": "user", "parts": [{"type": "text", "content": "Weather in Paris?"}]}, {"role": "assistant", "parts": [{"type": "tool_call", "id": "call_VSPygqKTWdrhaFErNvMV18Yl", "name": "get_weather", "arguments": {"location": "Paris"}}]}, {"role": "tool", "parts": [{"type": "tool_call_response", "id": "call_VSPygqKTWdrhaFErNvMV18Yl", "result": "rainy, 57°F"}]}]` |
+
 ### gen_ai.operation.name
 
 The name of the operation being performed.
```
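Because `gen_ai.input.messages` is defined as a stringified array, instrumentation typically builds the message objects and JSON-encodes them before setting the attribute. A minimal sketch of assembling that string; the conversation content and the `call_123` id are illustrative, and how the resulting string is attached to a span depends on the SDK:

```python
import json

# Build the message array in the documented format, then stringify it.
# Roles follow the convention: "user", "assistant", "tool", or "system".
messages = [
    {"role": "user", "parts": [{"type": "text", "content": "Weather in Paris?"}]},
    {"role": "assistant", "parts": [{
        "type": "tool_call",
        "id": "call_123",  # illustrative tool-call id
        "name": "get_weather",
        "arguments": {"location": "Paris"},
    }]},
    {"role": "tool", "parts": [{
        "type": "tool_call_response",
        "id": "call_123",
        "result": "rainy, 57°F",
    }]},
]

# The attribute value is the JSON string, not the Python list.
attributes = {"gen_ai.input.messages": json.dumps(messages)}
```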
```diff
@@ -151,28 +168,28 @@ The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff'
 | Exists in OpenTelemetry | No |
 | Example | `tool` |
 
-### gen_ai.pipeline.name
+### gen_ai.output.messages
 
-Name of the AI pipeline or chain being executed.
+The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.
 
 | Property | Value |
 | --- | --- |
 | Type | `string` |
 | Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example | `Autofix Pipeline` |
-| Aliases | `ai.pipeline.name` |
+| Exists in OpenTelemetry | Yes |
+| Example | `[{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]` |
 
-### gen_ai.request.available_tools
+### gen_ai.pipeline.name
 
-The available tools for the model. It has to be a stringified version of an array of objects.
+Name of the AI pipeline or chain being executed.
 
 | Property | Value |
 | --- | --- |
 | Type | `string` |
 | Has PII | maybe |
 | Exists in OpenTelemetry | No |
-| Example | `[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]` |
+| Example | `Autofix Pipeline` |
+| Aliases | `ai.pipeline.name` |
 
 ### gen_ai.request.frequency_penalty
 
```
```diff
@@ -197,18 +214,6 @@ The maximum number of tokens to generate in the response.
 | Exists in OpenTelemetry | Yes |
 | Example | `2048` |
 
-### gen_ai.request.messages
-
-The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
-
-| Property | Value |
-| --- | --- |
-| Type | `string` |
-| Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example | `[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]` |
-| Aliases | `ai.input_messages` |
-
 ### gen_ai.request.model
 
 The model identifier being used for the request.
```
```diff
@@ -328,61 +333,72 @@ Whether or not the AI model call's response was streamed back asynchronously
 | Example | `true` |
 | Aliases | `ai.streaming` |
 
-### gen_ai.response.text
+### gen_ai.response.tokens_per_second
 
-The model's response text messages. It has to be a stringified version of an array of response text messages.
+The total output tokens per seconds throughput
+
+| Property | Value |
+| --- | --- |
+| Type | `double` |
+| Has PII | false |
+| Exists in OpenTelemetry | No |
+| Example | `12345.67` |
+
+### gen_ai.system
+
+The provider of the model.
 
 | Property | Value |
 | --- | --- |
 | Type | `string` |
 | Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example | `["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]` |
+| Exists in OpenTelemetry | Yes |
+| Example | `openai` |
+| Aliases | `ai.model.provider` |
 
-### gen_ai.response.tokens_per_second
+### gen_ai.system_instructions
 
-The total output tokens per seconds throughput
+The system instructions passed to the model.
 
 | Property | Value |
 | --- | --- |
-| Type | `double` |
-| Has PII | false |
-| Exists in OpenTelemetry | No |
-| Example | `12345.67` |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | Yes |
+| Example | `You are a helpful assistant` |
 
-### gen_ai.response.tool_calls
+### gen_ai.tool.call.arguments
 
-The tool calls in the model's response. It has to be a stringified version of an array of objects.
+The arguments of the tool call. It has to be a stringified version of the arguments to the tool.
 
 | Property | Value |
 | --- | --- |
 | Type | `string` |
 | Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example | `[{"name": "get_weather", "arguments": {"location": "Paris"}}]` |
+| Exists in OpenTelemetry | Yes |
+| Example | `{"location": "Paris"}` |
 
-### gen_ai.system
+### gen_ai.tool.call.result
 
-The provider of the model.
+The result of the tool call. It has to be a stringified version of the result of the tool.
 
 | Property | Value |
 | --- | --- |
 | Type | `string` |
 | Has PII | maybe |
 | Exists in OpenTelemetry | Yes |
-| Example | `openai` |
-| Aliases | `ai.model.provider` |
+| Example | `rainy, 57°F` |
 
-### gen_ai.system.message
+### gen_ai.tool.definitions
 
-The system instructions passed to the model.
+The list of source system tool definitions available to the GenAI agent or model.
 
 | Property | Value |
 | --- | --- |
 | Type | `string` |
-| Has PII | true |
-| Exists in OpenTelemetry | No |
-| Example | `You are a helpful assistant` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | Yes |
+| Example | `[{"type": "function", "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "unit"]}}]` |
 
 ### gen_ai.tool.description
 
```

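The two new tool-call attributes are likewise stringified: `gen_ai.tool.call.arguments` holds the JSON-encoded arguments and `gen_ai.tool.call.result` the JSON-encoded result. A sketch of recording both around a tool execution; the `get_weather` function and the plain dict standing in for a span's attributes are illustrative assumptions:

```python
import json

def get_weather(location: str) -> str:
    """Illustrative tool implementation; a real tool would do actual work."""
    return "rainy, 57°F"

arguments = {"location": "Paris"}

# Record the stringified arguments before running the tool, and the
# stringified result afterwards; a real SDK would set these on the
# tool span rather than on a plain dict.
span_attributes = {"gen_ai.tool.call.arguments": json.dumps(arguments)}
result = get_weather(**arguments)
span_attributes["gen_ai.tool.call.result"] = json.dumps(result)
```

Stringifying even scalar results keeps the attribute type uniformly `string`, matching the convention's "stringified version of the result" wording.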
```diff
@@ -548,6 +564,67 @@ The input messages sent to the model
 | Deprecated | Yes, no replacement at this time |
 | Deprecation Reason | Deprecated from OTEL, use gen_ai.input.messages with the new format instead. |
 
+### gen_ai.request.available_tools
+
+The available tools for the model. It has to be a stringified version of an array of objects.
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example | `[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]` |
+| Deprecated | Yes, use `gen_ai.tool.definitions` instead |
+
+### gen_ai.request.messages
+
+The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example | `[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]` |
+| Deprecated | Yes, use `gen_ai.input.messages` instead |
+| Aliases | `ai.input_messages` |
+
+### gen_ai.response.text
+
+The model's response text messages. It has to be a stringified version of an array of response text messages.
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example | `["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]` |
+| Deprecated | Yes, use `gen_ai.output.messages` instead |
+
+### gen_ai.response.tool_calls
+
+The tool calls in the model's response. It has to be a stringified version of an array of objects.
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example | `[{"name": "get_weather", "arguments": {"location": "Paris"}}]` |
+| Deprecated | Yes, use `gen_ai.output.messages` instead |
+
+### gen_ai.system.message
+
+The system instructions passed to the model.
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | true |
+| Exists in OpenTelemetry | No |
+| Example | `You are a helpful assistant` |
+| Deprecated | Yes, use `gen_ai.system_instructions` instead |
+
 ### gen_ai.usage.completion_tokens
 
 The number of tokens used in the GenAI response (completion).
```
