Merged
Changes from 6 commits
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -9,6 +9,7 @@
- feat(ai): Add gen_ai.usage.input_tokens.cache_write ([#217](https://github.com/getsentry/sentry-conventions/pull/217))
- feat(attributes): Add sentry.normalized_db_query.hash ([#200](https://github.com/getsentry/sentry-conventions/pull/200))
- feat(attributes): Add sentry.category attribute ([#218](https://github.com/getsentry/sentry-conventions/pull/218))
- Add new Gen AI attributes ([#221](https://github.com/getsentry/sentry-conventions/pull/221))

## 0.3.1

@@ -55,12 +56,11 @@
- feat(sentry): Add sentry.observed_timestamp_nanos ([#137](https://github.com/getsentry/sentry-conventions/pull/137))
- dynamic-sampling: add field conventions for dynamic sampling context ([#128](https://github.com/getsentry/sentry-conventions/pull/128))
- chore(ai): Clean up of `sentry._internal.segment.contains_gen_ai_spans` ([#155](https://github.com/getsentry/sentry-conventions/pull/155))
- feat(attributes): Add sentry._internal.replay_is_buffering ([#159](https://github.com/getsentry/sentry-conventions/pull/159))
- feat(attributes): Add sentry.\_internal.replay_is_buffering ([#159](https://github.com/getsentry/sentry-conventions/pull/159))
- feat: Add vercel log drain attributes ([#163](https://github.com/getsentry/sentry-conventions/pull/163))
- feat(attributes) add MCP related attributes ([#164](https://github.com/getsentry/sentry-conventions/pull/164))
- feat(attributes): Add MDC log attributes ([#167](https://github.com/getsentry/sentry-conventions/pull/167))


### Fixes

- fix(name): Remove duplicate GraphQL op ([#152](https://github.com/getsentry/sentry-conventions/pull/152))
18 changes: 12 additions & 6 deletions generated/attributes/all.md
@@ -4,7 +4,7 @@

This page lists all available attributes across all categories.

Total attributes: 415
Total attributes: 421

## Stable Attributes

@@ -81,13 +81,13 @@ Total attributes: 415
| [`gen_ai.cost.output_tokens`](./gen_ai.md#gen_aicostoutput_tokens) | The cost of tokens used for creating the AI output in USD (without reasoning tokens). |
| [`gen_ai.cost.total_tokens`](./gen_ai.md#gen_aicosttotal_tokens) | The total cost for the tokens used. |
| [`gen_ai.embeddings.input`](./gen_ai.md#gen_aiembeddingsinput) | The input to the embeddings model. |
| [`gen_ai.input.messages`](./gen_ai.md#gen_aiinputmessages) | The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`. |
| [`gen_ai.operation.name`](./gen_ai.md#gen_aioperationname) | The name of the operation being performed. |
| [`gen_ai.operation.type`](./gen_ai.md#gen_aioperationtype) | The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier. |
| [`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages) | The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls. |
| [`gen_ai.pipeline.name`](./gen_ai.md#gen_aipipelinename) | Name of the AI pipeline or chain being executed. |
| [`gen_ai.request.available_tools`](./gen_ai.md#gen_airequestavailable_tools) | The available tools for the model. It has to be a stringified version of an array of objects. |
| [`gen_ai.request.frequency_penalty`](./gen_ai.md#gen_airequestfrequency_penalty) | Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. |
| [`gen_ai.request.max_tokens`](./gen_ai.md#gen_airequestmax_tokens) | The maximum number of tokens to generate in the response. |
| [`gen_ai.request.messages`](./gen_ai.md#gen_airequestmessages) | The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`. |
| [`gen_ai.request.model`](./gen_ai.md#gen_airequestmodel) | The model identifier being used for the request. |
| [`gen_ai.request.presence_penalty`](./gen_ai.md#gen_airequestpresence_penalty) | Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. |
| [`gen_ai.request.seed`](./gen_ai.md#gen_airequestseed) | The seed, ideally models given the same seed and same other parameters will produce the exact same output. |
@@ -98,11 +98,12 @@ Total attributes: 415
| [`gen_ai.response.id`](./gen_ai.md#gen_airesponseid) | Unique identifier for the completion. |
| [`gen_ai.response.model`](./gen_ai.md#gen_airesponsemodel) | The vendor-specific ID of the model used. |
| [`gen_ai.response.streaming`](./gen_ai.md#gen_airesponsestreaming) | Whether or not the AI model call's response was streamed back asynchronously |
| [`gen_ai.response.text`](./gen_ai.md#gen_airesponsetext) | The model's response text messages. It has to be a stringified version of an array of response text messages. |
| [`gen_ai.response.tokens_per_second`](./gen_ai.md#gen_airesponsetokens_per_second) | The total output tokens per seconds throughput |
| [`gen_ai.response.tool_calls`](./gen_ai.md#gen_airesponsetool_calls) | The tool calls in the model's response. It has to be a stringified version of an array of objects. |
| [`gen_ai.system`](./gen_ai.md#gen_aisystem) | The provider of the model. |
| [`gen_ai.system.message`](./gen_ai.md#gen_aisystemmessage) | The system instructions passed to the model. |
| [`gen_ai.system_instructions`](./gen_ai.md#gen_aisystem_instructions) | The system instructions passed to the model. |
| [`gen_ai.tool.call.arguments`](./gen_ai.md#gen_aitoolcallarguments) | The arguments of the tool call. It has to be a stringified version of the arguments to the tool. |
| [`gen_ai.tool.call.result`](./gen_ai.md#gen_aitoolcallresult) | The result of the tool call. It has to be a stringified version of the result of the tool. |
| [`gen_ai.tool.definitions`](./gen_ai.md#gen_aitooldefinitions) | The list of source system tool definitions available to the GenAI agent or model. |
| [`gen_ai.tool.description`](./gen_ai.md#gen_aitooldescription) | The description of the tool being used. |
| [`gen_ai.tool.input`](./gen_ai.md#gen_aitoolinput) | The input of the tool being used. It has to be a stringified version of the input to the tool. |
| [`gen_ai.tool.message`](./gen_ai.md#gen_aitoolmessage) | The response from a tool or function call passed to the model. |
@@ -389,6 +390,11 @@ Total attributes: 415
| [`environment`](./general.md#environment) | [`sentry.environment`](./sentry.md#sentryenvironment) |
| [`fs_error`](./general.md#fs_error) | [`error.type`](./error.md#errortype) |
| [`gen_ai.prompt`](./gen_ai.md#gen_aiprompt) | No replacement |
| [`gen_ai.request.available_tools`](./gen_ai.md#gen_airequestavailable_tools) | [`gen_ai.tool.definitions`](./gen_ai.md#gen_aitooldefinitions) |
| [`gen_ai.request.messages`](./gen_ai.md#gen_airequestmessages) | [`gen_ai.input.messages`](./gen_ai.md#gen_aiinputmessages) |
| [`gen_ai.response.text`](./gen_ai.md#gen_airesponsetext) | [`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages) |
| [`gen_ai.response.tool_calls`](./gen_ai.md#gen_airesponsetool_calls) | [`gen_ai.output.messages`](./gen_ai.md#gen_aioutputmessages) |
| [`gen_ai.system.message`](./gen_ai.md#gen_aisystemmessage) | [`gen_ai.system_instructions`](./gen_ai.md#gen_aisystem_instructions) |
| [`gen_ai.usage.completion_tokens`](./gen_ai.md#gen_aiusagecompletion_tokens) | [`gen_ai.usage.output_tokens`](./gen_ai.md#gen_aiusageoutput_tokens) |
| [`gen_ai.usage.prompt_tokens`](./gen_ai.md#gen_aiusageprompt_tokens) | [`gen_ai.usage.input_tokens`](./gen_ai.md#gen_aiusageinput_tokens) |
| [`http.client_ip`](./http.md#httpclient_ip) | [`client.address`](./client.md#clientaddress) |
173 changes: 125 additions & 48 deletions generated/attributes/gen_ai.md
@@ -10,13 +10,13 @@
- [gen_ai.cost.output_tokens](#gen_aicostoutput_tokens)
- [gen_ai.cost.total_tokens](#gen_aicosttotal_tokens)
- [gen_ai.embeddings.input](#gen_aiembeddingsinput)
- [gen_ai.input.messages](#gen_aiinputmessages)
- [gen_ai.operation.name](#gen_aioperationname)
- [gen_ai.operation.type](#gen_aioperationtype)
- [gen_ai.output.messages](#gen_aioutputmessages)
- [gen_ai.pipeline.name](#gen_aipipelinename)
- [gen_ai.request.available_tools](#gen_airequestavailable_tools)
- [gen_ai.request.frequency_penalty](#gen_airequestfrequency_penalty)
- [gen_ai.request.max_tokens](#gen_airequestmax_tokens)
- [gen_ai.request.messages](#gen_airequestmessages)
- [gen_ai.request.model](#gen_airequestmodel)
- [gen_ai.request.presence_penalty](#gen_airequestpresence_penalty)
- [gen_ai.request.seed](#gen_airequestseed)
@@ -27,11 +27,12 @@
- [gen_ai.response.id](#gen_airesponseid)
- [gen_ai.response.model](#gen_airesponsemodel)
- [gen_ai.response.streaming](#gen_airesponsestreaming)
- [gen_ai.response.text](#gen_airesponsetext)
- [gen_ai.response.tokens_per_second](#gen_airesponsetokens_per_second)
- [gen_ai.response.tool_calls](#gen_airesponsetool_calls)
- [gen_ai.system](#gen_aisystem)
- [gen_ai.system.message](#gen_aisystemmessage)
- [gen_ai.system_instructions](#gen_aisystem_instructions)
- [gen_ai.tool.call.arguments](#gen_aitoolcallarguments)
- [gen_ai.tool.call.result](#gen_aitoolcallresult)
- [gen_ai.tool.definitions](#gen_aitooldefinitions)
- [gen_ai.tool.description](#gen_aitooldescription)
- [gen_ai.tool.input](#gen_aitoolinput)
- [gen_ai.tool.message](#gen_aitoolmessage)
@@ -47,6 +48,11 @@
- [gen_ai.user.message](#gen_aiusermessage)
- [Deprecated Attributes](#deprecated-attributes)
- [gen_ai.prompt](#gen_aiprompt)
- [gen_ai.request.available_tools](#gen_airequestavailable_tools)
- [gen_ai.request.messages](#gen_airequestmessages)
- [gen_ai.response.text](#gen_airesponsetext)
- [gen_ai.response.tool_calls](#gen_airesponsetool_calls)
- [gen_ai.system.message](#gen_aisystemmessage)
- [gen_ai.usage.completion_tokens](#gen_aiusagecompletion_tokens)
- [gen_ai.usage.prompt_tokens](#gen_aiusageprompt_tokens)

@@ -129,6 +135,17 @@ The input to the embeddings model.
| Exists in OpenTelemetry | No |
| Example | `What's the weather in Paris?` |

### gen_ai.input.messages

The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | Yes |
| Example | `[{"role": "user", "parts": [{"type": "text", "content": "What is the weather in Paris?"}]}]` |
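The "stringified array with constrained roles" requirement above is easy to get wrong; a minimal sketch of a serializer (a hypothetical helper, not part of any SDK) that enforces the role constraint before emitting the attribute value:

```python
import json

# Allowed roles per the gen_ai.input.messages description above.
ALLOWED_ROLES = {"user", "assistant", "tool", "system"}

def serialize_input_messages(messages):
    """Validate roles and return the JSON string for gen_ai.input.messages."""
    for message in messages:
        if message.get("role") not in ALLOWED_ROLES:
            raise ValueError(f"invalid role: {message.get('role')!r}")
    return json.dumps(messages)

value = serialize_input_messages(
    [{"role": "user", "parts": [{"type": "text", "content": "What is the weather in Paris?"}]}]
)
```

The helper only guards the `role` field; the convention leaves the `content`/`parts` shape flexible, so no further validation is attempted here.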

### gen_ai.operation.name

The name of the operation being performed.
@@ -151,28 +168,28 @@ The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff'
| Exists in OpenTelemetry | No |
| Example | `tool` |

### gen_ai.pipeline.name
### gen_ai.output.messages

Name of the AI pipeline or chain being executed.
The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `Autofix Pipeline` |
| Aliases | `ai.pipeline.name` |
| Exists in OpenTelemetry | Yes |
| Example | `[{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]` |
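Since `gen_ai.output.messages` can mix text responses and tool calls, a sketch of assembling the value from both (the `tool_call` part shape is an assumption for illustration; only the stringified-array-of-message-objects requirement comes from this document):

```python
import json

def serialize_output_messages(text, tool_calls=(), finish_reason="stop"):
    """Build the gen_ai.output.messages JSON string from a model response."""
    parts = []
    if text:
        parts.append({"type": "text", "content": text})
    for call in tool_calls:
        # Hypothetical part shape for a tool call.
        parts.append({"type": "tool_call", "name": call["name"], "arguments": call["arguments"]})
    return json.dumps([{"role": "assistant", "parts": parts, "finish_reason": finish_reason}])

value = serialize_output_messages(
    "The weather in Paris is rainy.",
    [{"name": "get_weather", "arguments": {"location": "Paris"}}],
)
```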

### gen_ai.request.available_tools
### gen_ai.pipeline.name

The available tools for the model. It has to be a stringified version of an array of objects.
Name of the AI pipeline or chain being executed.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]` |
| Example | `Autofix Pipeline` |
| Aliases | `ai.pipeline.name` |

### gen_ai.request.frequency_penalty

@@ -197,18 +214,6 @@ The maximum number of tokens to generate in the response.
| Exists in OpenTelemetry | Yes |
| Example | `2048` |

### gen_ai.request.messages

The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]` |
| Aliases | `ai.input_messages` |

### gen_ai.request.model

The model identifier being used for the request.
@@ -328,61 +333,72 @@ Whether or not the AI model call's response was streamed back asynchronously
| Example | `true` |
| Aliases | `ai.streaming` |

### gen_ai.response.text
### gen_ai.response.tokens_per_second

The model's response text messages. It has to be a stringified version of an array of response text messages.
The total output tokens per seconds throughput

| Property | Value |
| --- | --- |
| Type | `double` |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | `12345.67` |
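A sketch of how an instrumentation might derive this value from the output token count and the wall-clock duration of the model call (function and parameter names are illustrative, not defined by this document):

```python
def tokens_per_second(output_tokens, started_at, finished_at):
    """Compute gen_ai.response.tokens_per_second from timestamps in seconds."""
    duration = finished_at - started_at
    if duration <= 0:
        raise ValueError("non-positive duration")
    return output_tokens / duration

# 128 output tokens over a 2-second call -> 64.0 tokens/second.
rate = tokens_per_second(output_tokens=128, started_at=10.0, finished_at=12.0)
```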

### gen_ai.system

The provider of the model.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]` |
| Exists in OpenTelemetry | Yes |
| Example | `openai` |
| Aliases | `ai.model.provider` |

### gen_ai.response.tokens_per_second
### gen_ai.system_instructions

The total output tokens per seconds throughput
The system instructions passed to the model.

| Property | Value |
| --- | --- |
| Type | `double` |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | `12345.67` |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | Yes |
| Example | `You are a helpful assistant` |

### gen_ai.response.tool_calls
### gen_ai.tool.call.arguments

The tool calls in the model's response. It has to be a stringified version of an array of objects.
The arguments of the tool call. It has to be a stringified version of the arguments to the tool.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `[{"name": "get_weather", "arguments": {"location": "Paris"}}]` |
| Exists in OpenTelemetry | Yes |
| Example | `{"location": "Paris"}` |

### gen_ai.system
### gen_ai.tool.call.result

The provider of the model.
The result of the tool call. It has to be a stringified version of the result of the tool.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | Yes |
| Example | `openai` |
| Aliases | `ai.model.provider` |
| Example | `rainy, 57°F` |

### gen_ai.system.message
### gen_ai.tool.definitions

The system instructions passed to the model.
The list of source system tool definitions available to the GenAI agent or model.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | `You are a helpful assistant` |
| Has PII | maybe |
| Exists in OpenTelemetry | Yes |
| Example | `[{"type": "function", "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "unit"]}}]` |
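The example above suggests a `{type, name, description, parameters}` shape per tool; a minimal sketch of stringifying such a catalog (treat the shape as an assumption drawn from the example, not a schema this document defines):

```python
import json

def serialize_tool_definitions(tools):
    """Return the JSON string for gen_ai.tool.definitions, requiring a name per tool."""
    for tool in tools:
        if "name" not in tool:
            raise ValueError("tool definition missing 'name'")
    return json.dumps(tools)

value = serialize_tool_definitions([
    {
        "type": "function",
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
])
```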

### gen_ai.tool.description

@@ -548,6 +564,67 @@ The input messages sent to the model
| Deprecated | Yes, no replacement at this time |
| Deprecation Reason | Deprecated from OTEL, use gen_ai.input.messages with the new format instead. |

### gen_ai.request.available_tools

The available tools for the model. It has to be a stringified version of an array of objects.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]` |
| Deprecated | Yes, use `gen_ai.tool.definitions` instead |

### gen_ai.request.messages

The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]` |
| Deprecated | Yes, use `gen_ai.input.messages` instead |
| Aliases | `ai.input_messages` |

### gen_ai.response.text

The model's response text messages. It has to be a stringified version of an array of response text messages.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]` |
| Deprecated | Yes, use `gen_ai.output.messages` instead |

### gen_ai.response.tool_calls

The tool calls in the model's response. It has to be a stringified version of an array of objects.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | `[{"name": "get_weather", "arguments": {"location": "Paris"}}]` |
| Deprecated | Yes, use `gen_ai.output.messages` instead |

### gen_ai.system.message

The system instructions passed to the model.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | `You are a helpful assistant` |
| Deprecated | Yes, use `gen_ai.system_instructions` instead |
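The deprecation notes above each name a replacement key; a sketch of a migration pass over an attribute dict built from that mapping. Note that keys whose payload format also changed (e.g. `gen_ai.request.messages` → `gen_ai.input.messages`) still need their values converted separately; this only renames keys.

```python
# Mapping taken from the deprecation notes in this file.
DEPRECATED_TO_REPLACEMENT = {
    "gen_ai.request.available_tools": "gen_ai.tool.definitions",
    "gen_ai.request.messages": "gen_ai.input.messages",
    "gen_ai.response.text": "gen_ai.output.messages",
    "gen_ai.response.tool_calls": "gen_ai.output.messages",
    "gen_ai.system.message": "gen_ai.system_instructions",
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
}

def rename_deprecated(attributes):
    """Return a copy with deprecated keys renamed; on collision the first-seen value wins."""
    out = {}
    for key, value in attributes.items():
        new_key = DEPRECATED_TO_REPLACEMENT.get(key, key)
        if new_key not in out:
            out[new_key] = value
    return out

migrated = rename_deprecated({
    "gen_ai.usage.prompt_tokens": 10,
    "gen_ai.system.message": "You are a helpful assistant",
})
```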

### gen_ai.usage.completion_tokens

The number of tokens used in the GenAI response (completion).