
Commit fbb4397

Added more deprecations/aliases (#65)

1 parent 596f2e8 · commit fbb4397

File tree

8 files changed: +140 −61 lines


generated/attributes/ai.md

Lines changed: 53 additions & 49 deletions
@@ -5,21 +5,17 @@
 - [Stable Attributes](#stable-attributes)
 - [ai.citations](#aicitations)
 - [ai.documents](#aidocuments)
-- [ai.input_messages](#aiinput_messages)
 - [ai.is_search_required](#aiis_search_required)
 - [ai.metadata](#aimetadata)
 - [ai.pipeline.name](#aipipelinename)
 - [ai.preamble](#aipreamble)
 - [ai.raw_prompting](#airaw_prompting)
 - [ai.response_format](#airesponse_format)
-- [ai.responses](#airesponses)
 - [ai.search_queries](#aisearch_queries)
 - [ai.search_results](#aisearch_results)
 - [ai.streaming](#aistreaming)
 - [ai.tags](#aitags)
 - [ai.texts](#aitexts)
-- [ai.tool_calls](#aitool_calls)
-- [ai.tools](#aitools)
 - [ai.total_cost](#aitotal_cost)
 - [ai.warnings](#aiwarnings)
 - [Deprecated Attributes](#deprecated-attributes)
@@ -28,12 +24,16 @@
 - [ai.frequency_penalty](#aifrequency_penalty)
 - [ai.function_call](#aifunction_call)
 - [ai.generation_id](#aigeneration_id)
+- [ai.input_messages](#aiinput_messages)
 - [ai.model_id](#aimodel_id)
 - [ai.model.provider](#aimodelprovider)
 - [ai.presence_penalty](#aipresence_penalty)
 - [ai.prompt_tokens.used](#aiprompt_tokensused)
+- [ai.responses](#airesponses)
 - [ai.seed](#aiseed)
 - [ai.temperature](#aitemperature)
+- [ai.tool_calls](#aitool_calls)
+- [ai.tools](#aitools)
 - [ai.top_k](#aitop_k)
 - [ai.top_p](#aitop_p)
 - [ai.total_tokens.used](#aitotal_tokensused)
@@ -62,18 +62,6 @@ Documents or content chunks used as context for the AI model.
 | Exists in OpenTelemetry | No |
 | Example | `["document1.txt","document2.pdf"]` |
 
-### ai.input_messages
-
-The input messages sent to the model
-
-| Property | Value |
-| --- | --- |
-| Type | `string` |
-| Has PII | maybe |
-| Exists in OpenTelemetry | No |
-| Example | `[{"role": "user", "message": "hello"}]` |
-| Aliases | `gen_ai.prompt` |
-
 ### ai.is_search_required
 
 Boolean indicating if the model needs to perform a search.
@@ -140,17 +128,6 @@ For an AI model call, the format of the response
 | Exists in OpenTelemetry | No |
 | Example | `json_object` |
 
-### ai.responses
-
-The response messages sent back by the AI model.
-
-| Property | Value |
-| --- | --- |
-| Type | `string[]` |
-| Has PII | false |
-| Exists in OpenTelemetry | No |
-| Example | `["hello","world"]` |
-
 ### ai.search_queries
 
 Queries used to search for relevant context or documents.
@@ -206,28 +183,6 @@ Raw text inputs provided to the model.
 | Exists in OpenTelemetry | No |
 | Example | `["Hello, how are you?","What is the capital of France?"]` |
 
-### ai.tool_calls
-
-For an AI model call, the tool calls that were made.
-
-| Property | Value |
-| --- | --- |
-| Type | `string[]` |
-| Has PII | true |
-| Exists in OpenTelemetry | No |
-| Example | `["tool_call_1","tool_call_2"]` |
-
-### ai.tools
-
-For an AI model call, the functions that are available
-
-| Property | Value |
-| --- | --- |
-| Type | `string[]` |
-| Has PII | false |
-| Exists in OpenTelemetry | No |
-| Example | `["function_1","function_2"]` |
-
 ### ai.total_cost
 
 The total cost for the tokens used.
@@ -315,6 +270,19 @@ Unique identifier for the completion.
 | Example | `gen_123abc` |
 | Deprecated | Yes, use `gen_ai.response.id` instead |
 
+### ai.input_messages
+
+The input messages sent to the model
+
+| Property | Value |
+| --- | --- |
+| Type | `string` |
+| Has PII | maybe |
+| Exists in OpenTelemetry | No |
+| Example | `[{"role": "user", "message": "hello"}]` |
+| Deprecated | Yes, use `gen_ai.request.messages` instead |
+| Aliases | `gen_ai.prompt`, `gen_ai.request.messages` |
+
 ### ai.model_id
 
 The vendor-specific ID of the model used.
@@ -365,6 +333,18 @@ The number of tokens used to process just the prompt.
 | Deprecated | Yes, use `gen_ai.usage.input_tokens` instead |
 | Aliases | `gen_ai.usage.prompt_tokens`, `gen_ai.usage.input_tokens` |
 
+### ai.responses
+
+The response messages sent back by the AI model.
+
+| Property | Value |
+| --- | --- |
+| Type | `string[]` |
+| Has PII | false |
+| Exists in OpenTelemetry | No |
+| Example | `["hello","world"]` |
+| Deprecated | Yes, use `gen_ai.response.text` instead |
+
 ### ai.seed
 
 The seed, ideally models given the same seed and same other parameters will produce the exact same output.
@@ -389,6 +369,30 @@ For an AI model call, the temperature parameter. Temperature essentially means h
 | Example | `0.1` |
 | Deprecated | Yes, use `gen_ai.request.temperature` instead |
 
+### ai.tool_calls
+
+For an AI model call, the tool calls that were made.
+
+| Property | Value |
+| --- | --- |
+| Type | `string[]` |
+| Has PII | true |
+| Exists in OpenTelemetry | No |
+| Example | `["tool_call_1","tool_call_2"]` |
+| Deprecated | Yes, use `gen_ai.response.tool_calls` instead |
+
+### ai.tools
+
+For an AI model call, the functions that are available
+
+| Property | Value |
+| --- | --- |
+| Type | `string[]` |
+| Has PII | false |
+| Exists in OpenTelemetry | No |
+| Example | `["function_1","function_2"]` |
+| Deprecated | Yes, use `gen_ai.request.available_tools` instead |
+
 ### ai.top_k
 
 Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).

generated/attributes/gen_ai.md

Lines changed: 1 addition & 1 deletion
@@ -135,7 +135,7 @@ The maximum number of tokens to generate in the response.
 
 ### gen_ai.request.messages
 
-The messages passed to the model. The "content" can be a string or an array of objects. It has to be a stringified version of an array of objects.
+The messages passed to the model. It has to be a stringified version of an array of objects. The "content" can be a string or an array of objects.
 
 | Property | Value |
 | --- | --- |
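The contract restated above — `gen_ai.request.messages` carries a stringified array of message objects, and each "content" may be a string or an array of objects — can be illustrated with a small sketch. The `serializeMessages` helper and the type names are hypothetical, not part of the library:

```typescript
// Shape assumed from the attribute description above: an array of message
// objects, where "content" is either a string or an array of objects.
type MessageContent = string | Array<Record<string, unknown>>;

interface Message {
  role: string;
  content: MessageContent;
}

// Hypothetical helper: the attribute value must be the stringified array,
// never the array itself.
function serializeMessages(messages: Message[]): string {
  return JSON.stringify(messages);
}

const attrValue = serializeMessages([
  { role: 'user', content: 'hello' },
  { role: 'assistant', content: [{ type: 'text', text: 'hi there' }] },
]);
// attrValue is a plain string, suitable as a `gen_ai.request.messages` value.
```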

javascript/sentry-conventions/src/attributes.ts

Lines changed: 6 additions & 6 deletions
@@ -158,8 +158,9 @@ export type AI_GENERATION_ID_TYPE = string;
  *
  * Attribute defined in OTEL: No
  *
- * Aliases: {@link GEN_AI_PROMPT} `gen_ai.prompt`
+ * Aliases: {@link GEN_AI_PROMPT} `gen_ai.prompt`, {@link GEN_AI_REQUEST_MESSAGES} `gen_ai.request.messages`
  *
+ * @deprecated Use {@link GEN_AI_REQUEST_MESSAGES} (gen_ai.request.messages) instead
  * @example "[{\"role\": \"user\", \"message\": \"hello\"}]"
  */
 export const AI_INPUT_MESSAGES = 'ai.input_messages';
@@ -388,6 +389,7 @@ export type AI_RESPONSE_FORMAT_TYPE = string;
  *
  * Attribute defined in OTEL: No
  *
+ * @deprecated Use {@link GEN_AI_RESPONSE_TEXT} (gen_ai.response.text) instead
  * @example ["hello","world"]
  */
 export const AI_RESPONSES = 'ai.responses';
@@ -550,6 +552,7 @@ export type AI_TEXTS_TYPE = Array<string>;
  *
  * Attribute defined in OTEL: No
  *
+ * @deprecated Use {@link GEN_AI_RESPONSE_TOOL_CALLS} (gen_ai.response.tool_calls) instead
  * @example ["tool_call_1","tool_call_2"]
  */
 export const AI_TOOL_CALLS = 'ai.tool_calls';
@@ -570,6 +573,7 @@ export type AI_TOOL_CALLS_TYPE = Array<string>;
  *
  * Attribute defined in OTEL: No
  *
+ * @deprecated Use {@link GEN_AI_REQUEST_AVAILABLE_TOOLS} (gen_ai.request.available_tools) instead
  * @example ["function_1","function_2"]
  */
 export const AI_TOOLS = 'ai.tools';
@@ -2016,7 +2020,7 @@ export type GEN_AI_REQUEST_MAX_TOKENS_TYPE = number;
 // Path: model/attributes/gen_ai/gen_ai__request__messages.json
 
 /**
- * The messages passed to the model. The "content" can be a string or an array of objects. It has to be a stringified version of an array of objects. `gen_ai.request.messages`
+ * The messages passed to the model. It has to be a stringified version of an array of objects. The "content" can be a string or an array of objects. `gen_ai.request.messages`
  *
  * Attribute Value Type: `string` {@link GEN_AI_REQUEST_MESSAGES_TYPE}
  *
@@ -5989,21 +5993,17 @@ export type AttributeValue = string | number | boolean | Array<string> | Array<n
 export type Attributes = {
   [AI_CITATIONS]?: AI_CITATIONS_TYPE;
   [AI_DOCUMENTS]?: AI_DOCUMENTS_TYPE;
-  [AI_INPUT_MESSAGES]?: AI_INPUT_MESSAGES_TYPE;
   [AI_IS_SEARCH_REQUIRED]?: AI_IS_SEARCH_REQUIRED_TYPE;
   [AI_METADATA]?: AI_METADATA_TYPE;
   [AI_PIPELINE_NAME]?: AI_PIPELINE_NAME_TYPE;
   [AI_PREAMBLE]?: AI_PREAMBLE_TYPE;
   [AI_RAW_PROMPTING]?: AI_RAW_PROMPTING_TYPE;
   [AI_RESPONSE_FORMAT]?: AI_RESPONSE_FORMAT_TYPE;
-  [AI_RESPONSES]?: AI_RESPONSES_TYPE;
   [AI_SEARCH_QUERIES]?: AI_SEARCH_QUERIES_TYPE;
   [AI_SEARCH_RESULTS]?: AI_SEARCH_RESULTS_TYPE;
   [AI_STREAMING]?: AI_STREAMING_TYPE;
   [AI_TAGS]?: AI_TAGS_TYPE;
   [AI_TEXTS]?: AI_TEXTS_TYPE;
-  [AI_TOOL_CALLS]?: AI_TOOL_CALLS_TYPE;
-  [AI_TOOLS]?: AI_TOOLS_TYPE;
   [AI_TOTAL_COST]?: AI_TOTAL_COST_TYPE;
   [AI_WARNINGS]?: AI_WARNINGS_TYPE;
   [APP_START_TYPE]?: APP_START_TYPE_TYPE;
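A caller-side migration for one of the newly deprecated constants might look like the sketch below. The key strings come from the diff above; the `legacyAttributes`/`migratedAttributes` objects are generic stand-ins for whatever attribute bag the telemetry SDK in use actually exposes:

```typescript
// Deprecated key and its replacement, both taken from this commit.
const AI_INPUT_MESSAGES = 'ai.input_messages'; // now @deprecated
const GEN_AI_REQUEST_MESSAGES = 'gen_ai.request.messages'; // replacement

// Before this commit an instrumentation might have written the old key.
const legacyAttributes: Record<string, string> = {
  [AI_INPUT_MESSAGES]: JSON.stringify([{ role: 'user', message: 'hello' }]),
};

// After it, the replacement key carries the same stringified payload.
const migratedAttributes: Record<string, string> = {
  [GEN_AI_REQUEST_MESSAGES]: legacyAttributes[AI_INPUT_MESSAGES],
};
```

Because `ai.input_messages` now lists `gen_ai.request.messages` as an alias, backends that resolve aliases should treat both spellings as the same attribute during the transition.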

model/attributes/ai/ai__input_messages.json

Lines changed: 6 additions & 2 deletions
@@ -7,6 +7,10 @@
   },
   "is_in_otel": false,
   "example": "[{\"role\": \"user\", \"message\": \"hello\"}]",
-  "alias": ["gen_ai.prompt"],
-  "sdks": ["python"]
+  "alias": ["gen_ai.prompt", "gen_ai.request.messages"],
+  "sdks": ["python"],
+  "deprecation": {
+    "_status": null,
+    "replacement": "gen_ai.request.messages"
+  }
 }

model/attributes/ai/ai__responses.json

Lines changed: 5 additions & 1 deletion
@@ -7,5 +7,9 @@
   },
   "is_in_otel": false,
   "example": ["hello", "world"],
-  "sdks": ["python"]
+  "sdks": ["python"],
+  "deprecation": {
+    "_status": null,
+    "replacement": "gen_ai.response.text"
+  }
 }

model/attributes/ai/ai__tool_calls.json

Lines changed: 5 additions & 1 deletion
@@ -6,5 +6,9 @@
     "key": "true"
   },
   "is_in_otel": false,
-  "example": ["tool_call_1", "tool_call_2"]
+  "example": ["tool_call_1", "tool_call_2"],
+  "deprecation": {
+    "_status": null,
+    "replacement": "gen_ai.response.tool_calls"
+  }
 }

model/attributes/ai/ai__tools.json

Lines changed: 5 additions & 1 deletion
@@ -6,5 +6,9 @@
     "key": "false"
   },
   "is_in_otel": false,
-  "example": ["function_1", "function_2"]
+  "example": ["function_1", "function_2"],
+  "deprecation": {
+    "_status": null,
+    "replacement": "gen_ai.request.available_tools"
+  }
 }

shared/deprecated_attributes.json

Lines changed: 59 additions & 0 deletions
@@ -212,6 +212,22 @@
       "replacement": "gen_ai.response.id"
     }
   },
+  {
+    "key": "ai.input_messages",
+    "brief": "The input messages sent to the model",
+    "type": "string",
+    "pii": {
+      "key": "maybe"
+    },
+    "is_in_otel": false,
+    "example": "[{\"role\": \"user\", \"message\": \"hello\"}]",
+    "alias": ["gen_ai.prompt", "gen_ai.request.messages"],
+    "sdks": ["python"],
+    "deprecation": {
+      "_status": null,
+      "replacement": "gen_ai.request.messages"
+    }
+  },
   {
     "key": "ai.model.provider",
     "brief": "The provider of the model.",
@@ -272,6 +288,21 @@
       "replacement": "gen_ai.usage.input_tokens"
     }
   },
+  {
+    "key": "ai.responses",
+    "brief": "The response messages sent back by the AI model.",
+    "type": "string[]",
+    "pii": {
+      "key": "false"
+    },
+    "is_in_otel": false,
+    "example": ["hello", "world"],
+    "sdks": ["python"],
+    "deprecation": {
+      "_status": null,
+      "replacement": "gen_ai.response.text"
+    }
+  },
   {
     "key": "ai.seed",
     "brief": "The seed, ideally models given the same seed and same other parameters will produce the exact same output.",
@@ -300,6 +331,34 @@
       "replacement": "gen_ai.request.temperature"
     }
   },
+  {
+    "key": "ai.tool_calls",
+    "brief": "For an AI model call, the tool calls that were made.",
+    "type": "string[]",
+    "pii": {
+      "key": "true"
+    },
+    "is_in_otel": false,
+    "example": ["tool_call_1", "tool_call_2"],
+    "deprecation": {
+      "_status": null,
+      "replacement": "gen_ai.response.tool_calls"
+    }
+  },
+  {
+    "key": "ai.tools",
+    "brief": "For an AI model call, the functions that are available",
+    "type": "string[]",
+    "pii": {
+      "key": "false"
+    },
+    "is_in_otel": false,
+    "example": ["function_1", "function_2"],
+    "deprecation": {
+      "_status": null,
+      "replacement": "gen_ai.request.available_tools"
+    }
+  },
   {
     "key": "ai.top_k",
     "brief": "Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).",
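The `deprecation.replacement` metadata added throughout this commit lends itself to mechanical migration. A minimal sketch of consuming it, assuming entries shaped like those in `shared/deprecated_attributes.json` (loading the file from disk is omitted, and only two entries are inlined for illustration):

```typescript
// Entry shape mirrors the deprecation records added in this commit.
interface DeprecatedEntry {
  key: string;
  deprecation: { _status: string | null; replacement: string };
}

// Two entries inlined from the JSON above; a real consumer would load the file.
const deprecatedEntries: DeprecatedEntry[] = [
  { key: 'ai.responses', deprecation: { _status: null, replacement: 'gen_ai.response.text' } },
  { key: 'ai.tools', deprecation: { _status: null, replacement: 'gen_ai.request.available_tools' } },
];

// Rewrite an attribute bag: move each deprecated key's value to its replacement.
function migrateAttributes(attrs: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = { ...attrs };
  for (const entry of deprecatedEntries) {
    if (entry.key in out) {
      out[entry.deprecation.replacement] = out[entry.key];
      delete out[entry.key];
    }
  }
  return out;
}

const migrated = migrateAttributes({ 'ai.responses': ['hello', 'world'] });
// migrated now carries `gen_ai.response.text` instead of `ai.responses`.
```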
