@@ -22,6 +23,7 @@ This document defines the attributes used to describe telemetry in the context o
| <a id="gen-ai-request-max-tokens" href="#gen-ai-request-max-tokens">`gen_ai.request.max_tokens`</a> | int | The maximum number of tokens the model generates for a request. | `100` | |
| <a id="gen-ai-request-model" href="#gen-ai-request-model">`gen_ai.request.model`</a> | string | The name of the GenAI model a request is being made to. | `gpt-4` | |
| <a id="gen-ai-request-presence-penalty" href="#gen-ai-request-presence-penalty">`gen_ai.request.presence_penalty`</a> | double | The presence penalty setting for the GenAI request. | `0.1` | |
+| <a id="gen-ai-request-seed" href="#gen-ai-request-seed">`gen_ai.request.seed`</a> | int | Requests with the same seed value are more likely to return the same result. | `100` | |
| <a id="gen-ai-request-stop-sequences" href="#gen-ai-request-stop-sequences">`gen_ai.request.stop_sequences`</a> | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` | |
| <a id="gen-ai-request-temperature" href="#gen-ai-request-temperature">`gen_ai.request.temperature`</a> | double | The temperature setting for the GenAI request. | `0.0` | |
| <a id="gen-ai-request-top-k" href="#gen-ai-request-top-k">`gen_ai.request.top_k`</a> | double | The top_k sampling setting for the GenAI request. | `1.0` | |
@@ -88,7 +90,6 @@ This group defines attributes for OpenAI.
| <aid="gen-ai-openai-request-response-format"href="#gen-ai-openai-request-response-format">`gen_ai.openai.request.response_format`</a> | string | The response format that is requested. |`json`||
91
-
| <aid="gen-ai-openai-request-seed"href="#gen-ai-openai-request-seed">`gen_ai.openai.request.seed`</a> | int | Requests with same seed value more likely to return same result. |`100`||
92
93
| <aid="gen-ai-openai-request-service-tier"href="#gen-ai-openai-request-service-tier">`gen_ai.openai.request.service_tier`</a> | string | The service tier requested. May be a specific tier, default, or auto. |`auto`; `default`||
93
94
| <aid="gen-ai-openai-response-service-tier"href="#gen-ai-openai-response-service-tier">`gen_ai.openai.response.service_tier`</a> | string | The service tier used for the response. |`scale`; `default`||
94
95
| <aid="gen-ai-openai-response-system-fingerprint"href="#gen-ai-openai-response-system-fingerprint">`gen_ai.openai.response.system_fingerprint`</a> | string | A fingerprint to track any eventual change in the Generative AI environment. |`fp_44709d6fcb`||
| <aid="gen-ai-prompt"href="#gen-ai-prompt">`gen_ai.prompt`</a> | string | Deprecated, use Event API to report prompt contents. |`[{'role': 'user', 'content': 'What is the capital of France?'}]`|<br>Removed, no replacement at this time. |
123
124
| <aid="gen-ai-usage-completion-tokens"href="#gen-ai-usage-completion-tokens">`gen_ai.usage.completion_tokens`</a> | int | Deprecated, use `gen_ai.usage.output_tokens` instead. |`42`|<br>Replaced by `gen_ai.usage.output_tokens` attribute. |
124
125
| <aid="gen-ai-usage-prompt-tokens"href="#gen-ai-usage-prompt-tokens">`gen_ai.usage.prompt_tokens`</a> | int | Deprecated, use `gen_ai.usage.input_tokens` instead. |`42`|<br>Replaced by `gen_ai.usage.input_tokens` attribute. |
| <aid="gen-ai-openai-request-seed"href="#gen-ai-openai-request-seed">`gen_ai.openai.request.seed`</a> | int | Deprecated, use `gen_ai.request.seed`. |`100`|<br>Replaced by `gen_ai.request.seed` attribute. |
docs/gen-ai/openai.md (1 addition & 1 deletion)
@@ -40,7 +40,7 @@ attributes and ones specific to OpenAI.
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the GenAI model a request is being made to. [2] | `gpt-4` | `Required` | |
| [`error.type`](/docs/attributes-registry/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error | |
| [`gen_ai.openai.request.response_format`](/docs/attributes-registry/gen-ai.md) | string | The response format that is requested. | `json` | `Conditionally Required` if the request includes a response_format | |
-| [`gen_ai.openai.request.seed`](/docs/attributes-registry/gen-ai.md) | int | Requests with the same seed value are more likely to return the same result. | `100` | `Conditionally Required` if the request includes a seed | |
+| [`gen_ai.openai.request.seed`](/docs/attributes-registry/gen-ai.md) | int | Deprecated, use `gen_ai.request.seed`. | `100` | `Conditionally Required` if the request includes a seed | Replaced by `gen_ai.request.seed` attribute. |
| [`gen_ai.openai.request.service_tier`](/docs/attributes-registry/gen-ai.md) | string | The service tier requested. May be a specific tier, default, or auto. | `auto`; `default` | `Conditionally Required` [4] | |
| [`gen_ai.openai.response.service_tier`](/docs/attributes-registry/gen-ai.md) | string | The service tier used for the response. | `scale`; `default` | `Conditionally Required` [5] | |
| [`server.port`](/docs/attributes-registry/server.md) | int | GenAI server port. [6] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. | |
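The deprecated row above implies a small migration for OpenAI instrumentations: populate `gen_ai.request.seed` instead of `gen_ai.openai.request.seed`, and only when the request actually carries a seed, matching the conditional requirement. A brief sketch, assuming an OpenTelemetry Python `Span` and an OpenAI-style request dict (both names illustrative):

```python
from opentelemetry.trace import Span


def record_seed(span: Span, request: dict) -> None:
    """Set the seed attribute only when the request includes a seed."""
    seed = request.get("seed")
    if seed is None:
        return  # conditionally required: omit the attribute when no seed is given
    # Previously emitted as the OpenAI-specific attribute:
    #   span.set_attribute("gen_ai.openai.request.seed", seed)
    span.set_attribute("gen_ai.request.seed", seed)
```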