Adopt OTel semantic conventions for agents and frameworks #2575
Open · krisztianfekete wants to merge 1 commit into google:main from krisztianfekete:main · +85 −14
```diff
@@ -70,10 +70,13 @@ def trace_tool_call(
     function_response_event: The event with the function response details.
   """
   span = trace.get_current_span()
-  span.set_attribute('gen_ai.system', 'gcp.vertex.agent')
+  # Standard OpenTelemetry GenAI attributes as of OTel SemConv v1.36.0 for Agents and Frameworks
+  span.set_attribute('gen_ai.system', 'gcp.vertex_ai')
+  span.set_attribute('gen_ai.operation.name', 'execute_tool')
   span.set_attribute('gen_ai.tool.name', tool.name)
   span.set_attribute('gen_ai.tool.description', tool.description)

   tool_call_id = '<not specified>'
   tool_response = '<not specified>'
   if function_response_event.content.parts:
```
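For reference, a minimal standalone sketch of what the updated tool-call instrumentation records, using the `opentelemetry-api` package directly; the tracer name, tool name, and description here are invented for illustration:

```python
from opentelemetry import trace

tracer = trace.get_tracer('adk.telemetry.sketch')

# Invented tool metadata standing in for an ADK tool object.
tool_name = 'get_weather'
tool_description = 'Returns the current weather for a city.'

# OTel GenAI SemConv names tool-execution spans 'execute_tool {tool name}'.
with tracer.start_as_current_span(f'execute_tool {tool_name}') as span:
    span.set_attribute('gen_ai.system', 'gcp.vertex_ai')
    span.set_attribute('gen_ai.operation.name', 'execute_tool')
    span.set_attribute('gen_ai.tool.name', tool_name)
    span.set_attribute('gen_ai.tool.description', tool_description)
```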
```diff
@@ -86,6 +89,7 @@ def trace_tool_call(
   span.set_attribute('gen_ai.tool.call.id', tool_call_id)

+  # Vendor-specific attributes (moved from gen_ai.* to gcp.vertex.agent.*)
   if not isinstance(tool_response, dict):
     tool_response = {'result': tool_response}
   span.set_attribute(
```
```diff
@@ -121,12 +125,15 @@ def trace_merged_tool_calls(
   """

   span = trace.get_current_span()
-  span.set_attribute('gen_ai.system', 'gcp.vertex.agent')
+  # Standard OpenTelemetry GenAI attributes
+  span.set_attribute('gen_ai.system', 'gcp.vertex_ai')
+  span.set_attribute('gen_ai.operation.name', 'execute_tool')
   span.set_attribute('gen_ai.tool.name', '(merged tools)')
   span.set_attribute('gen_ai.tool.description', '(merged tools)')
   span.set_attribute('gen_ai.tool.call.id', response_event_id)

+  # Vendor-specific attributes
   span.set_attribute('gcp.vertex.agent.tool_call_args', 'N/A')
   span.set_attribute('gcp.vertex.agent.event_id', response_event_id)
   try:
```
```diff
@@ -167,23 +174,38 @@ def trace_call_llm(
     llm_response: The LLM response object.
   """
   span = trace.get_current_span()
-  # Special standard Open Telemetry GenaI attributes that indicate
-  # that this is a span related to a Generative AI system.
-  span.set_attribute('gen_ai.system', 'gcp.vertex.agent')
+  # Standard OpenTelemetry GenAI attributes
+  span.set_attribute('gen_ai.system', 'gcp.vertex_ai')
+  span.set_attribute('gen_ai.operation.name', 'chat')
+  span.set_attribute('gen_ai.request.model', llm_request.model)
+
+  if hasattr(llm_response, 'id') and llm_response.id:
+    span.set_attribute('gen_ai.response.id', llm_response.id)
+
+  # Set response model if different from request model
+  if (
+      hasattr(llm_response, 'model')
+      and llm_response.model
+      and llm_response.model != llm_request.model
+  ):
+    span.set_attribute('gen_ai.response.model', llm_response.model)
```
Review comment on lines +183 to +193:

Where did you find the
```diff
   span.set_attribute(
       'gcp.vertex.agent.invocation_id', invocation_context.invocation_id
   )
   span.set_attribute(
       'gcp.vertex.agent.session_id', invocation_context.session.id
   )
   span.set_attribute('gcp.vertex.agent.event_id', event_id)

   # Consider removing once GenAI SDK provides a way to record this info.
   span.set_attribute(
       'gcp.vertex.agent.llm_request',
       _safe_json_serialize(_build_llm_request_for_trace(llm_request)),
   )
   # Consider removing once GenAI SDK provides a way to record this info.

+  # Standard GenAI request attributes
   if llm_request.config:
     if llm_request.config.top_p:
       span.set_attribute(
```
```diff
@@ -195,6 +217,14 @@ def trace_call_llm(
       'gen_ai.request.max_tokens',
       llm_request.config.max_output_tokens,
   )
+  if (
+      hasattr(llm_request.config, 'temperature')
+      and llm_request.config.temperature is not None
+  ):
+    span.set_attribute(
+        'gen_ai.request.temperature',
+        llm_request.config.temperature,
+    )

   try:
     llm_response_json = llm_response.model_dump_json(exclude_none=True)
```
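One detail worth noting in this hunk: `top_p` above is guarded with a bare truthiness check, while the new `temperature` check uses `is not None`. The distinction matters for a value of `0.0`, which is falsy but meaningful. A self-contained illustration (the config class is a stand-in invented for this sketch):

```python
class FakeConfig:
    """Stand-in for the request config object, invented for illustration."""
    temperature = 0.0  # an explicit, meaningful setting
    top_p = None       # never set by the caller

config = FakeConfig()

# A bare truthiness check silently drops temperature=0.0 ...
if config.temperature:
    print('recorded by truthiness check')  # not reached

# ... while the hasattr/is-not-None guard records it as intended.
if hasattr(config, 'temperature') and config.temperature is not None:
    print('recorded by is-not-None check')  # reached
```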
```diff
@@ -206,6 +236,7 @@ def trace_call_llm(
       llm_response_json,
   )

+  # Standard GenAI usage and response attributes
   if llm_response.usage_metadata is not None:
     span.set_attribute(
         'gen_ai.usage.input_tokens',
```
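The usage block under this comment maps token counts onto the standard attributes. A sketch of that mapping, assuming GenAI-SDK-style `usage_metadata` field names (`prompt_token_count`, `candidates_token_count`), which are not visible in this hunk:

```python
from opentelemetry import trace

class FakeUsageMetadata:
    """Stand-in for llm_response.usage_metadata, invented for illustration."""
    prompt_token_count = 42
    candidates_token_count = 7

usage = FakeUsageMetadata()
span = trace.get_current_span()

# SemConv: prompt tokens map to input, completion tokens to output.
span.set_attribute('gen_ai.usage.input_tokens', usage.prompt_token_count)
span.set_attribute('gen_ai.usage.output_tokens', usage.candidates_token_count)
```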
```diff
@@ -286,3 +317,41 @@ def _build_llm_request_for_trace(llm_request: LlmRequest) -> dict[str, Any]:
       )
   )
   return result
+
+
+def _create_span_name(operation_name: str, model_name: str) -> str:
+  """Creates a span name following OpenTelemetry GenAI conventions.
+
+  Args:
+    operation_name: The GenAI operation name (e.g., 'generate_content', 'execute_tool').
+    model_name: The model name being used.
+
+  Returns:
+    A span name in the format '{operation_name} {model_name}'.
+  """
+  return f'{operation_name} {model_name}'
+
+
+def add_genai_prompt_event(span: trace.Span, prompt_content: str):
+  """Adds a GenAI prompt event to the span following OpenTelemetry conventions.
+
+  Args:
+    span: The OpenTelemetry span to add the event to.
+    prompt_content: The prompt content as a JSON string.
+  """
+  span.add_event(
+      name='gen_ai.content.prompt', attributes={'gen_ai.prompt': prompt_content}
+  )
+
+
+def add_genai_completion_event(span: trace.Span, completion_content: str):
+  """Adds a GenAI completion event to the span following OpenTelemetry conventions.
+
+  Args:
+    span: The OpenTelemetry span to add the event to.
+    completion_content: The completion content as a JSON string.
+  """
+  span.add_event(
+      name='gen_ai.content.completion',
+      attributes={'gen_ai.completion': completion_content},
+  )
```
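A usage sketch for the three new helpers, wiring up a console exporter so the events are visible. It assumes the `opentelemetry-sdk` package and that the helpers are importable from the changed module; the model name and JSON payloads are invented:

```python
import json

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# '{operation_name} {model_name}' is the SemConv span-name format.
span_name = _create_span_name('chat', 'gemini-2.0-flash')
with tracer.start_as_current_span(span_name) as span:
    add_genai_prompt_event(span, json.dumps([{'role': 'user', 'content': 'Hi'}]))
    add_genai_completion_event(span, json.dumps({'role': 'model', 'content': 'Hello!'}))
```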
Review comment:

What's the reasoning for only setting `gen_ai.response.model` when it's different from `gen_ai.request.model`? Is this behavior specified somewhere, so that it would be reasonable to assume both are the same if the response model is missing?
Reply:

SemConv marks `gen_ai.request.model` as “Conditionally Required (if available)” and `gen_ai.response.model` as “Recommended,” not required, so it might make sense to only include it if it provides additional information. I have a vague recollection of this practice from one of the many SemConv GitHub issues discussing this, but cannot find it now.
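For illustration, the behavior under discussion as an isolated sketch (the helper function and its name are invented; the attribute names are from SemConv):

```python
from opentelemetry import trace

def record_model_attributes(span, request_model, response_model):
    """Invented helper mirroring the PR's conditional logic.

    gen_ai.request.model is always recorded; gen_ai.response.model only
    when the backend reports something different from what was requested,
    e.g. an alias resolving to a pinned model version.
    """
    span.set_attribute('gen_ai.request.model', request_model)
    if response_model and response_model != request_model:
        span.set_attribute('gen_ai.response.model', response_model)

# With identical models, only the request attribute is set.
record_model_attributes(trace.get_current_span(), 'gemini-2.0-flash', 'gemini-2.0-flash')
```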