feat: Add id and finish_reason to ModelResponse #2325

Open: wants to merge 1 commit into main

Conversation

@fatelei fatelei commented Jul 27, 2025

Feature:

Fixes #886

@fatelei fatelei force-pushed the issue-886 branch 3 times, most recently from bacbc82 to 28051fe Compare July 28, 2025 10:41

finish_reason: str | None = None
"""The reason the model finished generating this response, e.g. 'stop', 'length', etc."""

def otel_events(self, settings: InstrumentationSettings) -> list[Event]:

Collaborator:

The issue mentions:

These fields would be used to populate gen_ai.response.id and gen_ai.response.finish_reasons in opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans#genai-attributes

Can you please handle that here as well so this PR can close that issue?
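A minimal sketch of what populating those span attributes could look like (the helper name is hypothetical; the attribute keys come from the linked semconv page, which defines gen_ai.response.finish_reasons as an array of strings):

```python
from typing import Dict, Optional


def otel_response_attributes(response_id: Optional[str], finish_reason: Optional[str]) -> Dict[str, object]:
    """Build gen_ai.response.* span attributes from the new ModelResponse fields."""
    attrs: Dict[str, object] = {}
    if response_id is not None:
        attrs['gen_ai.response.id'] = response_id
    if finish_reason is not None:
        # The semconv defines finish_reasons as an array of strings,
        # so a single value is wrapped in a list.
        attrs['gen_ai.response.finish_reasons'] = [finish_reason]
    return attrs
```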


Collaborator:

Note that gen_ai.response.finish_reasons has specific allowed values: #1882 (comment)

I also left some other related suggestions on that older PR that tried to add finish_reason -- can you please check those out as well?

github-actions bot commented Aug 5, 2025

This PR is stale, and will be closed in 3 days if no reply is received.

@github-actions github-actions bot added the Stale label Aug 5, 2025
@fatelei fatelei force-pushed the issue-886 branch 2 times, most recently from 40d9442 to c22a9f4 Compare August 5, 2025 16:21
@@ -773,7 +773,30 @@ def handler(_: httpx.Request):
     with pytest.raises(ModelHTTPError) as exc_info:
         await agent.run('Hello')

-    assert str(exc_info.value) == snapshot('status_code: 401, model_name: gemini-1.5-flash, body: invalid request')
+    assert str(exc_info.value) == snapshot("""\
+status_code: 400, model_name: gemini-1.5-flash, body: {

Collaborator:

This change is incorrect, we should still be getting the mocked 401 response shouldn't we?

@@ -276,7 +276,9 @@ def get_jpg(request: httpx.Request) -> httpx.Response:
     graph1.name = None
     img = graph1.mermaid_image(start_node=Foo(), httpx_client=httpx_with_handler(get_jpg))
     assert graph1.name == 'graph1'
-    assert img == snapshot(b'---\ntitle: graph1\n---\nstateDiagram-v2\n  [*] --> Foo\n  Foo --> Bar\n  Bar --> [*]')
+    assert img == snapshot(

Collaborator:

This looks like an unrelated change

parts=[TextPart(pydantic_core.to_json(output).decode())],
model_name=self._model_name,
id=None,
finish_reason=None,

Collaborator:

As these fields are optional, we shouldn't need to include them here (and below)

@@ -137,6 +137,8 @@ async def request(
     if not response.usage.has_values():  # pragma: no branch
         response.usage = _estimate_usage(chain(messages, [response]))
     response.usage.requests = 1
+    response.id = getattr(response, 'id', None)
+    response.finish_reason = getattr(response, 'finish_reason', None)

Collaborator:

Why are these lines necessary?
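For context on why those two lines look redundant: once id and finish_reason are declared as dataclass fields with None defaults, the attribute always exists, so getattr(response, 'id', None) just reads the current value back and the assignment is a no-op. A minimal illustration (field names taken from this PR, class stripped down):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelResponse:
    # Stripped-down stand-in for the real class: only the two new fields.
    id: Optional[str] = None
    finish_reason: Optional[str] = None


r = ModelResponse()
# The attribute already exists with a None default, so the getattr fallback
# never triggers and the assignment changes nothing.
r.id = getattr(r, 'id', None)
assert r.id is None
```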

id: str | None = None
"""Unique identifier for the model response, e.g. as returned by the model provider (OpenAI, etc)."""

finish_reason: str | None = None

Collaborator:

As in the comment below, I think this should only support the values from https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#genai-attributes. If the model returns something else, the raw value can go into vendor_details and a mapped version should go here.
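A hedged sketch of that split (the helper name and mapping table are illustrative only; the authoritative set of allowed values is on the linked semconv page and in the #1882 discussion):

```python
from typing import Dict, Optional, Tuple

# Illustrative mapping from provider-specific values to spec-style finish
# reasons; the real table should follow the semconv and the #1882 discussion.
_FINISH_REASON_MAP = {
    'stop': 'stop',
    'length': 'length',
    'max_tokens': 'length',
    'content_filter': 'content_filter',
    'tool_calls': 'tool_call',
}


def split_finish_reason(raw: Optional[str]) -> Tuple[Optional[str], Dict[str, str]]:
    """Return (mapped finish_reason, vendor_details entry keeping the raw value)."""
    if raw is None:
        return None, {}
    # Unknown raw values map to None but are still preserved in vendor_details.
    return _FINISH_REASON_MAP.get(raw), {'finish_reason': raw}
```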

vendor_id: str | None = None
"""Vendor ID as specified by the model provider. This can be used to track the specific request to the model."""
id: str | None = None
"""Unique identifier for the model response, e.g. as returned by the model provider (OpenAI, etc)."""

Collaborator:

Suggested change:
- """Unique identifier for the model response, e.g. as returned by the model provider (OpenAI, etc)."""
+ """Unique identifier for the model response as returned by the model provider."""

if self.finish_reason and self.vendor_details is None:
self.vendor_details = {}
if self.finish_reason and self.vendor_details is not None:
self.vendor_details['finish_reason'] = self.finish_reason

Collaborator:

The logic here can be simplified a bit: after the first if runs, self.vendor_details can never be None, so the is not None check in the second if is redundant.
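The simplification being suggested could look like this (a sketch with the same behaviour, written as a standalone function rather than a method for clarity):

```python
from typing import Dict, Optional


def merge_finish_reason(vendor_details: Optional[Dict[str, str]], finish_reason: Optional[str]) -> Optional[Dict[str, str]]:
    # One outer branch is enough: creating the dict when needed makes a
    # separate "is not None" check redundant.
    if finish_reason:
        if vendor_details is None:
            vendor_details = {}
        vendor_details['finish_reason'] = finish_reason
    return vendor_details
```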

if finish_reason:
vendor_details = {'finish_reason': finish_reason}
vendor_details['finish_reason'] = finish_reason

Collaborator:

As in the other PR that worked on this (#1882 (comment)), I think we should keep the raw value here and then add finish_reason with a mapped value. But note that GeminiModel is deprecated and we should do this in GoogleModel instead.

Successfully merging this pull request may close these issues.

Add id and finish_reason to ModelResponse