feat: Add id and finish_reason to ModelResponse #2325
base: main
Conversation
force-pushed from bacbc82 to 28051fe
finish_reason: str | None = None
"""The reason the model finished generating this response, e.g. 'stop', 'length', etc."""

def otel_events(self, settings: InstrumentationSettings) -> list[Event]:
The issue mentions:
These fields would be used to populate gen_ai.response.id and gen_ai.response.finish_reasons in opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans#genai-attributes
Can you please handle that here as well so this PR can close that issue?
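A minimal sketch of how that could look (not the actual pydantic-ai implementation; the helper name response_otel_attributes is an assumption for illustration, while the attribute names come from the GenAI semantic conventions):

```python
def response_otel_attributes(response_id: str | None, finish_reason: str | None) -> dict[str, object]:
    """Illustrative helper: build the GenAI semconv attributes for a model response."""
    attributes: dict[str, object] = {}
    if response_id is not None:
        attributes['gen_ai.response.id'] = response_id
    if finish_reason is not None:
        # The semconv attribute is an array of finish reasons, one per choice.
        attributes['gen_ai.response.finish_reasons'] = [finish_reason]
    return attributes
```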
Note that gen_ai.response.finish_reasons has specific allowed values: #1882 (comment). I also left some other related suggestions on that older PR that tried to add finish_reason -- can you please check those out as well?
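A hedged sketch of such a mapping (the target values follow the semconv suggestion in the linked comment; the dict contents and the map_finish_reason name are assumptions, not the project's API):

```python
# Hypothetical normalization of provider-specific finish reasons to the
# narrower set suggested by the OTel GenAI semantic conventions.
_FINISH_REASON_MAP: dict[str, str] = {
    'stop': 'stop',
    'end_turn': 'stop',
    'max_tokens': 'length',
    'length': 'length',
    'content_filter': 'content_filter',
    'tool_calls': 'tool_call',
}


def map_finish_reason(raw: str) -> str | None:
    """Return the normalized finish reason, or None if the raw value is unknown."""
    return _FINISH_REASON_MAP.get(raw)
```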
This PR is stale, and will be closed in 3 days if no reply is received.
force-pushed from 40d9442 to c22a9f4
@@ -773,7 +773,30 @@ def handler(_: httpx.Request):
     with pytest.raises(ModelHTTPError) as exc_info:
         await agent.run('Hello')

-    assert str(exc_info.value) == snapshot('status_code: 401, model_name: gemini-1.5-flash, body: invalid request')
+    assert str(exc_info.value) == snapshot("""\
+status_code: 400, model_name: gemini-1.5-flash, body: {
This change is incorrect; we should still be getting the mocked 401 response, shouldn't we?
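For context, a minimal sketch of the kind of mocked handler the test relies on (the body is illustrative, not the actual fixture); since the mock answers with a 401, the snapshot should keep asserting on a 401:

```python
import httpx


def handler(_: httpx.Request) -> httpx.Response:
    # The provider is mocked to reject the request with a 401.
    return httpx.Response(401, content='invalid request')
```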
@@ -276,7 +276,9 @@ def get_jpg(request: httpx.Request) -> httpx.Response:
     graph1.name = None
     img = graph1.mermaid_image(start_node=Foo(), httpx_client=httpx_with_handler(get_jpg))
     assert graph1.name == 'graph1'
-    assert img == snapshot(b'---\ntitle: graph1\n---\nstateDiagram-v2\n [*] --> Foo\n Foo --> Bar\n Bar --> [*]')
+    assert img == snapshot(
This looks like an unrelated change
parts=[TextPart(pydantic_core.to_json(output).decode())],
model_name=self._model_name,
id=None,
finish_reason=None,
As these fields are optional, we shouldn't need to include them here (and below)
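A small sketch of what omitting them would look like, assuming this PR's new fields keep their None defaults (the literal values below are placeholders):

```python
from pydantic_ai.messages import ModelResponse, TextPart

# With None defaults on the new fields, they can simply be left out:
response = ModelResponse(
    parts=[TextPart('example output')],
    model_name='example-model',
)
# response.id and response.finish_reason would be None without being passed explicitly.
```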
@@ -137,6 +137,8 @@ async def request(
     if not response.usage.has_values():  # pragma: no branch
         response.usage = _estimate_usage(chain(messages, [response]))
     response.usage.requests = 1
+    response.id = getattr(response, 'id', None)
+    response.finish_reason = getattr(response, 'finish_reason', None)
Why are these lines necessary?
id: str | None = None
"""Unique identifier for the model response, e.g. as returned by the model provider (OpenAI, etc)."""

finish_reason: str | None = None
As in the comment below, I think this should only support the values from https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#genai-attributes. If the model returns something else, the raw value can go into vendor_details and a mapped version should go here.
vendor_id: str | None = None
"""Vendor ID as specified by the model provider. This can be used to track the specific request to the model."""
id: str | None = None
"""Unique identifier for the model response, e.g. as returned by the model provider (OpenAI, etc)."""
"""Unique identifier for the model response, e.g. as returned by the model provider (OpenAI, etc).""" | |
"""Unique identifier for the model response as returned by the model provider.""" |
if self.finish_reason and self.vendor_details is None:
    self.vendor_details = {}
if self.finish_reason and self.vendor_details is not None:
    self.vendor_details['finish_reason'] = self.finish_reason
The logic here can be simplified a bit: the first if already guarantees that self.vendor_details is not None, so that check in the second if is redundant.
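A sketch of the simplification being suggested (one possible shape, not necessarily the final code):

```python
# One branch: create the dict only when needed, then record the finish reason.
if self.finish_reason:
    if self.vendor_details is None:
        self.vendor_details = {}
    self.vendor_details['finish_reason'] = self.finish_reason
```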
 if finish_reason:
-    vendor_details = {'finish_reason': finish_reason}
+    vendor_details['finish_reason'] = finish_reason
As in the other PR that worked on this (#1882 (comment)), I think we should keep the raw value here and then add finish_reason with a mapped value. But note that GeminiModel is deprecated and we should do this in GoogleModel instead.
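A hedged sketch of that split (names are illustrative; map_finish_reason stands for a hypothetical normalization helper like the one sketched earlier, and per the comment this would go in GoogleModel rather than the deprecated GeminiModel):

```python
# Keep the provider's raw value in vendor_details and derive a normalized
# value for the response's finish_reason field.
if finish_reason:
    vendor_details = vendor_details or {}
    vendor_details['finish_reason'] = finish_reason  # raw provider value
    mapped_finish_reason = map_finish_reason(finish_reason)  # normalized value
```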
Feature:
Fixes #886