Commit f10b79e

fix(openai): handle empty mode values in azure openai streaming responses (#14417)
Fixed an issue where the OpenAI model tag was not being updated when the first chunk contained an empty model value but subsequent chunks contained the actual model name. MLOS-219.

## Checklist

- [x] PR author has checked that all the criteria below are met
  - The PR description includes an overview of the change
  - The PR description articulates the motivation for the change
  - The change includes tests OR the PR description describes a testing strategy
  - The PR description notes risks associated with the change, if any
  - Newly-added code is easy to change
  - The change follows the [library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html)
  - The change includes or references documentation updates if necessary
  - Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist

- [x] Reviewer has checked that all the criteria below are met
  - Title is accurate
  - All changes are related to the pull request's stated goal
  - Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
  - Testing strategy adequately addresses listed risks
  - Newly-added code is easy to change
  - Release note makes sense to a user of the library
  - If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  - Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)
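The failure mode described above can be sketched as follows. This is a hypothetical stand-in, not the ddtrace or openai SDK code: the chunk objects are plain namespaces, and `resolve_model` only mimics the idea of keeping the first non-empty model value seen across an Azure OpenAI stream whose initial chunk reports an empty model string.

```python
from types import SimpleNamespace

# Hypothetical stand-ins for streamed chunks; Azure OpenAI may send an empty
# model string in the first chunk and the real, versioned name in later ones.
chunks = [
    SimpleNamespace(model=""),                        # first chunk: empty model
    SimpleNamespace(model="gpt-4o-mini-2024-07-18"),  # later chunk: real name
]


def resolve_model(chunks):
    """Keep updating until a non-empty model value is seen in the stream."""
    model = None
    for chunk in chunks:
        if not model:  # falsy check covers both None and ""
            model = getattr(chunk, "model", "")
    return model


print(resolve_model(chunks))  # gpt-4o-mini-2024-07-18
```

With an `is None` check instead of the falsy check, the empty string from the first chunk would never be replaced, which is the bug this commit fixes.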
1 parent 04ae100 commit f10b79e

File tree

3 files changed: +6 −2 lines


ddtrace/contrib/internal/openai/utils.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -116,7 +116,7 @@ def _loop_handler(span, chunk, streamed_chunks):
     When handling a streamed chat/completion/responses,
     this function is called for each chunk in the streamed response.
     """
-    if span.get_tag("openai.response.model") is None:
+    if not span.get_tag("openai.response.model"):
         if hasattr(chunk, "type") and chunk.type.startswith("response."):
             response = getattr(chunk, "response", None)
             model = getattr(response, "model", "")
```
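The one-line change in the hunk above replaces an `is None` check with a falsy check. A minimal illustration (simplified helper functions, not the library's actual code) of why the two differ when a tag has been set to an empty string:

```python
def should_update_old(current_tag):
    # Old behavior: update only when the tag was never set at all.
    return current_tag is None


def should_update_new(current_tag):
    # New behavior: also update when the tag holds a falsy value such as "",
    # which Azure OpenAI can send in the first streamed chunk.
    return not current_tag


print(should_update_old(""))  # False: an empty tag was never replaced
print(should_update_new(""))  # True: a later chunk's real model name wins
```

Both versions behave the same for an unset tag (`None`); only the empty-string case changes.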
Lines changed: 4 additions & 0 deletions

```diff
@@ -0,0 +1,4 @@
+---
+fixes:
+  - |
+    openai: This fix resolves an issue where the model name was not captured in streamed responses that specified the model name in non-initial chunks.
```

tests/contrib/openai/test_openai_llmobs.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -508,7 +508,7 @@ def test_chat_completion_azure_streamed(
     mock_llmobs_writer.enqueue.assert_called_with(
         _expected_llmobs_llm_span_event(
             span,
-            model_name="gpt-4o-mini",
+            model_name="gpt-4o-mini-2024-07-18",
             model_provider="azure_openai",
             input_messages=input_messages,
             # note: investigate why role is empty; in the streamed chunks there is no role returned.
```
