
Commit f3d0394

Reorder when the completion hook is called from vertex AI instrumentation (open-telemetry#3830)
* Reorder when the completion is called
* Add changelog
1 parent c17e428 commit f3d0394

File tree

2 files changed (+11 −11 lines)
  • instrumentation-genai/opentelemetry-instrumentation-vertexai


instrumentation-genai/opentelemetry-instrumentation-vertexai/CHANGELOG.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -9,7 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 - Update instrumentation to use the latest semantic convention changes made in https://github.com/open-telemetry/semantic-conventions/pull/2179.
   Now only a single event and span (`gen_ai.client.inference.operation.details`) are used to capture prompt and response content. These changes are opt-in,
-  users will need to set the environment variable OTEL_SEMCONV_STABILITY_OPT_IN to `gen_ai_latest_experimental` to see them ([#3799](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3799)) and ([#3709](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3709)).
+  users will need to set the environment variable OTEL_SEMCONV_STABILITY_OPT_IN to `gen_ai_latest_experimental` to see them ([#3799](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3799)) and ([#3709](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3709)). Update instrumentation to call upload hook.
 - Implement uninstrument for `opentelemetry-instrumentation-vertexai`
   ([#3328](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3328))
 - VertexAI support for async calling
```
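As the changelog entry above notes, the consolidated `gen_ai.client.inference.operation.details` event and span are opt-in. A minimal sketch of enabling them before running an instrumented application (the env var name and value are taken verbatim from the changelog; the `python -c` check is just an illustration):

```shell
# Opt in to the latest experimental GenAI semantic conventions
export OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental

# Verify the variable is visible to the instrumented process
python -c "import os; print(os.environ['OTEL_SEMCONV_STABILITY_OPT_IN'])"
```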

instrumentation-genai/opentelemetry-instrumentation-vertexai/src/opentelemetry/instrumentation/vertexai/patch.py

Lines changed: 10 additions & 10 deletions
```diff
@@ -172,6 +172,9 @@ def handle_response(
         | prediction_service_v1beta1.GenerateContentResponse
         | None,
     ) -> None:
+        event = LogRecord(
+            event_name="gen_ai.client.inference.operation.details",
+        )
         attributes = (
             get_server_attributes(instance.api_endpoint)  # type: ignore[reportUnknownMemberType]
             | request_attributes
@@ -203,6 +206,13 @@ def handle_response(
             )
             for candidate in response.candidates
         ]
+        self.completion_hook.on_completion(
+            inputs=inputs,
+            outputs=outputs,
+            system_instruction=system_instructions,
+            span=span,
+            log_record=event,
+        )
         content_attributes = {
             k: [asdict(x) for x in v]
             for k, v in [
@@ -227,23 +237,13 @@ def handle_response(
                 for k, v in content_attributes.items()
             }
         )
-        event = LogRecord(
-            event_name="gen_ai.client.inference.operation.details",
-        )
         event.attributes = attributes
         if capture_content in (
             ContentCapturingMode.SPAN_AND_EVENT,
             ContentCapturingMode.EVENT_ONLY,
         ):
             event.attributes |= content_attributes
         self.logger.emit(event)
-        self.completion_hook.on_completion(
-            inputs=inputs,
-            outputs=outputs,
-            system_instruction=system_instructions,
-            span=span,
-            log_record=event,
-        )

    yield handle_response
```
