Instrument openai async client #14322
Conversation
```java
public static void emitPromptLogEvents(
    Context ctx,
    Logger eventLogger, ChatCompletionCreateParams request, boolean captureMessageContent) {
```
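Passing an explicit context parameter here matters for async: completion callbacks may run on a pool thread where the thread-local current context was never set. A minimal sketch of the difference, using a hypothetical `Context` record as a stand-in for the OpenTelemetry `Context` (not the actual instrumentation code):

```java
import java.util.concurrent.CompletableFuture;

public class ExplicitContextSketch {
  // Hypothetical stand-in for an OpenTelemetry Context: just a named value.
  record Context(String traceId) {}

  // Thread-local "current" context, as implicit propagation would use.
  static final ThreadLocal<Context> CURRENT =
      ThreadLocal.withInitial(() -> new Context("none"));

  // Emitting with an explicitly passed context always correlates correctly.
  static String emitWithExplicitContext(Context ctx, String body) {
    return ctx.traceId() + ": " + body;
  }

  public static void main(String[] args) throws Exception {
    Context request = new Context("abc123");
    // Both callbacks run on a pool thread where CURRENT was never set.
    String explicit = CompletableFuture
        .supplyAsync(() -> emitWithExplicitContext(request, "prompt"))
        .get();
    String implicit = CompletableFuture
        .supplyAsync(() -> CURRENT.get().traceId() + ": prompt")
        .get();
    System.out.println(explicit); // abc123: prompt
    System.out.println(implicit); // none: prompt (correlation lost)
  }
}
```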
Usually we don't use abbreviations, so most code uses `context` and `parentContext`.
Thanks - fixed it
```java
future =
    future.whenComplete(
        (res, t) -> instrumenter.end(context, chatCompletionCreateParams, res, t));
return CompletableFutureWrapper.wrap(future, parentContext);
```
I realized I had a big bug here: passing context instead of parentContext. Fixed it and updated the test.
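The fix matters because callbacks the user chains onto the wrapped future should observe the caller's (parent) context, not the context of the now-ended operation. A rough sketch of the wrapping idea, with a hypothetical ThreadLocal-backed `Context` standing in for OpenTelemetry's context; `wrap` here is a simplified illustration, not the actual `CompletableFutureWrapper`:

```java
import java.util.concurrent.CompletableFuture;

public class FutureWrapperSketch {
  // Hypothetical stand-in for an OpenTelemetry Context.
  record Context(String name) {}

  static final ThreadLocal<Context> CURRENT =
      ThreadLocal.withInitial(() -> new Context("root"));

  // Completes the returned future while CURRENT is set to the given context,
  // so the user's chained callbacks observe that context.
  static <T> CompletableFuture<T> wrap(CompletableFuture<T> future, Context context) {
    CompletableFuture<T> result = new CompletableFuture<>();
    future.whenComplete((value, error) -> {
      Context prev = CURRENT.get();
      CURRENT.set(context);
      try {
        if (error != null) {
          result.completeExceptionally(error);
        } else {
          result.complete(value);
        }
      } finally {
        CURRENT.set(prev);
      }
    });
    return result;
  }

  public static void main(String[] args) {
    Context parentContext = new Context("parent");
    CompletableFuture<String> future = new CompletableFuture<>();
    String[] seen = new String[1];
    // Chain a user callback before completion; it records which context it sees.
    wrap(future, parentContext).thenApply(r -> seen[0] = CURRENT.get().name());
    future.complete("response");
    System.out.println(seen[0]); // parent
  }
}
```

Passing the operation's own context instead would leak an already-ended context into downstream user callbacks.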
Continues the openai instrumentation, adding support for the async client. Because of how the types work out, the proxy entry points mostly need to be duplicated, but the telemetry is still largely contained in the instrumentation API usage, and streaming handling was refactored into a shareable class.
Also passes the context to logs in all cases, which is required for async and will be a bit faster for sync anyway.
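The overall async pattern (start the operation in the parent context, end it when the future completes) can be sketched as follows; the start/end helpers are simplified stand-ins for an Instrumenter-style API, not the real instrumentation classes:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncInstrumentationSketch {
  // Minimal event log standing in for the instrumentation API.
  static final StringBuilder EVENTS = new StringBuilder();

  // Hypothetical "start": derives an operation context from the parent context.
  static String start(String parentContext, String request) {
    EVENTS.append("start;");
    return parentContext + "/span";
  }

  // Hypothetical "end": records success or failure of the operation.
  static void end(String context, String request, Object response, Throwable error) {
    EVENTS.append(error == null ? "end-ok;" : "end-error;");
  }

  // The async entry point: start before the call, end when the client's
  // future completes, and hand the (instrumented) future back to the caller.
  static CompletableFuture<String> instrumentedCall(
      String parentContext, String request, CompletableFuture<String> clientFuture) {
    String context = start(parentContext, request);
    return clientFuture.whenComplete(
        (response, error) -> end(context, request, response, error));
  }

  public static void main(String[] args) throws Exception {
    CompletableFuture<String> client = new CompletableFuture<>();
    CompletableFuture<String> result = instrumentedCall("root", "chat", client);
    client.complete("completion");
    System.out.println(result.get() + " " + EVENTS); // completion start;end-ok;
  }
}
```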
/cc @codefromthecrypt