Added span support for genAI langchain llm invocation #3665
Conversation
Thanks for putting this together. Added some comments.
Force-pushed from 3940934 to 072b0fd
Force-pushed from fed5254 to 01caea4
def test_langchain_call(
    span_exporter, chatOpenAI_client, start_instrumentation
):
    llm = ChatOpenAI(
Would it be possible to move the provider to a fixture and provide another one for the same test? Just to make sure there's nothing OpenAI-dependent on the instrumentation side.
I have moved the provider to a fixture. But when I tried other providers like AWS Bedrock and Gemini, I see that the attributes are named differently. For OpenAI -> invocation_params.get("model_name") (please see callback_handler.py, line 57)
AWS Bedrock -> invocation_params.get("model_id")
Gemini -> invocation_params.get("model")
What does this mean? Would the ownership of handling various providers fall on instrumentation implementors, or is it reasonable to say that providers supporting LangChain (e.g. langchain_openai, langchain_google_genai) should be consistent with these attributes?
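Until that question is settled, one way to absorb the inconsistency on the instrumentation side is a fallback chain over the keys observed above. This is only a sketch; `get_request_model` is a hypothetical helper, not code from this PR, and the key names reflect the providers tested in this thread rather than any guaranteed contract:

```python
def get_request_model(invocation_params: dict):
    # Keys observed during testing: "model_name" (langchain_openai),
    # "model_id" (AWS Bedrock), "model" (Gemini). Neither the names nor
    # this fallback order are a stable contract across providers.
    for key in ("model_name", "model_id", "model"):
        value = invocation_params.get(key)
        if value:
            return value
    return None  # model name not reported under any known key
```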
def _create_span(
    self,
    run_id: UUID,
run_id is a LangChain-specific concept, right? So this code is not supposed to move to the genai utils package, right?
In genAI utils we can make run_id optional. For this PR we can keep it as is.
Force-pushed from 7c0e8bf to 8502753
Description
Taking inspiration from Traceloop's OpenLLMetry support for LangChain instrumentation, this PR adds a callback handler for LLM start and end events that generates a span with the attributes listed below.
Attributes added on span:
-> gen_ai.operation.name: Str(chat)
-> gen_ai.system: Str(ChatOpenAI)
-> gen_ai.request.model: Str(gpt-3.5-turbo)
-> gen_ai.request.top_p: Double(0.9)
-> gen_ai.request.frequency_penalty: Double(0.5)
-> gen_ai.request.presence_penalty: Double(0.5)
-> gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
-> gen_ai.request.seed: Int(100)
-> gen_ai.request.max_tokens: Int(100)
-> gen_ai.provider.name: Str(openai)
-> gen_ai.request.temperature: Double(0.1)
-> gen_ai.response.finish_reasons: Slice(["stop"])
-> gen_ai.response.model: Str(gpt-3.5-turbo-0125)
-> gen_ai.response.id: Str(chatcmpl-Bz8yrvPnydD9pObv625n2CGBPHS13)
-> gen_ai.usage.input_tokens: Int(24)
-> gen_ai.usage.output_tokens: Int(7)
Please refer to the complete span attribute output: https://docs.google.com/document/d/13SXuf4iZSl2MqWUJtmoDEqFxuHZ1Y8hykW_ctlykcDE/edit?tab=t.0
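The request attributes above can be sketched as a mapping from LangChain invocation_params to gen_ai.request.* keys. This is a minimal illustration, not the PR's actual callback handler; `build_request_attributes` is a hypothetical helper, and the "model_name" key is the OpenAI-style key discussed in the review thread:

```python
def build_request_attributes(invocation_params: dict) -> dict:
    # Assemble gen_ai.* span attributes from LangChain invocation params.
    attrs = {"gen_ai.operation.name": "chat"}
    model = invocation_params.get("model_name")  # OpenAI-style key; varies by provider
    if model is not None:
        attrs["gen_ai.request.model"] = model
    # Pass through optional sampling/limit parameters only when present.
    for param in ("temperature", "top_p", "frequency_penalty",
                  "presence_penalty", "seed", "max_tokens", "stop_sequences"):
        value = invocation_params.get(param)
        if value is not None:
            attrs[f"gen_ai.request.{param}"] = value
    return attrs
```

The resulting dict could then be applied to a span via set_attribute calls when the LLM run starts.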
This PR adds only span support, to keep it concise; metric and log support will follow in future PRs.
Fixes # (issue)
Type of change
How Has This Been Tested?
pytest -rP test_llm_call.py
Does This PR Require a Core Repo Change?
Checklist:
See contributing.md for styleguide, changelog guidelines, and more.