Added span support for genAI langchain llm invocation #3665

Open · wants to merge 19 commits into main

CHANGELOG.md
@@ -5,4 +5,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## Unreleased

- Added span support for genAI langchain llm invocation.
([#3665](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3665))

.env (manual example)
@@ -0,0 +1,8 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-langchain-manual

README.rst (manual example)
@@ -0,0 +1,39 @@
OpenTelemetry LangChain Instrumentation Example
===============================================

This is an example of how to instrument LangChain when configuring the OpenTelemetry SDK and instrumentation manually.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OPENAI_API_KEY`: your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT`: the endpoint for exporting traces (default is http://localhost:4317).
- :code:`OTEL_SERVICE_NAME`: the service name attached to exported traces.

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317.
If not, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.

Next, set up a virtual environment like this:

::

    python3 -m venv .venv
    source .venv/bin/activate
    pip install "python-dotenv[cli]"
    pip install -r requirements.txt

Run
---

Run the example like this:

::

    dotenv run -- python main.py

You should see the capital of France generated by LangChain's ChatOpenAI while traces are exported to your configured observability tool.
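
If you don't have an OTLP endpoint listening, one quick way to see spans is to
swap the OTLP exporter in :code:`main.py` for the SDK's console exporter
(a sketch on our part, not one of this example's files):

::

    from opentelemetry.sdk.trace.export import (
        BatchSpanProcessor,
        ConsoleSpanExporter,
    )

    # Print finished spans to stdout instead of exporting over OTLP.
    span_processor = BatchSpanProcessor(ConsoleSpanExporter())
    trace.get_tracer_provider().add_span_processor(span_processor)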

main.py (manual example)
@@ -0,0 +1,48 @@
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.instrumentation.langchain import LangChainInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracing
trace.set_tracer_provider(TracerProvider())
span_processor = BatchSpanProcessor(OTLPSpanExporter())
trace.get_tracer_provider().add_span_processor(span_processor)
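# Note: instead of relying on the global provider set above, the provider can
# also be passed explicitly to the instrumentor (read from the tracer_provider
# kwarg in _instrument), e.g.:
#   LangChainInstrumentor().instrument(tracer_provider=trace.get_tracer_provider())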


def main():
    # Set up instrumentation
    LangChainInstrumentor().instrument()

    # ChatOpenAI
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0.1,
        max_tokens=100,
        top_p=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop_sequences=["\n", "Human:", "AI:"],
        seed=100,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages)

    print("LLM output:\n", result)

    # Un-instrument after use
    LangChainInstrumentor().uninstrument()


if __name__ == "__main__":
    main()
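
A hypothetical way to inspect the emitted spans in-process, using the SDK's
in-memory exporter instead of OTLP (sketch only, not part of this PR):

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
    InMemorySpanExporter,
)

# Collect finished spans in memory so they can be asserted on or printed.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))

# Pass the provider to the instrumentor, run an invocation, then inspect:
#   LangChainInstrumentor().instrument(tracer_provider=provider)
#   ... llm.invoke(...) ...
for span in exporter.get_finished_spans():
    print(span.name, dict(span.attributes or {}))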

requirements.txt (manual example)
@@ -0,0 +1,7 @@
langchain==0.3.21
langchain_openai
opentelemetry-sdk~=1.36.0
opentelemetry-exporter-otlp-proto-grpc~=1.36.0

# Uncomment once the langchain instrumentation is released
# opentelemetry-instrumentation-langchain~=2.0b0.dev
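
# Until then, an editable install from a checkout of the contrib repo is one
# option (path per the project layout; adjust to your checkout):
#   pip install -e ./instrumentation-genai/opentelemetry-instrumentation-langchain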

.env (zero-code example)
@@ -0,0 +1,8 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-langchain-zero-code

README.rst (zero-code example)
@@ -0,0 +1,40 @@
OpenTelemetry LangChain Zero-Code Instrumentation Example
=========================================================

This is an example of how to instrument LangChain with zero code changes,
using :code:`opentelemetry-instrument`.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OPENAI_API_KEY`: your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT`: the endpoint for exporting traces (default is http://localhost:4317).
- :code:`OTEL_SERVICE_NAME`: the service name attached to exported traces.

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317.
If not, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.

Next, set up a virtual environment like this:

::

    python3 -m venv .venv
    source .venv/bin/activate
    pip install "python-dotenv[cli]"
    pip install -r requirements.txt

Run
---

Run the example like this:

::

    dotenv run -- opentelemetry-instrument python main.py

You should see the capital of France generated by LangChain's ChatOpenAI while traces are exported to your configured observability tool.
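
Because configuration is entirely external here, you can also route traces to
stdout for a quick check via the standard exporter-selection variable (an
alternative we suggest, not one of this example's files):

::

    OTEL_TRACES_EXPORTER=console dotenv run -- opentelemetry-instrument python main.py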

main.py (zero-code example)
@@ -0,0 +1,27 @@
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


def main():
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0.1,
        max_tokens=100,
        top_p=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop_sequences=["\n", "Human:", "AI:"],
        seed=100,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages).content
    print("LLM output:\n", result)


if __name__ == "__main__":
    main()

requirements.txt (zero-code example)
@@ -0,0 +1,8 @@
langchain==0.3.21
langchain_openai
opentelemetry-sdk~=1.36.0
opentelemetry-exporter-otlp-proto-grpc~=1.36.0
opentelemetry-distro~=0.57b0

# Uncomment once the langchain instrumentation is released
# opentelemetry-instrumentation-langchain~=2.0b0.dev

pyproject.toml
@@ -25,16 +25,19 @@ classifiers = [
"Programming Language :: Python :: 3.13",
]
dependencies = [
"opentelemetry-api ~= 1.30",
"opentelemetry-instrumentation ~= 0.51b0",
"opentelemetry-semantic-conventions ~= 0.51b0"
"opentelemetry-api >= 1.36.0",
"opentelemetry-instrumentation >= 0.57b0",
"opentelemetry-semantic-conventions >= 0.57b0"
]

[project.optional-dependencies]
instruments = [
"langchain >= 0.3.21",
]

[project.entry-points.opentelemetry_instrumentor]
langchain = "opentelemetry.instrumentation.langchain:LangChainInstrumentor"

[project.urls]
Homepage = "https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation-genai/opentelemetry-instrumentation-langchain"
Repository = "https://github.com/open-telemetry/opentelemetry-python-contrib"
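
The opentelemetry_instrumentor entry point added above is what lets zero-code
tooling such as opentelemetry-instrument discover this instrumentor. A minimal
sketch of that discovery using only the standard library (Python 3.10+
selection API; this is not the distro's actual loader code):

from importlib.metadata import entry_points

# Find instrumentors registered under the OpenTelemetry entry-point group
# and activate the LangChain one.
for ep in entry_points(group="opentelemetry_instrumentor"):
    if ep.name == "langchain":
        instrumentor_cls = ep.load()  # resolves to LangChainInstrumentor
        instrumentor_cls().instrument()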

opentelemetry/instrumentation/langchain/__init__.py
@@ -0,0 +1,121 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Langchain instrumentation supporting `ChatOpenAI`, it can be enabled by
using ``LangChainInstrumentor``.

Usage
-----
.. code:: python
from opentelemetry.instrumentation.langchain import LangChainInstrumentor
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

LangChainInstrumentor().instrument()
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=1000)
messages = [
SystemMessage(content="You are a helpful assistant!"),
HumanMessage(content="What is the capital of France?"),
]
result = llm.invoke(messages)
LangChainInstrumentor().uninstrument()

API
---
"""

from typing import Any, Callable, Collection, Optional

from wrapt import wrap_function_wrapper # type: ignore

from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from opentelemetry.instrumentation.langchain.callback_handler import (
OpenTelemetryLangChainCallbackHandler,
)
from opentelemetry.instrumentation.langchain.package import _instruments
from opentelemetry.instrumentation.langchain.version import __version__
from opentelemetry.instrumentation.utils import unwrap
from opentelemetry.semconv.schemas import Schemas
from opentelemetry.trace import get_tracer


class LangChainInstrumentor(BaseInstrumentor):
    """
    OpenTelemetry instrumentor for LangChain.
    This adds a custom callback handler to the LangChain callback manager
    to capture LLM telemetry.
    """

    def __init__(
        self, exception_logger: Optional[Callable[[Exception], Any]] = None
    ):
        # Note: exception_logger is accepted but not currently used here.
        super().__init__()

    def instrumentation_dependencies(self) -> Collection[str]:
        return _instruments

    def _instrument(self, **kwargs: Any):
        """
        Enable LangChain instrumentation.
        """
        tracer_provider = kwargs.get("tracer_provider")
        tracer = get_tracer(
            __name__,
            __version__,
            tracer_provider,
            schema_url=Schemas.V1_28_0.value,
        )

        otel_callback_handler = OpenTelemetryLangChainCallbackHandler(
            tracer=tracer,
        )

        wrap_function_wrapper(
            module="langchain_core.callbacks",
            name="BaseCallbackManager.__init__",
            wrapper=_BaseCallbackManagerInitWrapper(otel_callback_handler),
        )

    def _uninstrument(self, **kwargs: Any):
        """
        Clean up instrumentation (unwrap).
        """
        unwrap("langchain_core.callbacks.base.BaseCallbackManager", "__init__")


class _BaseCallbackManagerInitWrapper:
    """
    Wrap BaseCallbackManager.__init__ to insert the custom callback handler
    into the manager's handlers list.
    """

    def __init__(
        self, callback_handler: OpenTelemetryLangChainCallbackHandler
    ):
        self._otel_handler = callback_handler

    def __call__(
        self,
        wrapped: Callable[..., None],
        instance: Any,
        args: tuple[Any, ...],
        kwargs: dict[str, Any],
    ):
        wrapped(*args, **kwargs)
        # Ensure our OTel callback is present if not already.
        for handler in instance.inheritable_handlers:
            if isinstance(handler, type(self._otel_handler)):
                break
        else:
            instance.add_handler(self._otel_handler, inherit=True)
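
A hypothetical check (not part of this PR) illustrating the wrapper's effect:
after instrument(), a newly constructed callback manager carries exactly one
OTel handler among its inheritable handlers, since the wrapped __init__ skips
duplicates.

from langchain_core.callbacks import BaseCallbackManager

from opentelemetry.instrumentation.langchain import LangChainInstrumentor
from opentelemetry.instrumentation.langchain.callback_handler import (
    OpenTelemetryLangChainCallbackHandler,
)

LangChainInstrumentor().instrument()
manager = BaseCallbackManager(handlers=[])
otel_handlers = [
    h
    for h in manager.inheritable_handlers
    if isinstance(h, OpenTelemetryLangChainCallbackHandler)
]
assert len(otel_handlers) == 1  # added once by the wrapped __init__
LangChainInstrumentor().uninstrument()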