Merged

Changes from all commits
36 commits
6f5de87
Added span support for llm invocation
wrisa Jul 30, 2025
ffa629e
removed invalid code
wrisa Jul 31, 2025
4bc4c58
added entry point and fixed unwrap
wrisa Jul 31, 2025
58e4c52
fixed check runs and updated dependencies
wrisa Aug 1, 2025
92e7c13
fixed ruff error
wrisa Aug 1, 2025
2196e28
moved span generation code and added test coverage
wrisa Aug 4, 2025
9a0fbcb
ruff formatting
wrisa Aug 4, 2025
4d0955a
ruff formatting again
wrisa Aug 4, 2025
0324d26
removed config exception logger
wrisa Aug 5, 2025
a3b38df
removed dontThrow
wrisa Aug 5, 2025
ed85fc7
fixed span name
wrisa Aug 5, 2025
db3f045
fixed ruff
wrisa Aug 6, 2025
ce29530
fixed typecheck
wrisa Aug 6, 2025
0fc63d2
added span exist check
wrisa Aug 7, 2025
34cc5b4
fixed typecheck
wrisa Aug 7, 2025
e99bd66
removed start time from span state and moved error handler method to …
wrisa Aug 8, 2025
694cc8a
fixed ruff
wrisa Aug 8, 2025
a67d023
made SpanManager class and method private
wrisa Aug 8, 2025
eac3e0d
removed deprecated gen_ai.system attribute
wrisa Aug 12, 2025
f9edd23
Moved model to fixture and changed imports
wrisa Aug 28, 2025
bb919ae
Fixed ruff errors and renamed method
wrisa Aug 28, 2025
bdfb1a9
Added bedrock support and test
wrisa Sep 4, 2025
fe27d2b
Fixed ruff errors
wrisa Sep 4, 2025
eae339f
Addressed Aaron's comments
wrisa Sep 4, 2025
3bbb8e3
Reverted versions and ignored typecheck errors
wrisa Sep 4, 2025
422bf2f
removed context and added issue
wrisa Sep 5, 2025
81a8e76
fixed versions
wrisa Sep 8, 2025
2c545ce
skipped telemetry for other than ChatOpenAI and ChatBedrock. Added te…
wrisa Sep 9, 2025
87575aa
Fixed telemetry skipping logic
wrisa Sep 9, 2025
7e7d19c
Fixed ruff
wrisa Sep 9, 2025
83a7098
added notice file
wrisa Sep 16, 2025
84bc754
fixed conflict
wrisa Sep 17, 2025
dd2ba36
fixed ruff and typecheck
wrisa Sep 17, 2025
84aa793
fixed ruff
wrisa Sep 18, 2025
d5771dd
upgraded semcov version
wrisa Sep 19, 2025
72b6669
Merge branch 'main' into genai-instrumentation-langchain-spans
aabmass Sep 19, 2025
4 changes: 4 additions & 0 deletions .github/component_owners.yml
@@ -40,3 +40,7 @@ components:
  util/opentelemetry-util-genai:
    - DylanRussell
    - keith-decker

  instrumentation-genai/opentelemetry-instrumentation-langchain:
    - zhirafovod
    - wrisa
instrumentation-genai/opentelemetry-instrumentation-langchain/CHANGELOG.md
@@ -5,4 +5,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## Unreleased

- Added span support for genAI langchain llm invocation.
  ([#3665](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3665))
instrumentation-genai/opentelemetry-instrumentation-langchain/NOTICE
@@ -0,0 +1,3 @@
This project is inspired by and portions of it are derived from Traceloop OpenLLMetry
(https://github.com/traceloop/openllmetry).
Licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0).
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/manual/.env
@@ -0,0 +1,8 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-langchain-manual
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/manual/README.rst
@@ -0,0 +1,39 @@
OpenTelemetry Langchain Instrumentation Example
===============================================

This is an example of how to instrument Langchain when configuring the
OpenTelemetry SDK and instrumentation manually.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OPENAI_API_KEY`: your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT`: the endpoint for exporting traces (default is http://localhost:4317).
- :code:`OTEL_SERVICE_NAME`: the service name attached to exported traces.

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317.
If yours is elsewhere, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.

Next, set up a virtual environment like this:

::

python3 -m venv .venv
source .venv/bin/activate
pip install "python-dotenv[cli]"
pip install -r requirements.txt

Run
---

Run the example like this:

::

dotenv run -- python main.py

You should see the capital of France generated by Langchain ChatOpenAI while traces are exported to your configured observability tool.
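
If you don't have an OTLP endpoint handy, a minimal variation (an illustrative sketch, not part of this example) is to swap the OTLP exporter in `main.py <main.py>`_ for the SDK's console exporter, so finished spans print to stdout:

::

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        ConsoleSpanExporter,
        SimpleSpanProcessor,
    )

    # Print finished spans to stdout instead of exporting over OTLP.
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        SimpleSpanProcessor(ConsoleSpanExporter())
    )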
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/manual/main.py
@@ -0,0 +1,48 @@
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.instrumentation.langchain import LangChainInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracing
trace.set_tracer_provider(TracerProvider())
span_processor = BatchSpanProcessor(OTLPSpanExporter())
trace.get_tracer_provider().add_span_processor(span_processor)


def main():
    # Set up instrumentation
    LangChainInstrumentor().instrument()

    # ChatOpenAI
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0.1,
        max_tokens=100,
        top_p=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop_sequences=["\n", "Human:", "AI:"],
        seed=100,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages)

    print("LLM output:\n", result)

    # Un-instrument after use
    LangChainInstrumentor().uninstrument()


if __name__ == "__main__":
    main()
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/manual/requirements.txt
@@ -0,0 +1,7 @@
langchain==0.3.21
langchain_openai
opentelemetry-sdk>=1.31.0
opentelemetry-exporter-otlp-proto-grpc>=1.31.0

# Uncomment after the langchain instrumentation is released
# opentelemetry-instrumentation-langchain~=2.0b0.dev
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/zero-code/.env
@@ -0,0 +1,8 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-langchain-zero-code
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/zero-code/README.rst
@@ -0,0 +1,40 @@
OpenTelemetry Langchain Zero-Code Instrumentation Example
=========================================================

This is an example of how to instrument Langchain with zero code changes,
using :code:`opentelemetry-instrument`.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OPENAI_API_KEY`: your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT`: the endpoint for exporting traces (default is http://localhost:4317).
- :code:`OTEL_SERVICE_NAME`: the service name attached to exported traces.

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317.
If yours is elsewhere, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.

Next, set up a virtual environment like this:

::

python3 -m venv .venv
source .venv/bin/activate
pip install "python-dotenv[cli]"
pip install -r requirements.txt

Run
---

Run the example like this:

::

dotenv run -- opentelemetry-instrument python main.py

You should see the capital of France generated by Langchain ChatOpenAI while traces are exported to your configured observability tool.
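
If you don't have an OTLP endpoint handy, the standard SDK environment variables offer a zero-code alternative (an illustrative sketch, not part of this example): tell :code:`opentelemetry-instrument` to print spans to stdout with the console exporter:

::

    OTEL_TRACES_EXPORTER=console dotenv run -- opentelemetry-instrument python main.py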
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/zero-code/main.py
@@ -0,0 +1,27 @@
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


def main():
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0.1,
        max_tokens=100,
        top_p=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop_sequences=["\n", "Human:", "AI:"],
        seed=100,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages).content
    print("LLM output:\n", result)


if __name__ == "__main__":
    main()
instrumentation-genai/opentelemetry-instrumentation-langchain/examples/zero-code/requirements.txt
@@ -0,0 +1,8 @@
langchain==0.3.21
langchain_openai
opentelemetry-sdk>=1.31.0
opentelemetry-exporter-otlp-proto-grpc>=1.31.0
opentelemetry-distro~=0.51b0

# Uncomment after the langchain instrumentation is released
# opentelemetry-instrumentation-langchain~=2.0b0.dev
instrumentation-genai/opentelemetry-instrumentation-langchain/pyproject.toml
@@ -25,16 +25,19 @@ classifiers = [
    "Programming Language :: Python :: 3.13",
]
dependencies = [
-    "opentelemetry-api ~= 1.30",
-    "opentelemetry-instrumentation ~= 0.51b0",
-    "opentelemetry-semantic-conventions ~= 0.51b0"
+    "opentelemetry-api >= 1.31.0",
+    "opentelemetry-instrumentation ~= 0.57b0",
+    "opentelemetry-semantic-conventions ~= 0.57b0"
]

[project.optional-dependencies]
instruments = [
"langchain >= 0.3.21",
]

[project.entry-points.opentelemetry_instrumentor]
langchain = "opentelemetry.instrumentation.langchain:LangChainInstrumentor"

[project.urls]
Homepage = "https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation-genai/opentelemetry-instrumentation-langchain"
Repository = "https://github.com/open-telemetry/opentelemetry-python-contrib"
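
The opentelemetry_instrumentor entry point added above is what lets opentelemetry-instrument discover and enable this instrumentor with zero code changes. A rough sketch of the discovery mechanism (illustrative only, based on the standard entry-point group name; not the distro's actual code):

    from importlib.metadata import entry_points

    # Illustrative: find the instrumentor registered under the
    # "opentelemetry_instrumentor" entry point group and enable it
    # (entry_points(group=...) is the Python 3.10+ signature).
    for entry_point in entry_points(group="opentelemetry_instrumentor"):
        if entry_point.name == "langchain":
            instrumentor_class = entry_point.load()  # -> LangChainInstrumentor
            instrumentor_class().instrument()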
instrumentation-genai/opentelemetry-instrumentation-langchain/src/opentelemetry/instrumentation/langchain/__init__.py
@@ -0,0 +1,122 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Langchain instrumentation supporting `ChatOpenAI` and `ChatBedrock`, it can be enabled by
using ``LangChainInstrumentor``. Other providers/LLMs may be supported in the future and telemetry for them is skipped for now.

Usage
-----
.. code:: python
from opentelemetry.instrumentation.langchain import LangChainInstrumentor
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

LangChainInstrumentor().instrument()
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=1000)
messages = [
SystemMessage(content="You are a helpful assistant!"),
HumanMessage(content="What is the capital of France?"),
]
result = llm.invoke(messages)
LangChainInstrumentor().uninstrument()

API
---
"""

from typing import Any, Callable, Collection

from langchain_core.callbacks import BaseCallbackHandler # type: ignore
from wrapt import wrap_function_wrapper # type: ignore

from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from opentelemetry.instrumentation.langchain.callback_handler import (
    OpenTelemetryLangChainCallbackHandler,
)
from opentelemetry.instrumentation.langchain.package import _instruments
from opentelemetry.instrumentation.langchain.version import __version__
from opentelemetry.instrumentation.utils import unwrap
from opentelemetry.semconv.schemas import Schemas
from opentelemetry.trace import get_tracer


class LangChainInstrumentor(BaseInstrumentor):
    """
    OpenTelemetry instrumentor for LangChain.
    This adds a custom callback handler to the LangChain callback manager
    to capture LLM telemetry.
    """

    def __init__(self):
        super().__init__()

    def instrumentation_dependencies(self) -> Collection[str]:
        return _instruments

    def _instrument(self, **kwargs: Any):
        """
        Enable Langchain instrumentation.
        """
        tracer_provider = kwargs.get("tracer_provider")
        tracer = get_tracer(
            __name__,
            __version__,
            tracer_provider,
            schema_url=Schemas.V1_37_0.value,
        )

        otel_callback_handler = OpenTelemetryLangChainCallbackHandler(
            tracer=tracer,
        )

        wrap_function_wrapper(
            module="langchain_core.callbacks",
            name="BaseCallbackManager.__init__",
            wrapper=_BaseCallbackManagerInitWrapper(otel_callback_handler),
        )

    def _uninstrument(self, **kwargs: Any):
        """
        Cleanup instrumentation (unwrap).
        """
        unwrap("langchain_core.callbacks.base.BaseCallbackManager", "__init__")


class _BaseCallbackManagerInitWrapper:
    """
    Wrap the BaseCallbackManager __init__ to insert the custom callback
    handler into the manager's handlers list.
    """

    def __init__(
        self, callback_handler: OpenTelemetryLangChainCallbackHandler
    ):
        self._otel_handler = callback_handler

    def __call__(
        self,
        wrapped: Callable[..., None],
        instance: BaseCallbackHandler,  # type: ignore
        args: tuple[Any, ...],
        kwargs: dict[str, Any],
    ):
        wrapped(*args, **kwargs)
        # Ensure our OTel callback is present if not already.
        for handler in instance.inheritable_handlers:  # type: ignore
            if isinstance(handler, type(self._otel_handler)):
                break
        else:
            instance.add_handler(self._otel_handler, inherit=True)  # type: ignore
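
A quick way to see the wrapper in action (a hedged sketch assuming langchain-core is installed; not part of this PR): after instrument(), any newly constructed callback manager should carry the OTel handler among its inheritable handlers.

    from langchain_core.callbacks import CallbackManager

    from opentelemetry.instrumentation.langchain import LangChainInstrumentor
    from opentelemetry.instrumentation.langchain.callback_handler import (
        OpenTelemetryLangChainCallbackHandler,
    )

    LangChainInstrumentor().instrument()
    manager = CallbackManager(handlers=[])  # wrapped __init__ adds the handler
    assert any(
        isinstance(handler, OpenTelemetryLangChainCallbackHandler)
        for handler in manager.inheritable_handlers
    )
    LangChainInstrumentor().uninstrument()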