
Commit 60a670f

wrisa and aabmass authored
Added span support for genAI langchain llm invocation (#3665)
* Added span support for llm invocation
* removed invalid code
* added entry point and fixed unwrap
* fixed check runs and updated dependencies
* fixed ruff error
* moved span generation code and added test coverage
* ruff formatting
* ruff formatting again
* removed config exception logger
* removed dontThrow
* fixed span name
* fixed ruff
* fixed typecheck
* added span exist check
* fixed typecheck
* removed start time from span state and moved error handler method to span manager
* fixed ruff
* made SpanManager class and method private
* removed deprecated gen_ai.system attribute
* Moved model to fixture and changed imports
* Fixed ruff errors and renamed method
* Added bedrock support and test
* Fixed ruff errors
* Addressed Aaron's comments
* Reverted versions and ignored typecheck errors
* removed context and added issue
* fixed versions
* skipped telemetry for other than ChatOpenAI and ChatBedrock. Added test for the same.
* Fixed telemetry skipping logic
* Fixed ruff
* added notice file
* fixed conflict
* fixed ruff and typecheck
* fixed ruff
* upgraded semcov version

---------

Co-authored-by: Aaron Abbott <[email protected]>
1 parent 6edb3f8 commit 60a670f
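
In essence, the new instrumentation hooks LangChain's callback system: a handler opens a span when a chat-model invocation starts and ends it when the run completes or errors, with live spans tracked per run id by a span manager. A minimal sketch of that pattern follows; the class name, span name, and attribute below are illustrative stand-ins, not the commit's actual code.

from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from opentelemetry import trace
from opentelemetry.trace import SpanKind, Status, StatusCode


class _SpanManagingHandler(BaseCallbackHandler):
    """Sketch: one CLIENT span per chat-model run, keyed by run_id."""

    def __init__(self):
        self._tracer = trace.get_tracer(__name__)
        self._spans: dict[UUID, trace.Span] = {}  # live spans per run

    def on_chat_model_start(self, serialized, messages, *, run_id: UUID, **kwargs):
        # The real instrumentation names the span after the request model,
        # e.g. "chat gpt-3.5-turbo", per the genAI semantic conventions.
        span = self._tracer.start_span("chat", kind=SpanKind.CLIENT)
        span.set_attribute("gen_ai.operation.name", "chat")
        self._spans[run_id] = span

    def on_llm_end(self, response, *, run_id: UUID, **kwargs):
        span = self._spans.pop(run_id, None)
        if span is not None:
            span.end()

    def on_llm_error(self, error, *, run_id: UUID, **kwargs):
        span = self._spans.pop(run_id, None)
        if span is not None:
            span.set_status(Status(StatusCode.ERROR, str(error)))
            span.record_exception(error)
            span.end()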

File tree

20 files changed: +1359, -4 lines


.github/component_owners.yml

Lines changed: 4 additions & 0 deletions

@@ -40,3 +40,7 @@ components:
   util/opentelemetry-util-genai:
     - DylanRussell
     - keith-decker
+
+  instrumentation-genai/opentelemetry-instrumentation-langchain:
+    - zhirafovod
+    - wrisa

instrumentation-genai/opentelemetry-instrumentation-langchain/CHANGELOG.md

Lines changed: 4 additions & 1 deletion

@@ -5,4 +5,7 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## Unreleased
+## Unreleased
+
+- Added span support for genAI langchain llm invocation.
+  ([#3665](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3665))
instrumentation-genai/opentelemetry-instrumentation-langchain/NOTICE (new file)

Lines changed: 3 additions & 0 deletions

This project is inspired by and portions of it are derived from Traceloop OpenLLMetry
(https://github.com/traceloop/openllmetry).
Licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0).
examples/manual/.env (new file)

Lines changed: 8 additions & 0 deletions

# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-langchain-manual
examples/manual/README.rst (new file)

Lines changed: 39 additions & 0 deletions

OpenTelemetry LangChain Instrumentation Example
===============================================

This is an example of how to instrument LangChain when configuring the OpenTelemetry SDK and instrumentations manually.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OPENAI_API_KEY`: your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT`: the endpoint for exporting traces (default is http://localhost:4317).
- :code:`OTEL_SERVICE_NAME`: the service name attached to exported traces.

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317;
if not, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.

Next, set up a virtual environment:

::

    python3 -m venv .venv
    source .venv/bin/activate
    pip install "python-dotenv[cli]"
    pip install -r requirements.txt

Run
---

Run the example:

::

    dotenv run -- python main.py

You should see the capital of France generated by LangChain ChatOpenAI while traces export to your configured observability tool.
examples/manual/main.py (new file)

Lines changed: 48 additions & 0 deletions

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.instrumentation.langchain import LangChainInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracing
trace.set_tracer_provider(TracerProvider())
span_processor = BatchSpanProcessor(OTLPSpanExporter())
trace.get_tracer_provider().add_span_processor(span_processor)


def main():
    # Set up instrumentation
    LangChainInstrumentor().instrument()

    # ChatOpenAI
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0.1,
        max_tokens=100,
        top_p=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop_sequences=["\n", "Human:", "AI:"],
        seed=100,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages)

    print("LLM output:\n", result)

    # Un-instrument after use
    LangChainInstrumentor().uninstrument()


if __name__ == "__main__":
    main()
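
For local debugging without a collector, the OTLP exporter in the setup above can be swapped for the SDK's console exporter. A small variation (not part of this commit):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to stdout instead of exporting over OTLP -- handy when no
# collector is listening on localhost:4317.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)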
examples/manual/requirements.txt (new file)

Lines changed: 7 additions & 0 deletions

langchain==0.3.21
langchain_openai
opentelemetry-sdk>=1.31.0
opentelemetry-exporter-otlp-proto-grpc>=1.31.0

# Uncomment after langchain instrumentation is released
# opentelemetry-instrumentation-langchain~=2.0b0.dev
examples/zero-code/.env (new file)

Lines changed: 8 additions & 0 deletions

# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-langchain-zero-code
examples/zero-code/README.rst (new file)

Lines changed: 40 additions & 0 deletions

OpenTelemetry LangChain Zero-Code Instrumentation Example
=========================================================

This is an example of how to instrument LangChain with zero code changes,
using :code:`opentelemetry-instrument`.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OPENAI_API_KEY`: your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT`: the endpoint for exporting traces (default is http://localhost:4317).
- :code:`OTEL_SERVICE_NAME`: the service name attached to exported traces.

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317;
if not, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.

Next, set up a virtual environment:

::

    python3 -m venv .venv
    source .venv/bin/activate
    pip install "python-dotenv[cli]"
    pip install -r requirements.txt

Run
---

Run the example:

::

    dotenv run -- opentelemetry-instrument python main.py

You should see the capital of France generated by LangChain ChatOpenAI while traces export to your configured observability tool.
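
Zero-code mode works because the commit registers LangChainInstrumentor under the opentelemetry_instrumentor entry point ("added entry point" in the commit message). A rough, simplified sketch of what opentelemetry-instrument does at startup (the real agent also wires exporters, distros, and dependency checks):

from importlib.metadata import entry_points  # Python 3.10+ keyword form

# Discover every registered instrumentor and activate it -- this is why
# main.py below needs no instrumentation code of its own.
for entry_point in entry_points(group="opentelemetry_instrumentor"):
    instrumentor = entry_point.load()()
    instrumentor.instrument()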
examples/zero-code/main.py (new file)

Lines changed: 27 additions & 0 deletions

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


def main():
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        temperature=0.1,
        max_tokens=100,
        top_p=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop_sequences=["\n", "Human:", "AI:"],
        seed=100,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages).content
    print("LLM output:\n", result)


if __name__ == "__main__":
    main()
