
Commit b27aaab

wrisazhirafovod authored and committed
First commit for langchain instrumentation
1 parent 032d6c6 commit b27aaab

File tree

16 files changed: +1470, -0 lines changed

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_LANGCHAIN_CAPTURE_MESSAGE_CONTENT=true

OTEL_SERVICE_NAME=opentelemetry-python-langchain
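For illustration only: flags like ``OTEL_INSTRUMENTATION_LANGCHAIN_CAPTURE_MESSAGE_CONTENT`` are conventionally parsed as case-insensitive booleans. A minimal sketch of that convention — the helper name and the opt-in default of ``False`` are assumptions, not the instrumentation's actual API:

```python
import os


def capture_message_content(environ=None) -> bool:
    """Hypothetical helper: parse the capture flag as a boolean.

    Assumes the conventional opt-in default of False; the real
    instrumentation's parsing rules may differ.
    """
    env = os.environ if environ is None else environ
    raw = env.get(
        "OTEL_INSTRUMENTATION_LANGCHAIN_CAPTURE_MESSAGE_CONTENT", "false"
    )
    return raw.strip().lower() == "true"
```

With the ``.env`` above loaded, such a helper would return ``True`` and message content would be recorded on spans/events.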
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
OpenTelemetry LangChain Instrumentation Example
===============================================

This is an example of how to instrument LangChain calls when configuring the
OpenTelemetry SDK and instrumentations manually.

When ``main.py`` is run, it exports traces and metrics (and optionally logs)
to an OTLP-compatible endpoint. Traces include details such as the span name
and other attributes; metrics include input and output token usage and
durations for each operation.

Environment variables:

- ``OTEL_INSTRUMENTATION_LANGCHAIN_CAPTURE_MESSAGE_CONTENT=true`` can be used
  to capture full prompt/response content.

Setup
-----

1. **Update** the ``.env`` file with any environment variables you need
   (e.g., your OpenAI key, or ``OTEL_EXPORTER_OTLP_ENDPOINT`` if not using
   the default http://localhost:4317).
2. Set up a virtual environment:

   .. code-block:: console

      python3 -m venv .venv
      source .venv/bin/activate
      pip install "python-dotenv[cli]"
      pip install -r requirements.txt

3. **(Optional)** Install a development version of the new instrumentation:

   .. code-block:: console

      # E.g., from a local path or a git repo
      pip install -e /path/to/opentelemetry-python-contrib/instrumentation-genai/opentelemetry-instrumentation-langchain

Run
---

Run the example like this:

.. code-block:: console

   dotenv run -- python main.py

You should see an example span output while traces are exported to your
configured observability tool.
Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

from opentelemetry.instrumentation.langchain import LangChainInstrumentor

from opentelemetry import _events, _logs, metrics, trace
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
    OTLPLogExporter,
)
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
    OTLPMetricExporter,
)
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.sdk._events import EventLoggerProvider
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracing
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter())
)

# Configure metrics
metric_reader = PeriodicExportingMetricReader(OTLPMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))

# Configure logging and events
_logs.set_logger_provider(LoggerProvider())
_logs.get_logger_provider().add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter())
)
_events.set_event_logger_provider(EventLoggerProvider())


def main():
    # Set up instrumentation
    LangChainInstrumentor().instrument()

    llm = ChatOpenAI(model="gpt-3.5-turbo")
    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages)
    print("LLM output:\n", result)

    # Un-instrument after use
    LangChainInstrumentor().uninstrument()


if __name__ == "__main__":
    main()
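Since ``instrument()`` and ``uninstrument()`` are paired calls, one way to guarantee the cleanup runs even when the LLM call raises is a small context manager. This is a sketch of the pattern, not part of the example above; ``DummyInstrumentor`` is a stand-in used only so the snippet is self-contained, assuming the instrumentor exposes exactly those two methods:

```python
from contextlib import contextmanager


@contextmanager
def instrumented(instrumentor):
    """Enable instrumentation on entry and always disable it on exit."""
    instrumentor.instrument()
    try:
        yield instrumentor
    finally:
        instrumentor.uninstrument()


class DummyInstrumentor:
    """Stand-in for LangChainInstrumentor, used only to show the pattern."""

    def __init__(self):
        self.active = False

    def instrument(self):
        self.active = True

    def uninstrument(self):
        self.active = False
```

In ``main()`` above, ``with instrumented(LangChainInstrumentor()):`` around the ``llm.invoke`` call would give the same instrument/uninstrument bracketing with exception safety.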
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
langchain==0.3.21  # TODO: find the lowest compatible version
langchain_openai

opentelemetry-sdk~=1.31.1
opentelemetry-exporter-otlp-proto-grpc~=1.31.1

python-dotenv[cli]

# For local development: `pip install -e /path/to/opentelemetry-instrumentation-langchain`
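The ``~=1.31.1`` specifiers pin a compatible-release range: ``1.31.1 <= v < 1.32.0``. A quick sketch of that rule — a hand-rolled comparison for illustration, not pip's actual resolver:

```python
def compatible_release(version: str, spec: str = "1.31.1") -> bool:
    """Illustrate PEP 440 `~=`: >= spec, and < spec with its second-to-last
    segment bumped (so ~=1.31.1 allows 1.31.x but not 1.32.0)."""
    v = tuple(int(p) for p in version.split("."))
    s = tuple(int(p) for p in spec.split("."))
    upper = s[:-2] + (s[-2] + 1,)  # e.g. (1, 31, 1) -> (1, 32)
    return v >= s and v[: len(upper)] < upper
```

So a released 1.31.9 patch would satisfy both pins, while 1.32.0 would not.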
Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_LANGCHAIN_CAPTURE_MESSAGE_CONTENT=true

OTEL_SERVICE_NAME=opentelemetry-python-langchain
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
OpenTelemetry LangChain Instrumentation Example
===============================================

This is an example of how to instrument LangChain calls using the
``opentelemetry-instrument`` entry point, which configures the OpenTelemetry
SDK and instrumentations automatically.

When ``main.py`` is run, it exports traces (and optionally logs) to an
OTLP-compatible endpoint. Traces include details such as the chain name,
LLM usage, token usage, and durations for each operation.

Environment variables:

- ``OTEL_INSTRUMENTATION_LANGCHAIN_CAPTURE_MESSAGE_CONTENT=true`` can be used
  to capture full prompt/response content.

Setup
-----

1. **Update** the ``.env`` file with any environment variables you need
   (e.g., your OpenAI key, or ``OTEL_EXPORTER_OTLP_ENDPOINT`` if not using
   the default http://localhost:4317).
2. Set up a virtual environment:

   .. code-block:: console

      python3 -m venv .venv
      source .venv/bin/activate
      pip install "python-dotenv[cli]"
      pip install -r requirements.txt

3. **(Optional)** Install a development version of the new instrumentation:

   .. code-block:: console

      # E.g., from a local path or a git repo
      pip install -e /path/to/opentelemetry-python-contrib/instrumentation-genai/opentelemetry-instrumentation-langchain

Run
---

Run the example like this:

.. code-block:: console

   dotenv run -- opentelemetry-instrument python main.py

You should see an example chain output while traces are exported to your
configured observability tool.
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


def main():
    llm = ChatOpenAI(model="gpt-3.5-turbo")

    messages = [
        SystemMessage(content="You are a helpful assistant!"),
        HumanMessage(content="What is the capital of France?"),
    ]

    result = llm.invoke(messages).content
    print("LLM output:\n", result)


if __name__ == "__main__":
    main()
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
langchain==0.3.21  # TODO: find the lowest compatible version
langchain_openai

opentelemetry-sdk~=1.31.1
opentelemetry-exporter-otlp-proto-grpc~=1.31.1

python-dotenv[cli]

# For local development: `pip install -e /path/to/opentelemetry-instrumentation-langchain`
