Commit 1d78868

Add OpenAI Agents manual and zero-code examples
1 parent 44b91e8 commit 1d78868

File tree

8 files changed, +212 −0 lines changed
Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and adjust if you use a non-default OTLP collector endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-openai-agents-manual

# Optionally override the agent name reported on spans
# OTEL_GENAI_AGENT_NAME=Travel Concierge
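The `KEY=value` lines above follow the dotenv convention that `python-dotenv` reads. As a rough stdlib-only sketch of how such a file is interpreted (a simplified illustration; the real library also handles quoting, `export` prefixes, and variable interpolation):

```python
# Minimal dotenv-style parser: skip blanks and "#" comments,
# split on the first "=", and strip surrounding whitespace.
# Simplified sketch -- NOT python-dotenv's actual implementation.
def parse_dotenv(text: str) -> dict[str, str]:
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comments and blank lines carry no settings
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


sample = """
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

OTEL_SERVICE_NAME=opentelemetry-python-openai-agents-manual
"""
print(parse_dotenv(sample)["OTEL_SERVICE_NAME"])
```

Note how the commented-out `OTEL_EXPORTER_OTLP_*` lines are ignored entirely: uncommenting them is what makes them take effect.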
Lines changed: 37 additions & 0 deletions

@@ -0,0 +1,37 @@
OpenTelemetry OpenAI Agents Instrumentation Example
===================================================

This example demonstrates how to manually configure the OpenTelemetry SDK
alongside the OpenAI Agents instrumentation.

Running `main.py <main.py>`_ produces spans for the end-to-end agent run,
including tool invocations and model generations. Spans are exported through
OTLP/gRPC to the endpoint configured in the environment.

Setup
-----

1. Update the `.env <.env>`_ file with your real ``OPENAI_API_KEY``. If your
   OTLP collector is not reachable via ``http://localhost:4317``, adjust the
   endpoint variables as needed.
2. Create a virtual environment and install the dependencies:

   ::

       python3 -m venv .venv
       source .venv/bin/activate
       pip install "python-dotenv[cli]"
       pip install -r requirements.txt

Run
---

Execute the sample with ``dotenv`` so the environment variables from ``.env``
are applied:

::

    dotenv run -- python main.py

You should see the agent response printed to the console while spans export to
your configured observability backend.
Lines changed: 63 additions & 0 deletions

@@ -0,0 +1,63 @@
# pylint: skip-file
"""Manual OpenAI Agents instrumentation example."""

from __future__ import annotations

from agents import Agent, Runner, function_tool

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.instrumentation.openai_agents import (
    OpenAIAgentsInstrumentor,
)
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def configure_otel() -> None:
    """Configure the OpenTelemetry SDK for exporting spans."""

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)

    OpenAIAgentsInstrumentor().instrument(tracer_provider=provider)


@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather response for the requested city."""

    return f"The forecast for {city} is sunny with pleasant temperatures."


def run_agent() -> None:
    """Create a simple agent and execute a single run."""

    assistant = Agent(
        name="Travel Concierge",
        instructions=(
            "You are a concise travel concierge. Use the weather tool when the"
            " traveler asks about local conditions."
        ),
        tools=[get_weather],
    )

    result = Runner.run_sync(
        assistant,
        "I'm visiting Barcelona this weekend. How should I pack?",
    )

    print("Agent response:")
    print(result.final_output)


def main() -> None:
    configure_otel()
    run_agent()


if __name__ == "__main__":
    main()
Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
openai-agents~=0.3.3

opentelemetry-sdk~=1.36.0
opentelemetry-exporter-otlp-proto-grpc~=1.36.0
opentelemetry-instrumentation-openai-agents~=0.1.0.dev
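The `~=` pins above are PEP 440 compatible-release specifiers: `~=1.36.0` allows any `1.36.x` patch release but rejects `1.37.0`. A rough stdlib sketch of that rule for plain three-part versions (illustrative only; real resolvers use the `packaging` library, which also covers pre-releases like the `.dev` pin above):

```python
# Compatible-release check for simple X.Y.Z versions:
# "~=X.Y.Z" means >= X.Y.Z while keeping the X.Y prefix fixed.
# Illustrative sketch only -- NOT pip's actual PEP 440 logic.
def compatible(candidate: str, pin: str) -> bool:
    cand = tuple(int(p) for p in candidate.split("."))
    base = tuple(int(p) for p in pin.split("."))
    # Prefix match on all but the last component, plus a floor check.
    return cand[: len(base) - 1] == base[:-1] and cand >= base


print(compatible("1.36.5", "1.36.0"))  # patch bump: allowed
print(compatible("1.37.0", "1.36.0"))  # minor bump: rejected
```

This is why the example keeps `opentelemetry-sdk` and the OTLP exporter on the same `~=1.36.0` pin: both can take patch fixes without drifting to a new minor release.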
Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment and adjust if you use a non-default OTLP collector endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_EXPORTER_OTLP_PROTOCOL=grpc

OTEL_SERVICE_NAME=opentelemetry-python-openai-agents-zero-code

# Enable auto-instrumentation for logs if desired
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true

# Optionally override the agent name reported on spans
# OTEL_GENAI_AGENT_NAME=Travel Concierge
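`OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true` is read as a boolean flag from the environment. A sketch of the common way such string-valued flags are interpreted (a simplified illustration, not necessarily the distro's exact parsing):

```python
import os


# Treat a truthy environment flag the way many tools do:
# case-insensitive "true"/"1"/"yes" count as enabled.
# Simplified sketch -- individual libraries may accept a
# narrower or wider set of values.
def env_flag(name: str, default: bool = False) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "1", "yes")


os.environ["OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED"] = "true"
print(env_flag("OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED"))
```

Because environment values are strings, setting the variable to anything other than a recognized truthy value typically leaves the feature disabled.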
Lines changed: 37 additions & 0 deletions

@@ -0,0 +1,37 @@
OpenTelemetry OpenAI Agents Zero-Code Instrumentation Example
=============================================================

This example shows how to capture telemetry from OpenAI Agents without
changing your application code by using ``opentelemetry-instrument``.

When `main.py <main.py>`_ is executed, spans describing the agent workflow are
exported to the configured OTLP endpoint. The spans include details such as the
operation name, tool usage, and token consumption (when available).

Setup
-----

1. Update the `.env <.env>`_ file with your real ``OPENAI_API_KEY``. Adjust the
   OTLP endpoint settings if your collector is not reachable via
   ``http://localhost:4317``.
2. Create a virtual environment and install the dependencies:

   ::

       python3 -m venv .venv
       source .venv/bin/activate
       pip install "python-dotenv[cli]"
       pip install -r requirements.txt

Run
---

Execute the sample via ``opentelemetry-instrument`` so the OpenAI Agents
instrumentation is activated automatically:

::

    dotenv run -- opentelemetry-instrument python main.py

You should see the agent response printed to the console while spans export to
your observability backend.
Lines changed: 39 additions & 0 deletions

@@ -0,0 +1,39 @@
"""Zero-code OpenAI Agents example."""

from __future__ import annotations

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather response for the requested city."""

    return f"The forecast for {city} is sunny with pleasant temperatures."


def run_agent() -> None:
    assistant = Agent(
        name="Travel Concierge",
        instructions=(
            "You are a concise travel concierge. Use the weather tool when the"
            " traveler asks about local conditions."
        ),
        tools=[get_weather],
    )

    result = Runner.run_sync(
        assistant,
        "I'm visiting Barcelona this weekend. How should I pack?",
    )

    print("Agent response:")
    print(result.final_output)


def main() -> None:
    run_agent()


if __name__ == "__main__":
    main()
Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
openai-agents~=0.3.3

opentelemetry-sdk~=1.36.0
opentelemetry-exporter-otlp-proto-grpc~=1.36.0
opentelemetry-distro~=0.57b0
opentelemetry-instrumentation-openai-agents~=0.1.0.dev
