Commit fd22316

Add histogram explicit boundaries example
1 parent 11d0a4d commit fd22316

File tree: 4 files changed, +149 -0 lines
.env (14 additions, 0 deletions):

    # Update this with your real OpenAI API key
    OPENAI_API_KEY=sk-YOUR_API_KEY

    # Uncomment to use Ollama instead of OpenAI
    # OPENAI_BASE_URL=http://localhost:11434/v1
    # OPENAI_API_KEY=unused
    # CHAT_MODEL=qwen2.5:0.5b

    # Uncomment and change to your OTLP endpoint
    # OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
    # OTEL_EXPORTER_OTLP_PROTOCOL=grpc

    OTEL_SERVICE_NAME=opentelemetry-python-openai
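The .env file above is plain "KEY=value" lines with "#" comments. As a rough illustration of that format only (this is not the python-dotenv implementation the example actually uses via ``dotenv run``), a minimal stdlib parser might look like:

```python
# Minimal sketch of parsing .env-style "KEY=value" lines.
# Illustrates the file format only; the example itself relies on the
# python-dotenv CLI ("dotenv run"), not this code.

def parse_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# comment
OPENAI_API_KEY=sk-YOUR_API_KEY
OTEL_SERVICE_NAME=opentelemetry-python-openai
"""
print(parse_env(sample))
```

In the real example, commented-out keys (such as the Ollama settings) simply stay unset until uncommented.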
README.rst (38 additions, 0 deletions):

    OpenTelemetry OpenAI Instrumentation Example
    ============================================

    This is an example of how to instrument OpenAI calls when configuring the OpenTelemetry SDK and instrumentations manually for metrics.

    When `main.py <main.py>`_ is run, it exports metrics to an OTLP-compatible endpoint. Metrics include details such as token usage and operation duration, with specific bucket boundaries for each metric.

    The bucket boundaries are defined as follows:

    - For ``gen_ai.client.token.usage``: [1, 4, 16, 64, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864]
    - For ``gen_ai.client.operation.duration``: [0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24, 20.48, 40.96, 81.92]

    These are documented in the `OpenTelemetry GenAI Metrics documentation <https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-metrics/>`_.

    Setup
    -----

    Minimally, update the `.env <.env>`_ file with your ``OPENAI_API_KEY``. An OTLP-compatible endpoint should be listening for metrics on http://localhost:4317. If not, update ``OTEL_EXPORTER_OTLP_ENDPOINT`` as well.

    Next, set up a virtual environment like this:

    ::

        python3 -m venv .venv
        source .venv/bin/activate
        pip install "python-dotenv[cli]"
        pip install -r requirements.txt

    Run
    ---

    Run the example like this:

    ::

        dotenv run -- python main.py

    You should see metrics being exported to your configured observability tool, with the specified bucket boundaries for token usage and operation duration.
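Both boundary lists in the README follow simple geometric progressions: token usage uses successive powers of 4 (4\ :sup:`0` through 4\ :sup:`13`), and operation duration doubles from 0.01 s up to 81.92 s. A short sketch generating them makes the pattern explicit:

```python
# Generate the two boundary lists from their geometric patterns:
# token usage: powers of 4, i.e. 4**0 .. 4**13
# duration:    0.01 seconds doubling, i.e. 0.01 * 2**0 .. 0.01 * 2**13
TOKEN_BOUNDARIES = [4**k for k in range(14)]
DURATION_BOUNDARIES = [0.01 * 2**k for k in range(14)]

print(TOKEN_BOUNDARIES)     # 1, 4, 16, ..., 67108864
print(DURATION_BOUNDARIES)  # 0.01, 0.02, ..., 81.92
```

Exponentially spaced buckets like these cover many orders of magnitude (a few tokens up to tens of millions; milliseconds up to minutes) with a fixed, small number of buckets.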
main.py (92 additions, 0 deletions):

    import os

    from openai import OpenAI

    from opentelemetry import metrics
    from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
        OTLPMetricExporter,
    )
    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor
    from opentelemetry.sdk.metrics import Histogram, MeterProvider
    from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
    from opentelemetry.sdk.metrics.view import (
        ExplicitBucketHistogramAggregation,
        View,
    )

    # configure metrics
    metric_exporter = OTLPMetricExporter()
    metric_reader = PeriodicExportingMetricReader(metric_exporter)

    TokenUsageHistogramView = View(
        instrument_type=Histogram,
        instrument_name="gen_ai.client.token.usage",
        aggregation=ExplicitBucketHistogramAggregation(
            boundaries=[
                1,
                4,
                16,
                64,
                256,
                1024,
                4096,
                16384,
                65536,
                262144,
                1048576,
                4194304,
                16777216,
                67108864,
            ]
        ),
    )

    DurationHistogramView = View(
        instrument_type=Histogram,
        instrument_name="gen_ai.client.operation.duration",
        aggregation=ExplicitBucketHistogramAggregation(
            boundaries=[
                0.01,
                0.02,
                0.04,
                0.08,
                0.16,
                0.32,
                0.64,
                1.28,
                2.56,
                5.12,
                10.24,
                20.48,
                40.96,
                81.92,
            ]
        ),
    )

    meter_provider = MeterProvider(
        metric_readers=[metric_reader],
        views=[TokenUsageHistogramView, DurationHistogramView],
    )
    metrics.set_meter_provider(meter_provider)

    # instrument OpenAI
    OpenAIInstrumentor().instrument(meter_provider=meter_provider)


    def main():
        client = OpenAI()
        chat_completion = client.chat.completions.create(
            model=os.getenv("CHAT_MODEL", "gpt-4o-mini"),
            messages=[
                {
                    "role": "user",
                    "content": "Write a short poem on OpenTelemetry.",
                },
            ],
        )
        print(chat_completion.choices[0].message.content)


    if __name__ == "__main__":
        main()
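With explicit boundaries, each measurement lands in the first bucket whose upper boundary is at or above the value, and anything beyond the last boundary goes to an overflow bucket. As a rough sketch of that bucketing logic (this mirrors, but is not, the SDK's ``ExplicitBucketHistogramAggregation`` internals):

```python
import bisect

# Sketch of explicit-bucket histogram bucketing: a value is counted in
# the first bucket whose upper boundary is >= the value; values above
# the last boundary fall into the overflow bucket at index len(boundaries).
TOKEN_BOUNDARIES = [1, 4, 16, 64, 256, 1024, 4096, 16384,
                    65536, 262144, 1048576, 4194304, 16777216, 67108864]


def bucket_index(value, boundaries):
    # bisect_left returns the first index i with boundaries[i] >= value,
    # which matches upper-inclusive bucket bounds (b[i-1], b[i]].
    return bisect.bisect_left(boundaries, value)


print(bucket_index(1, TOKEN_BOUNDARIES))    # 0 -> bucket (-inf, 1]
print(bucket_index(150, TOKEN_BOUNDARIES))  # 4 -> bucket (64, 256]
```

A chat completion using, say, 150 prompt tokens would therefore increment the (64, 256] bucket of ``gen_ai.client.token.usage``.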
requirements.txt (5 additions, 0 deletions):

    openai~=1.57.3

    opentelemetry-sdk~=1.29.0
    opentelemetry-exporter-otlp-proto-grpc~=1.29.0
    opentelemetry-instrumentation-openai-v2~=2.0b0
