@@ -22,32 +22,49 @@ pip install elastic-opentelemetry-instrumentation-openai

This instrumentation supports *zero-code* / *autoinstrumentation*:

+ Set up a virtual environment with this package, the dependencies it requires,
+ and `dotenv` (a portable way to load environment variables).
+ ```
+ python3 -m venv .venv
+ source .venv/bin/activate
+ pip install -r test-requirements.txt
+ pip install python-dotenv[cli]
+ ```
+
+ Create a `.env` file containing the OpenAI API key:
+
+ ```
+ echo "OPENAI_API_KEY=sk-..." > .env
```
- opentelemetry-instrument python use_openai.py

- # You can record more information about prompts as log events by enabling content capture.
- OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true opentelemetry-instrument python use_openai.py
+ Run the script with telemetry set up to use the instrumentation.
+
+ ```
+ dotenv run -- opentelemetry-instrument python examples/chat.py
```
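
The `opentelemetry-instrument` wrapper is configured through standard `OTEL_*` environment variables. As a sketch (hypothetical values; any OTLP-capable collector works), the same `.env` file can also carry the exporter settings:

```
# where to send traces, metrics and logs (assuming an OTLP collector on localhost)
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# how this service is labeled in the backend
OTEL_SERVICE_NAME=openai-example
```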

- Or manual instrumentation:
+ You can record more information about prompts as log events by enabling content capture.
+ ```
+ OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true dotenv run -- \
+ opentelemetry-instrument python examples/chat.py
+ ```

- ```python
- import openai
- from opentelemetry.instrumentation.openai import OpenAIInstrumentor
+ ### Using a local model

- OpenAIInstrumentor().instrument()
+ [Ollama](https://ollama.com/) may be used to run the examples without a cloud account. After you have
+ set it up, install the models needed to run the examples:

- # assumes at least the OPENAI_API_KEY environment variable set
- client = openai.Client()
+ ```
+ # for chat
+ ollama pull qwen2.5:0.5b
+ # for embeddings
+ ollama pull all-minilm:33m
+ ```

- messages = [
-     {
-         "role": "user",
-         "content": "Answer in up to 3 words: Which ocean contains the canarian islands?",
-     }
- ]
+ Finally, run the examples using the [ollama.env](ollama.env) variables to point to Ollama instead of OpenAI:

- chat_completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
+ ```
+ dotenv run -f ollama.env -- opentelemetry-instrument python examples/chat.py
```
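
For reference, a file like [ollama.env](ollama.env) redirects the OpenAI SDK to the local server. The following is only a sketch with hypothetical values (the variable names for models are an assumption, not taken from the repository), assuming Ollama's default OpenAI-compatible endpoint:

```
# Ollama exposes an OpenAI-compatible API under /v1
OPENAI_BASE_URL=http://localhost:11434/v1
# the OpenAI SDK requires a key even though Ollama ignores it
OPENAI_API_KEY=unused
# hypothetical variable names matching the models pulled above
CHAT_MODEL=qwen2.5:0.5b
EMBEDDINGS_MODEL=all-minilm:33m
```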
### Instrumentation specific environment variable configuration
@@ -110,20 +127,22 @@ response without querying the LLM.

### Azure OpenAI Environment Variables

- Azure is different from OpenAI primarily in that a URL has an implicit model. This means it ignores
- the model parameter set by the OpenAI SDK. The implication is that one endpoint cannot serve both
- chat and embeddings at the same time. Hence, we need separate environment variables for chat and
- embeddings. In either case, the `DEPLOYMENT_URL` is the "Endpoint Target URI" and the `API_KEY` is
- the `Endpoint Key` for a corresponding deployment in https://oai.azure.com/resource/deployments
-
- * `AZURE_CHAT_COMPLETIONS_DEPLOYMENT_URL`
-   * It should look like https://endpoint.com/openai/deployments/my-deployment/chat/completions?api-version=2023-05-15
- * `AZURE_CHAT_COMPLETIONS_API_KEY`
-   * It should be in hex like `abc01...` and possibly the same as `AZURE_EMBEDDINGS_API_KEY`
- * `AZURE_EMBEDDINGS_DEPLOYMENT_URL`
-   * It should look like https://endpoint.com/openai/deployments/my-deployment/embeddings?api-version=2023-05-15
- * `AZURE_EMBEDDINGS_API_KEY`
-   * It should be in hex like `abc01...` and possibly the same as `AZURE_CHAT_COMPLETIONS_API_KEY`
+ The `AzureOpenAI` client extends `OpenAI` with parameters specific to the Azure OpenAI Service.
+
+ * `AZURE_OPENAI_ENDPOINT` - "Azure OpenAI Endpoint" in https://oai.azure.com/resource/overview
+   * It should look like `https://<your-resource-name>.openai.azure.com/`
+ * `AZURE_OPENAI_API_KEY` - "API key 1 (or 2)" in https://oai.azure.com/resource/overview
+   * It should be a hex string like `abc01...`
+ * `OPENAI_API_VERSION` - "Inference version" from https://learn.microsoft.com/en-us/azure/ai-services/openai/api-version-deprecation
+   * It should look like `2024-10-01-preview`
+ * `TEST_CHAT_MODEL` - "Name" from https://oai.azure.com/resource/deployments of a deployed model
+   that supports tool calling, such as "gpt-4o-mini".
+ * `TEST_EMBEDDINGS_MODEL` - "Name" from https://oai.azure.com/resource/deployments of a deployed
+   model that supports embeddings, such as "text-embedding-3-small".
+
+ Note: The model parameter of a chat completion or embeddings request is used as the deployment
+ name. As deployment names are arbitrary, they may have no correlation with a real model like
+ `gpt-4o`.
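
To illustrate the note above, here is a minimal sketch (hypothetical endpoint and helper function, not part of this package) of the URL an `AzureOpenAI` chat request targets, showing the model argument landing in the URL path as the deployment name:

```python
import os

# hypothetical values standing in for the real environment variables
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://my-resource.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2024-10-01-preview"

def azure_chat_url(model: str) -> str:
    """The 'model' argument is used as the deployment name in the URL path,
    rather than being sent as an OpenAI model id."""
    endpoint = os.environ["AZURE_OPENAI_ENDPOINT"].rstrip("/")
    version = os.environ["OPENAI_API_VERSION"]
    return f"{endpoint}/openai/deployments/{model}/chat/completions?api-version={version}"

print(azure_chat_url("gpt-4o-mini"))
```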

## License