
Commit 5bc9397

Merge branch 'main' into dependabot/pip/grpcio-1.69.0
2 parents: 372681e + 249e567

File tree

17 files changed: +245 −16 lines

.github/workflows/ci.yml

Lines changed: 6 additions & 0 deletions

```diff
@@ -40,6 +40,8 @@ jobs:
       - run: pip-licenses

   operator-image-buildable:
+    env:
+      USE_ELASTIC_REGISTRY: ${{ github.event_name != 'pull_request' || ( github.event_name == 'pull_request' && github.event.pull_request.head.repo.fork == false && github.actor != 'dependabot[bot]' ) }}
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
@@ -54,7 +56,11 @@ jobs:
           registry: ${{ secrets.ELASTIC_DOCKER_REGISTRY }}
           username: ${{ secrets.ELASTIC_DOCKER_USERNAME }}
           password: ${{ secrets.ELASTIC_DOCKER_PASSWORD }}
+        if: ${{ env.USE_ELASTIC_REGISTRY == 'true' }}
       - run: docker build -f operator/Dockerfile --build-arg DISTRO_DIR=./dist .
+        if: ${{ env.USE_ELASTIC_REGISTRY == 'true' }}
+      - run: docker build -f operator/Dockerfile --build-arg PYTHON_GLIBC_IMAGE=cgr.dev/chainguard/python --build-arg PYTHON_GLIBC_IMAGE_VERSION=latest-dev --build-arg DISTRO_DIR=./dist --build-arg IMAGE=cgr.dev/chainguard/bash --build-arg IMAGE_VERSION=latest .
+        if: ${{ env.USE_ELASTIC_REGISTRY != 'true' }}

   test:
     runs-on: ubuntu-latest
```
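The new `USE_ELASTIC_REGISTRY` expression gates the Elastic-registry login and build behind trusted events, falling back to public Chainguard images otherwise. Its truth table can be sketched in Python (a hypothetical helper mirroring the Actions expression, not code from this commit):

```python
def use_elastic_registry(event_name: str, head_repo_is_fork: bool, actor: str) -> bool:
    """Mirror of the USE_ELASTIC_REGISTRY expression added to ci.yml.

    True for pushes, and for pull requests that come from the same repo
    and are not opened by dependabot.
    """
    return event_name != "pull_request" or (
        not head_repo_is_fork and actor != "dependabot[bot]"
    )


# Pushes use the Elastic registry; fork and dependabot PRs do not, since
# they cannot access the registry secrets.
print(use_elastic_registry("push", False, "octocat"))                  # True
print(use_elastic_registry("pull_request", True, "octocat"))           # False
print(use_elastic_registry("pull_request", False, "dependabot[bot]"))  # False
```

This explains why the same `operator/Dockerfile` is built twice in the hunk above: once against the private registry, once with public base images, with mutually exclusive `if:` conditions.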

.github/workflows/release.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -100,7 +100,7 @@ jobs:
       - name: Build and push image
         id: docker-push
-        uses: docker/build-push-action@48aba3b46d1b1fec4febb7c5d0c644b249a11355 # v6.10.0
+        uses: docker/build-push-action@b32b51a8eda65d6793cd0494a773d4f6bcef32dc # v6.11.0
         with:
           context: .
           platforms: linux/amd64,linux/arm64
```

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -24,6 +24,7 @@ coverage
 .eggs
 .cache
 /testdb.sql
+.venv
 venv
 benchmarks/result*
 coverage.xml
```

CHANGELOG.md

Lines changed: 6 additions & 0 deletions

```diff
@@ -1,5 +1,11 @@
 # Elastic Distribution of OpenTelemetry Python Changelog

+## v0.6.1
+
+- Bump opentelemetry-sdk-extension-aws to 2.1.0 (#222)
+- Bump opentelemetry-resourcedetector-gcp to 1.8.0a0 (#229)
+- Add OpenAI examples (#226)
+
 ## v0.6.0

 - Bump to OTel 1.29.0 (#211)
```

README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -80,12 +80,12 @@ This distribution sets the following defaults:
 - `OTEL_METRICS_EXPORTER`: `otlp`
 - `OTEL_LOGS_EXPORTER`: `otlp`
 - `OTEL_EXPORTER_OTLP_PROTOCOL`: `grpc`
-- `OTEL_EXPERIMENTAL_RESOURCE_DETECTORS`: `process_runtime,os,otel,telemetry_distro,aws_ec2,aws_ecs,aws_elastic_beanstalk,azure_app_service,azure_vm`
+- `OTEL_EXPERIMENTAL_RESOURCE_DETECTORS`: `process_runtime,os,otel,telemetry_distro,_gcp,aws_ec2,aws_ecs,aws_elastic_beanstalk,azure_app_service,azure_vm`
 - `OTEL_METRICS_EXEMPLAR_FILTER`: `always_off`
 - `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE`: `DELTA`

 > [!NOTE]
-> `OTEL_EXPERIMENTAL_RESOURCE_DETECTORS` cloud resource detectors are dynamically set. When running in a Kubernetes Pod it will be set to `process_runtime,os,otel,telemetry_distro,aws_eks`.
+> `OTEL_EXPERIMENTAL_RESOURCE_DETECTORS` cloud resource detectors are dynamically set. When running in a Kubernetes Pod it will be set to `process_runtime,os,otel,telemetry_distro,_gcp,aws_eks`.

 ### Distribution specific configuration variables
```
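The dynamic default described in the note can be approximated like this (a sketch only, not the distro's actual code; the `KUBERNETES_SERVICE_HOST` check is an assumption about how a Pod might be detected):

```python
DEFAULT_DETECTORS = (
    "process_runtime,os,otel,telemetry_distro,_gcp,"
    "aws_ec2,aws_ecs,aws_elastic_beanstalk,azure_app_service,azure_vm"
)
K8S_DETECTORS = "process_runtime,os,otel,telemetry_distro,_gcp,aws_eks"


def resource_detectors(environ: dict) -> str:
    """Pick the resource detector list, as a dynamic default."""
    # An explicit user setting always wins.
    if "OTEL_EXPERIMENTAL_RESOURCE_DETECTORS" in environ:
        return environ["OTEL_EXPERIMENTAL_RESOURCE_DETECTORS"]
    # Inside a Kubernetes Pod, prefer the EKS detector over the VM-level ones.
    if "KUBERNETES_SERVICE_HOST" in environ:
        return K8S_DETECTORS
    return DEFAULT_DETECTORS


print(resource_detectors({}))  # full default list, including _gcp
print(resource_detectors({"KUBERNETES_SERVICE_HOST": "10.0.0.1"}))
```

The point of the change in this commit is simply that `_gcp` now appears in both the default and the Kubernetes list.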

dev-requirements.txt

Lines changed: 3 additions & 3 deletions

```diff
@@ -71,7 +71,7 @@ opentelemetry-proto==1.29.0
     # oteltest
 opentelemetry-resource-detector-azure==0.1.5
     # via elastic-opentelemetry (pyproject.toml)
-opentelemetry-resourcedetector-gcp==1.7.0a0
+opentelemetry-resourcedetector-gcp==1.8.0a0
     # via elastic-opentelemetry (pyproject.toml)
 opentelemetry-sdk==1.29.0
     # via
@@ -100,7 +100,7 @@ pip-tools==7.4.1
     # via elastic-opentelemetry (pyproject.toml)
 pluggy==1.5.0
     # via pytest
-protobuf==5.29.2
+protobuf==5.29.3
     # via
     #   googleapis-common-protos
     #   opentelemetry-proto
@@ -130,7 +130,7 @@ urllib3==2.2.3
     # via requests
 wheel==0.45.1
     # via pip-tools
-wrapt==1.17.0
+wrapt==1.17.1
     # via
     #   deprecated
     #   opentelemetry-instrumentation
```

examples/openai/README.md

Lines changed: 77 additions & 0 deletions (new file)

# OpenAI Zero-Code Instrumentation Examples

This is an example of how to instrument OpenAI calls with zero code changes,
using `opentelemetry-instrument` included in the Elastic Distribution of
OpenTelemetry Python ([EDOT Python][edot-python]).

When the OpenAI examples run, they export traces, metrics, and logs to an
OTLP-compatible endpoint. Traces and metrics include details such as the model
used and the duration of the LLM request. In the case of chat, logs capture the
request and the generated response. Together these provide a comprehensive
view of the performance and behavior of your OpenAI usage.

## Install

First, set up a Python virtual environment:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Next, install [EDOT Python][edot-python] and dotenv, a portable way to load
environment variables:
```bash
pip install "python-dotenv[cli]" elastic-opentelemetry
```

Finally, run `edot-bootstrap`, which analyzes the code and installs the
relevant instrumentation to record traces, metrics, and logs:
```bash
edot-bootstrap --action=install
```

## Configure

Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP-compatible endpoint should be listening for traces, metrics, and logs
on `http://localhost:4317`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as
well.

For example, if Elastic APM server is running locally, edit `.env` like this:
```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
```

## Run

There are two examples, and they run the same way:

### Chat

[chat.py](chat.py) asks the LLM a geography question and prints the response.

Run it like this:
```bash
dotenv run -- opentelemetry-instrument python chat.py
```

You should see something like "Atlantic Ocean" unless your LLM hallucinates!

### Embeddings

[embeddings.py](embeddings.py) creates in-memory VectorDB embeddings about
Elastic products. Then, it searches for the one most similar to a question.

Run it like this:
```bash
dotenv run -- opentelemetry-instrument python embeddings.py
```

You should see something like "Connectors can help you connect to a database",
unless your LLM hallucinates!

---

[edot-python]: https://github.com/elastic/elastic-otel-python/blob/main/docs/get-started.md

examples/openai/chat.py

Lines changed: 39 additions & 0 deletions (new file)

```python
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

import openai

CHAT_MODEL = os.environ.get("CHAT_MODEL", "gpt-4o-mini")


def main():
    client = openai.Client()

    messages = [
        {
            "role": "user",
            "content": "Answer in up to 3 words: Which ocean contains Bouvet Island?",
        }
    ]

    chat_completion = client.chat.completions.create(model=CHAT_MODEL, messages=messages)
    print(chat_completion.choices[0].message.content)


if __name__ == "__main__":
    main()
```

examples/openai/embeddings.py

Lines changed: 65 additions & 0 deletions (new file)

Note: the committed file is missing commas after the "Search" and "Security" entries, so adjacent strings are silently concatenated and the list holds 6 products instead of 8. The commas are restored below.

```python
# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
# or more contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

import numpy as np
import openai

EMBEDDINGS_MODEL = os.environ.get("EMBEDDINGS_MODEL", "text-embedding-3-small")


def main():
    client = openai.Client()

    products = [
        "Search: Ingest your data, and explore Elastic's machine learning and retrieval augmented generation (RAG) capabilities.",
        "Observability: Unify your logs, metrics, traces, and profiling at scale in a single platform.",
        "Security: Protect, investigate, and respond to cyber threats with AI-driven security analytics.",
        "Elasticsearch: Distributed, RESTful search and analytics.",
        "Kibana: Visualize your data. Navigate the Stack.",
        "Beats: Collect, parse, and ship in a lightweight fashion.",
        "Connectors: Connect popular databases, file systems, collaboration tools, and more.",
        "Logstash: Ingest, transform, enrich, and output.",
    ]

    # Generate embeddings for each product. Keep them in an array instead of a vector DB.
    product_embeddings = []
    for product in products:
        product_embeddings.append(create_embedding(client, product))

    query_embedding = create_embedding(client, "What can help me connect to a database?")

    # Calculate cosine similarity between the query and document embeddings
    similarities = []
    for product_embedding in product_embeddings:
        similarity = np.dot(query_embedding, product_embedding) / (
            np.linalg.norm(query_embedding) * np.linalg.norm(product_embedding)
        )
        similarities.append(similarity)

    # Get the index of the most similar document
    most_similar_index = np.argmax(similarities)

    print(products[most_similar_index])


def create_embedding(client, text):
    return client.embeddings.create(input=[text], model=EMBEDDINGS_MODEL, encoding_format="float").data[0].embedding


if __name__ == "__main__":
    main()
```
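The ranking step in `embeddings.py` is plain cosine similarity. It can be exercised in isolation with toy vectors, no OpenAI call required (a minimal sketch; the vectors and names are illustrative only):

```python
import numpy as np


def cosine_similarity(a, b) -> float:
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Dot product normalized by the vector magnitudes:
    # 1.0 = same direction, 0.0 = orthogonal (unrelated).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


docs = {
    "parallel": [2.0, 0.0],    # same direction as the query
    "diagonal": [1.0, 1.0],    # partially similar
    "orthogonal": [0.0, 3.0],  # unrelated
}
query = [1.0, 0.0]

scores = {name: cosine_similarity(query, vec) for name, vec in docs.items()}
best = max(scores, key=scores.get)
print(best)  # parallel
```

Magnitude does not matter, only direction, which is why `[2.0, 0.0]` scores a perfect 1.0 against `[1.0, 0.0]` — the same property that makes cosine similarity a sensible metric for comparing embedding vectors.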

examples/openai/env.example

Lines changed: 26 additions & 0 deletions (new file)

```
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment to use Ollama instead of OpenAI
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=unused
# CHAT_MODEL=qwen2.5:0.5b
# EMBEDDINGS_MODEL=all-minilm:33m

# OTEL_EXPORTER_* variables are not required. If you would like to change your
# OTLP endpoint to Elastic APM server using HTTP, uncomment the following:
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

OTEL_SERVICE_NAME=openai-example

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
# Change to affect behavior of which resources are detected. Note: these
# choices are specific to the language, in this case Python.
OTEL_EXPERIMENTAL_RESOURCE_DETECTORS=process_runtime,os,otel,telemetry_distro

# Export metrics every 3 seconds instead of every minute
OTEL_METRIC_EXPORT_INTERVAL=3000
# Export traces every 3 seconds instead of every 5 seconds
OTEL_BSP_SCHEDULE_DELAY=3000
```
