solutions/observability/apps/llm-observability.md
5 additions & 6 deletions
@@ -9,7 +9,7 @@ To keep your LLM-powered applications reliable, efficient, cost-effective, and e
 Elastic’s end-to-end LLM observability is delivered through the following methods:
 
 - Metrics and logs ingestion for LLM APIs (via [Elastic integrations](https://www.elastic.co/guide/en/integrations/current/introduction.html))
-- APM tracing for OpenAI Models (via [instrumentation](https://elastic.github.io/opentelemetry/))
+- APM tracing for LLM Models (via [instrumentation](https://elastic.github.io/opentelemetry/))
 
 ## Metrics and logs ingestion for LLM APIs (via Elastic integrations)
@@ -31,9 +31,9 @@ Depending on the LLM provider you choose, the following table shows which source
 |[OpenTelemetry][int-wip-otel]| OTLP | 🚧 | 🚧 | This would support Elastic extensions of otel's GenAI semantic conventions |
 
-## APM tracing for OpenAI models (via instrumentation)
+## APM tracing for LLM models (via instrumentation)
 
-Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging OpenAI models hosted on OpenAI, Azure, and Amazon Bedrock, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your OpenAI-powered application.
+Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging LLM models hosted on OpenAI, Azure, and Amazon Bedrock, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your LLM-powered application.
 
 You can instrument the application with one of the following Elastic Distributions of OpenTelemetry (EDOT):
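As a rough sketch of what the zero-code EDOT instrumentation described above can look like for a Python application calling an OpenAI-compatible API: the package name, endpoint URL, and service name below are illustrative assumptions, not values from this PR; check the EDOT documentation for your language and distribution.

```shell
# Sketch only: zero-code instrumentation of a Python app that calls an LLM API.
# Package names and endpoint values are placeholders; consult the EDOT Python docs.
pip install elastic-opentelemetry          # Elastic Distribution of OpenTelemetry (Python)
opentelemetry-bootstrap --action=install   # install instrumentations for detected libraries

# Point the OTLP exporter at your Elastic APM endpoint (values are placeholders).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://my-deployment.apm.example.elastic.cloud:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <secret-token>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-llm-app"

# Run the application unmodified; spans carrying model name, token counts,
# and latency are exported over OTLP.
opentelemetry-instrument python app.py
```

The JS/Node and Java distributions in the table below follow the same pattern: install the distribution, configure the OTLP endpoint, and run without code changes.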
@@ -49,10 +49,9 @@ EDOT includes many types of instrumentation. The following table shows the statu
 | OpenAI | Python |[openai][edot-openai-py]| ✅ | ✅ | ✅ | ✅ | Tested on OpenAI, Azure and Ollama |
 | OpenAI| JS/Node |[openai][edot-openai-js]| ✅ | ✅ | ✅ | ✅ | Tested on OpenAI, Azure and Ollama|
 | OpenAI| Java|[com.openai:openai-java][edot-openai-java]| ✅ | ✅ | ✅| ✅| Tested on OpenAI, Azure and Ollama|
-| Langchain| JS/Node|[@langchain/core][wip-edot-langchain-js]| ✅ | 🚧| 🚧 | 🔒| Tested on OpenAI; Not yet finished |