Commit e001d45

Integrate Daniela's suggestions
1 parent f9f41eb commit e001d45

File tree

1 file changed: +5 −6 lines


solutions/observability/apps/llm-observability.md

Lines changed: 5 additions & 6 deletions
```diff
@@ -9,7 +9,7 @@ To keep your LLM-powered applications reliable, efficient, cost-effective, and e
 Elastic’s end-to-end LLM observability is delivered through the following methods:
 
 - Metrics and logs ingestion for LLM APIs (via [Elastic integrations](https://www.elastic.co/guide/en/integrations/current/introduction.html))
-- APM tracing for OpenAI Models (via [instrumentation](https://elastic.github.io/opentelemetry/))
+- APM tracing for LLM Models (via [instrumentation](https://elastic.github.io/opentelemetry/))
 
 ## Metrics and logs ingestion for LLM APIs (via Elastic integrations)
 
@@ -31,9 +31,9 @@ Depending on the LLM provider you choose, the following table shows which source
 | [OpenTelemetry][int-wip-otel] | OTLP | 🚧 | 🚧 | This would support Elastic extensions of otel's GenAI semantic conventions |
 
 
-## APM tracing for OpenAI models (via instrumentation)
+## APM tracing for LLM models (via instrumentation)
 
-Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging OpenAI models hosted on OpenAI, Azure, and Amazon Bedrock, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your OpenAI-powered application.
+Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging LLM models hosted on OpenAI, Azure, and Amazon Bedrock, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your LLM-powered application.
 
 You can instrument the application with one of the following Elastic Distributions of OpenTelemetry (EDOT):
 
@@ -49,10 +49,9 @@ EDOT includes many types of instrumentation. The following table shows the statu
 | OpenAI | Python | [openai][edot-openai-py]||||| Tested on OpenAI, Azure and Ollama |
 | OpenAI| JS/Node | [openai][edot-openai-js] ||||| Tested on OpenAI, Azure and Ollama|
 | OpenAI| Java| [com.openai:openai-java][edot-openai-java] ||||| Tested on OpenAI, Azure and Ollama|
-| Langchain| JS/Node| [@langchain/core][wip-edot-langchain-js] || 🚧| 🚧 | 🔒| Tested on OpenAI; Not yet finished |
 | (AWS) Boto| Python| [botocore][otel-bedrock-py]||||| Bedrock (not SageMaker) `InvokeModel*` and `Converse*` APIs Owner: Riccardo |
-| Cohere| Python| [cohere][wip-otel-cohere-py] | 🚧 | 🚧 | 🚧 | 🚧 | Owner: Leighton from Microsoft |
-| Google Cloud AI Platform | Python | [google-cloud-aiplatform][otel-vertexai-py] || 🚧| 🚧| 🚧 | Vertex (not Gemini); Clashes with OpenLLMetry package |
+| Google Cloud AI Platform | Python | [google-cloud-aiplatform][otel-vertexai-py] | | 🚧| 🚧| 🚧 | |
+| Langchain| JS/Node| [@langchain/core][wip-edot-langchain-js] || 🚧| 🚧 | 🔒| Tested on OpenAI; Not yet finished |
 
 ## Getting started
```
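The changed text mentions token consumption per request and Elastic extensions of OTel's GenAI semantic conventions. As a minimal, stdlib-only sketch of the kind of attributes such a trace span carries, here is a hypothetical helper (`genai_span_attributes` is not a real EDOT function; the `gen_ai.*` keys follow the OpenTelemetry GenAI semantic-convention names):

```python
# Illustrative sketch only: the attribute map an LLM request span might carry
# under the OpenTelemetry GenAI semantic conventions. The helper itself is
# hypothetical and does not come from EDOT.
def genai_span_attributes(system: str, model: str,
                          input_tokens: int, output_tokens: int) -> dict:
    """Build an attribute map for one LLM request span (illustration only)."""
    return {
        "gen_ai.system": system,                      # provider, e.g. "openai"
        "gen_ai.request.model": model,                # model requested
        "gen_ai.usage.input_tokens": input_tokens,    # prompt-side token count
        "gen_ai.usage.output_tokens": output_tokens,  # completion-side token count
    }

attrs = genai_span_attributes("openai", "gpt-4o-mini", 42, 128)
total = attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"]
print(total)  # → 170, the per-request token consumption surfaced in APM
```

In a real instrumented application, EDOT attaches attributes like these to spans automatically; this sketch only shows the shape of the data that lands in Elastic.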
