Description
The EDOT Collector should natively support OpenInference semantic conventions for AI/LLM observability. OpenInference extends OpenTelemetry with semantic conventions specifically designed for tracing LLM applications — covering LLM calls, retrieval, reranking, tool use, agents, and embeddings.
As LLM-powered applications become increasingly common, teams need to observe AI workloads with the same rigor as traditional services. OpenInference is the emerging open standard for this, built on top of OTel, and is already supported by platforms like Arize Phoenix, LangSmith, and others.
Describe a specific use case for the enhancement or feature
Organizations running LLM applications (using LangChain, LlamaIndex, OpenAI SDK, CrewAI, etc.) instrumented with OpenInference want to send traces through the EDOT Collector to Elasticsearch/Kibana. Currently, EDOT passes through the spans but doesn't understand or enrich OpenInference-specific attributes like:
- `openinference.span.kind` (LLM, CHAIN, RETRIEVER, RERANKER, TOOL, AGENT, EMBEDDING)
- `llm.input_messages`, `llm.output_messages`
- `llm.model_name`, `llm.token_count.*`
- `retrieval.documents`, `embedding.embeddings`
- `input.value`, `output.value`
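To make the attribute surface concrete, here is a sketch of the flat attribute map an OpenInference-instrumented LLM span might carry. The attribute names follow the OpenInference semantic conventions; the values and the validation helper are illustrative, not part of any existing API:

```python
# Illustrative only: a flat attribute map for an OpenInference LLM span.
# Attribute names follow the OpenInference spec; values and the helper
# function are hypothetical.

OPENINFERENCE_SPAN_KINDS = {
    "LLM", "CHAIN", "RETRIEVER", "RERANKER", "TOOL", "AGENT", "EMBEDDING",
}

span_attributes = {
    "openinference.span.kind": "LLM",
    "llm.model_name": "gpt-4o",
    # Messages are flattened with indexed keys per the OpenInference spec.
    "llm.input_messages.0.message.role": "user",
    "llm.input_messages.0.message.content": "Summarize this ticket.",
    "llm.output_messages.0.message.role": "assistant",
    "llm.output_messages.0.message.content": "The ticket requests support for...",
    "llm.token_count.prompt": 128,
    "llm.token_count.completion": 64,
    "llm.token_count.total": 192,
    "input.value": "Summarize this ticket.",
    "output.value": "The ticket requests support for...",
}

def is_valid_openinference_span(attrs: dict) -> bool:
    """Return True if the span declares a known OpenInference span kind."""
    return attrs.get("openinference.span.kind") in OPENINFERENCE_SPAN_KINDS

print(is_valid_openinference_span(span_attributes))  # True
```

An OpenInference-aware processor could use a check like this to decide whether a span should be enriched or passed through untouched.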
Proposed scope
- Processor support — an OpenInference-aware processor that can parse, validate, and enrich spans carrying OpenInference attributes (e.g., mapping them to Elastic-specific fields for richer Kibana visualizations).
- Kibana/ES integration — index templates and dashboards that understand OpenInference span kinds, enabling out-of-the-box LLM observability views (token usage, latency per model, retrieval quality, agent traces).
- Connector or exporter enrichment — optionally transform OpenInference spans into Elastic APM-compatible formats for unified service maps that include AI components.
- Documentation — a guide for instrumenting LLM apps with OpenInference SDKs and routing traces through EDOT to Elastic.
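As a rough illustration of the enrichment the proposed processor could perform, part of it can already be approximated with the stock `transform` processor from opentelemetry-collector-contrib. The sketch below is hypothetical: the target field name `elastic.llm.span_kind` is invented for illustration, and the exporter settings are trimmed:

```yaml
# Hypothetical EDOT Collector pipeline. A dedicated OpenInference processor
# does not exist yet; the OTTL statement shows the kind of enrichment it
# could perform.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Copy the OpenInference span kind to an Elastic-friendly field
          # (field name is illustrative) so dashboards can group on it.
          - set(attributes["elastic.llm.span_kind"], attributes["openinference.span.kind"]) where attributes["openinference.span.kind"] != nil

exporters:
  elasticsearch:
    endpoint: https://localhost:9200

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [transform]
      exporters: [elasticsearch]
```

A native processor would go further than attribute copying — e.g., validating span kinds and decoding flattened message lists — which is awkward to express in OTTL alone.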
Alternatives considered
- Raw OTel passthrough: Works today, but Kibana has no awareness of OpenInference semantics — spans show up as generic traces with no LLM-specific context.
- Custom ingest pipelines: Users can write Elasticsearch ingest pipelines to parse OpenInference attributes, but this is fragile and unsupported.
- Waiting for OTel GenAI semconv: OTel's own GenAI semantic conventions are still experimental. OpenInference is more mature and widely adopted today.
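To illustrate the custom-ingest-pipeline alternative, a user-maintained Elasticsearch pipeline might look like the sketch below. The field paths are assumptions — how OTel attributes land in documents depends on the exporter's mapping mode — which is exactly why this approach is fragile:

```json
{
  "description": "Hypothetical pipeline copying OpenInference fields; field paths are assumptions",
  "processors": [
    {
      "set": {
        "field": "labels.llm_model",
        "copy_from": "attributes.llm.model_name",
        "ignore_empty_value": true
      }
    },
    {
      "rename": {
        "field": "attributes.llm.token_count.total",
        "target_field": "metrics.llm_tokens_total",
        "ignore_missing": true
      }
    }
  ]
}
```

Every such pipeline must track both the OpenInference spec and the exporter's document mapping, and breaks silently when either changes — native support in EDOT would remove that burden.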
Additional context
- OpenInference repo: https://github.com/Arize-ai/openinference
- OpenInference semantic conventions spec: https://arize-ai.github.io/openinference/spec/semantic_conventions.html
- OpenInference provides instrumentation for: Python (LangChain, LlamaIndex, OpenAI, CrewAI, Haystack, etc.), Java (LangChain4j, Spring AI), and JS
- This would position Elastic as a first-class destination for AI observability alongside traditional APM