[EDOT Collector] Support OpenInference semantic conventions for LLM/AI observability #554

@metalshanked

Description

The EDOT Collector should natively support OpenInference semantic conventions for AI/LLM observability. OpenInference extends OpenTelemetry with semantic conventions specifically designed for tracing LLM applications — covering LLM calls, retrieval, reranking, tool use, agents, and embeddings.

As LLM-powered applications become increasingly common, teams need to observe AI workloads with the same rigor as traditional services. OpenInference is the emerging open standard for this, built on top of OTel, and is already supported by platforms like Arize Phoenix, LangSmith, and others.

Describe a specific use case for the enhancement or feature

Organizations running LLM applications (built with LangChain, LlamaIndex, the OpenAI SDK, CrewAI, etc.) and instrumented with OpenInference want to send traces through the EDOT Collector to Elasticsearch/Kibana. Currently, EDOT passes these spans through but doesn't understand or enrich OpenInference-specific attributes such as:

  • openinference.span.kind (LLM, CHAIN, RETRIEVER, RERANKER, TOOL, AGENT, EMBEDDING)
  • llm.input_messages, llm.output_messages
  • llm.model_name, llm.token_count.*
  • retrieval.documents, embedding.embeddings
  • input.value, output.value
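
Concretely, an OpenInference LLM span carries these as a flat attribute map. A minimal illustration (attribute names follow the OpenInference conventions listed above; all values are invented):

```python
# Example attribute map for a single OpenInference LLM span.
# Names follow the OpenInference conventions; values are made up.
llm_span_attributes = {
    "openinference.span.kind": "LLM",
    "llm.model_name": "example-model",          # hypothetical model name
    "input.value": "Summarize the quarterly report.",
    "output.value": "The report shows ...",
    "llm.token_count.prompt": 412,
    "llm.token_count.completion": 96,
    "llm.token_count.total": 508,
}

# Without OpenInference awareness, a backend sees only generic key/value
# pairs; the span kind is the field a processor would key on.
print(llm_span_attributes["openinference.span.kind"])  # -> LLM
```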

Proposed scope

  1. Processor support — An OpenInference-aware processor that can parse, validate, and enrich spans carrying OpenInference attributes (e.g., mapping to Elastic-specific fields for better Kibana visualization).

  2. Kibana/ES integration — Index templates and dashboards that understand OpenInference span kinds, enabling out-of-the-box LLM observability views (token usage, latency per model, retrieval quality, agent traces).

  3. Connector or exporter enrichment — Optionally transform OpenInference spans into Elastic APM-compatible formats for unified service maps that include AI components.

  4. Documentation — Guide for instrumenting LLM apps with OpenInference SDKs and routing traces through EDOT to Elastic.
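
To make point 1 concrete, here is a minimal sketch of the kind of enrichment such a processor could perform, written as a plain Python function over a span's attribute dict. The OpenInference source names come from the spec; the `gen_ai.*` target names are an illustrative assumption (borrowed from OTel GenAI-style naming), not a settled mapping:

```python
# Sketch only: the actual processor would operate on pdata spans inside the
# collector, and the target field names are an assumed, not agreed, mapping.
OPENINFERENCE_KINDS = {"LLM", "CHAIN", "RETRIEVER", "RERANKER", "TOOL", "AGENT", "EMBEDDING"}

def enrich(attributes: dict) -> dict:
    """Return a copy of span attributes with OpenInference fields mapped
    to hypothetical Elastic-friendly equivalents."""
    kind = attributes.get("openinference.span.kind")
    if kind not in OPENINFERENCE_KINDS:
        return attributes  # not an OpenInference span; pass through unchanged

    enriched = dict(attributes)
    if "llm.model_name" in attributes:
        enriched["gen_ai.request.model"] = attributes["llm.model_name"]
    if "llm.token_count.prompt" in attributes:
        enriched["gen_ai.usage.input_tokens"] = attributes["llm.token_count.prompt"]
    if "llm.token_count.completion" in attributes:
        enriched["gen_ai.usage.output_tokens"] = attributes["llm.token_count.completion"]
    return enriched
```

A dedicated processor could also validate attribute types and drop or flag malformed spans, which a generic passthrough cannot do.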

Alternatives considered

  • Raw OTel passthrough: Works today, but Kibana has no awareness of OpenInference semantics — spans show up as generic traces with no LLM-specific context.
  • Custom ingest pipelines: Users can write Elasticsearch ingest pipelines to parse OpenInference attributes, but this is fragile and unsupported.
  • Waiting for OTel GenAI semconv: OTel's own GenAI semantic conventions are still experimental. OpenInference is more mature and widely adopted today.
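
One further stopgap worth noting: the generic `transform` processor in opentelemetry-collector-contrib can already rewrite OpenInference attributes with OTTL statements, without any OpenInference awareness in EDOT itself. A hedged sketch (the `gen_ai.*` target names are illustrative assumptions, and this still leaves Kibana without LLM-specific views):

```yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Copy OpenInference fields to GenAI-style names where present.
          - set(attributes["gen_ai.request.model"], attributes["llm.model_name"]) where attributes["openinference.span.kind"] == "LLM"
          - set(attributes["gen_ai.usage.input_tokens"], attributes["llm.token_count.prompt"]) where attributes["llm.token_count.prompt"] != nil
```

Like custom ingest pipelines, this puts the mapping burden on each user, which is exactly what native support would remove.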
