
🐛 Bug Report: span attributes gen_ai.prompt and gen_ai.completion are deprecated in the latest OpenTelemetry Semantic Conventions #3515


Which component is this bug for?

LLM Semantic Conventions

📜 Description

In the OpenTelemetry instrumentations in this repository, prompt contents are currently recorded under the following span attributes:

  • gen_ai.prompt for request prompts
  • gen_ai.completion for response outputs

However, in the latest (v1.38.0) OpenTelemetry GenAI Semantic Conventions, both of these attributes are explicitly marked as deprecated and removed from the specification.
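
For reference, with the current instrumentations a chat span ends up carrying indexed, flattened attributes of roughly this shape (values are illustrative, not taken from a real trace):

gen_ai.prompt.0.role = "system"
gen_ai.prompt.0.content = "Answer the question based on the context below. ..."
gen_ai.prompt.1.role = "user"
gen_ai.prompt.1.content = "Question: What is the capital of Japan?"
gen_ai.completion.0.role = "assistant"
gen_ai.completion.0.content = "The capital of Japan is Tokyo."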

👟 Reproduction steps

Here is sample code using LangChain and Gemini.

Note that LangChain and Gemini are just examples; the same issue applies to other instrumented tools as well.

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_core.runnables import RunnablePassthrough


from opentelemetry import trace
# Trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# LLM
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor


def setUpOpenTelemetry():
    otlp_endpoint = "http://otel-collector:4317/v1"

    # Trace
    provider = TracerProvider()
    processor = BatchSpanProcessor(OTLPSpanExporter(
        endpoint=otlp_endpoint, insecure=True))
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)

    LangchainInstrumentor().instrument()
    GoogleGenerativeAiInstrumentor().instrument()


setUpOpenTelemetry()

template = ChatPromptTemplate.from_messages([
    SystemMessage(content="""Answer the question based on the context below. If the question cannot be answered using the information provided, answer with "I don't know"."""),
    HumanMessagePromptTemplate.from_template("Context: {context}"),
    HumanMessagePromptTemplate.from_template("Questtion: {question}"),
])

model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-lite")


ramen_chatbot = (
    {
        "question": RunnablePassthrough(),
        "context": lambda _: "The capital of France, Japan, and Germany are Paris, Tokyo, and Berlin respectively.",
    }
    | template
    | model
)
response = ramen_chatbot.invoke("What is the capital of Japan?")

print(response.content)

👍 Expected behavior

Although the specification marks these deprecated attributes as having "no replacement at this time", I believe the following attributes provide a suitable alternative according to the latest GenAI semantic conventions:

  • gen_ai.input.messages for request prompts
  • gen_ai.output.messages for response outputs
  • gen_ai.system_instructions for system instructions

These attributes follow the message-structured model introduced in the GenAI specification and align with the recommended approach for capturing prompt and response data, instead of the deprecated gen_ai.prompt and gen_ai.completion span attributes.


I believe we can replace constants such as GEN_AI_PROMPT with an appropriate alternative like GEN_AI_INPUT_MESSAGES. For example, the instrumentation currently sets indexed attributes like this:

_set_span_attribute(
    span,
    f"{GenAIAttributes.GEN_AI_PROMPT}.{i}.role",
    _message_type_to_role(msg.type),
)
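
As a rough sketch (not a concrete proposal) of what the migrated code could look like, assuming a GEN_AI_INPUT_MESSAGES constant is available and reusing the existing _set_span_attribute and _message_type_to_role helpers (messages stands for the LangChain message list being instrumented, which is an assumption about the surrounding code):

import json

# Hypothetical sketch: build the whole message list once and record it as a
# single gen_ai.input.messages attribute instead of per-index attributes.
input_messages = [
    {
        "role": _message_type_to_role(msg.type),
        "parts": [{"type": "text", "content": msg.content}],
    }
    for msg in messages
]

_set_span_attribute(
    span,
    GenAIAttributes.GEN_AI_INPUT_MESSAGES,  # "gen_ai.input.messages"
    json.dumps(input_messages),
)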

In addition, we should follow the updated value format specified by the GenAI semantic conventions (e.g., for gen_ai.output.messages). The expected structure looks like:

[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "The weather in Paris is currently rainy with a temperature of 57°F."
      }
    ],
    "finish_reason": "stop"
  }
]
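
For illustration, a minimal, self-contained sketch of emitting that structure from an instrumentation (the span name and message values are illustrative, not taken from the actual instrumentation code):

import json
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical sketch: record the structured output messages as a single
# gen_ai.output.messages attribute, serialized as JSON.
output_messages = [
    {
        "role": "assistant",
        "parts": [{"type": "text", "content": "The capital of Japan is Tokyo."}],
        "finish_reason": "stop",
    }
]

with tracer.start_as_current_span("gemini.chat") as span:
    span.set_attribute("gen_ai.output.messages", json.dumps(output_messages))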

👎 Actual Behavior with Screenshots

Spans with the deprecated attributes are emitted.

Image

🤖 Python Version

3.14.0

📃 Provide any additional context for the Bug.

These attributes were deprecated in Semantic Conventions v1.28.0, by the following commit:

32b75a8d465ddea8af396666cd4020c15f4859e1

Although that commit introduced several event-based attributes such as gen_ai.user.message, these event attributes were later re-organized into a structured messaging model in another commit, which was included in v1.37.0 of the specification.

3e06ddb5fc940eff38f66766372fc7b458d03906

This newer version consolidates prompt, output, and instruction data into attributes like gen_ai.input.messages, gen_ai.output.messages, and gen_ai.system_instructions, replacing earlier experimental event fields.


There are a few existing issues in this repository that discuss OpenTelemetry SemConv compliance, but most of them take a broad or high-level perspective. None of them focus specifically on the deprecation of gen_ai.prompt and gen_ai.completion or the migration toward the new structured message attributes defined in the latest GenAI semantic conventions.

Because those issues do not address this concrete and well-scoped problem, opening this issue is still meaningful and necessary to ensure proper alignment with the current specification.


👀 Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None
