Conversation

@Dwij1704 Dwij1704 commented Mar 25, 2025

📥 Pull Request

📘 Description
When running chat.completions with stream=True, it throws:
An error occurred: type object 'SpanAttributes' has no attribute 'LLM_CONTENT_COMPLETION_CHUNK'
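
For reference, a minimal sketch of what the fix amounts to: defining the missing constant on SpanAttributes so the streaming instrumentor can reference it. The neighboring names and the attribute's string value are assumptions, not the project's actual code.

# Hypothetical sketch; the attribute's string value is an assumption.
class SpanAttributes:
    LLM_PROMPTS = "gen_ai.prompt"
    LLM_COMPLETIONS = "gen_ai.completion"
    # Added so the OpenAI stream instrumentor can tag each streamed chunk:
    LLM_CONTENT_COMPLETION_CHUNK = "gen_ai.completion.chunk"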

🧪 Testing
Added the LLM_CONTENT_COMPLETION_CHUNK attribute and tested it using the following script:

import os
from openai import OpenAI
from dotenv import load_dotenv
load_dotenv()
import agentops
agentops.init()
agentops.start_session(tags=["openai_chat"])

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY")  # Make sure to set your API key in environment variables
)

def chat_with_gpt():
    try:
        # Create a streaming chat completion
        stream = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of France?"}
            ],
            temperature=0.7,
            max_tokens=100,
            stream=True  # Enable streaming
        )
        
        # Print the response as it streams
        print("Assistant's response: ", end="", flush=True)
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                print(chunk.choices[0].delta.content, end="", flush=True)
        print()  # New line at the end
        
    except Exception as e:
        print(f"An error occurred: {e}")

if __name__ == "__main__":
    chat_with_gpt() 

@Dwij1704 Dwij1704 requested a review from tcdent March 25, 2025 17:02

@tcdent tcdent left a comment

Good catch!

Dwij1704 commented Mar 25, 2025

Good catch!

@tcdent
We heard back from VRSEN AI regarding the Langchain callback handlers; they said it's broken due to type object 'SpanAttributes' has no attribute 'LLM_COMPLETIONS'

@the-praxs was going through the implementation and noticed that you specifically mentioned to NEVER manually set the root completion attributes (SpanAttributes.LLM_COMPLETIONS or "gen_ai.completion") and to let the OpenTelemetry backend derive these values from the detailed attributes, yet those attributes were also commented out from semconv.

Can you give me some clarity on how you wish to manage both of these conventions, manually or via semconv?
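
For context, a rough sketch of the two approaches being contrasted. The attribute strings follow the gen_ai semantic conventions mentioned above; the span variable and the example values are illustrative assumptions, not the actual instrumentation code.

# Detailed, indexed completion attributes set per choice by the instrumentation:
span.set_attribute("gen_ai.completion.0.role", "assistant")
span.set_attribute("gen_ai.completion.0.content", "The capital of France is Paris.")

# Root completion attribute (SpanAttributes.LLM_COMPLETIONS / "gen_ai.completion"):
# per the comment in the code, this should NOT be set manually; the OpenTelemetry
# backend is expected to derive it from the detailed attributes above.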

dot-agi commented Mar 25, 2025

This catch is a good reminder that we should have tests covering these attributes in some way. The merge order caused this unexpected error.
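
A minimal sketch of what such a test could look like, assuming the constants live on a SpanAttributes class importable from the package's semconv module (the import path is an assumption):

# Hypothetical test; the import path is an assumption.
from agentops.semconv import SpanAttributes

def test_span_attribute_constants_exist():
    # Guards against attribute names being dropped or renamed between merges.
    for name in ("LLM_COMPLETIONS", "LLM_CONTENT_COMPLETION_CHUNK", "LLM_RESPONSE_MODEL"):
        assert hasattr(SpanAttributes, name), f"SpanAttributes is missing {name}"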

Commits:
…tes. Update attribute references for LLM_COMPLETIONS, LLM_CONTENT_COMPLETION_CHUNK, and LLM_RESPONSE_MODEL to ensure consistency across the codebase.
…r enhanced semantic conventions in LangChain integration.

codecov bot commented Mar 25, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅


Dwij1704 commented Mar 25, 2025

I fixed the error, but when testing I can't see token usage in any of the spans for the Agents SDK:
https://jaeger.agentops.cloud/trace/c9a45998d27d289fa35220a37f6fcac1
Langchain is working as expected.
@the-praxs @tcdent Can you confirm whether it used to set those attributes before?

dot-agi commented Mar 25, 2025


Token usage fixed in #888

@Dwij1704 Dwij1704 merged commit 000720c into main Mar 25, 2025
8 of 10 checks passed
@Dwij1704 Dwij1704 deleted the fix-openai-stream-instrumentor branch March 25, 2025 21:13