
Conversation

@Dwij1704
Member

@Dwij1704 Dwij1704 commented Feb 19, 2025

📥 Pull Request

📘 Description
Refactored LlmTracker to ensure proper instrumentation of OpenAI and LiteLLM calls. OpenAI is now tracked only when called directly, preventing duplicate instrumentation when it is used via LiteLLM. Improved call-stack detection to differentiate direct OpenAI calls from LiteLLM-wrapped ones.
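
For context, the gating works roughly like the sketch below: the OpenAI patch consults the call-stack check before recording, so requests LiteLLM issues internally pass through untouched. The names make_patched_create and record_event are illustrative stand-ins, not the actual LlmTracker internals; the detection helper itself appears in the review diff further down.

    def make_patched_create(original_create, record_event, is_litellm_call):
        """Wrap an OpenAI completion method so only direct calls are recorded."""
        def patched_create(*args, **kwargs):
            response = original_create(*args, **kwargs)
            if not is_litellm_call():
                # The call did not pass through LiteLLM, so it is a direct
                # OpenAI call and should be tracked here. LiteLLM-wrapped
                # calls are recorded by the LiteLLM handler instead.
                record_event(response)
            return response
        return patched_create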

🧪 Testing
Executed a test script covering multiple LLM providers (Anthropic, OpenAI, LiteLLM); a sketch of such a script follows the list. Verified that:
✅ LiteLLM instrumentation does not override OpenAI instrumentation when OpenAI is called directly.
✅ Calls to OpenAI and Anthropic through LiteLLM are correctly tracked.
✅ Direct OpenAI and Anthropic API calls function as expected.
✅ No duplicate tracking or unintended overrides occur.
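
The PR body does not include the script itself; below is a minimal sketch of the kind of script described, assuming the agentops public API of the time (init/end_session) and with model names chosen purely for illustration:

    import agentops
    import litellm
    from anthropic import Anthropic
    from openai import OpenAI

    agentops.init()  # starts a tracked session and installs the instrumentation

    # Direct OpenAI call: recorded by the OpenAI handler.
    OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
    )

    # Direct Anthropic call: recorded by the Anthropic handler.
    Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=16,
        messages=[{"role": "user", "content": "ping"}],
    )

    # LiteLLM-wrapped calls: each should be recorded exactly once, by the
    # LiteLLM handler, even though LiteLLM calls the OpenAI and Anthropic
    # SDKs under the hood.
    litellm.completion(model="gpt-4o-mini",
                       messages=[{"role": "user", "content": "ping"}])
    litellm.completion(model="claude-3-5-sonnet-latest",
                       messages=[{"role": "user", "content": "ping"}])

    agentops.end_session("Success")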

EDIT (by @the-praxs): Closes #655

@Dwij1704 Dwij1704 requested a review from dot-agi February 19, 2025 21:24
Comment on lines 103 to 139
        self.litellm_initialized = False

    def _is_litellm_call(self):
        """
        Detects if the API call originated from LiteLLM.
        Returns True if LiteLLM appears in the call stack **before** OpenAI.
        """
        stack = inspect.stack()

        litellm_seen = False  # Track if LiteLLM was encountered
        openai_seen = False  # Track if OpenAI was encountered

        for frame in stack:
            module = inspect.getmodule(frame.frame)

            module_name = module.__name__ if module else None

            filename = frame.filename.lower()

            if module_name and "litellm" in module_name or "litellm" in filename:
                print("LiteLLM detected.")
                litellm_seen = True

            if module_name and "openai" in module_name or "openai" in filename:
                print("OpenAI detected.")
                openai_seen = True

        if not litellm_seen:
            return False

        return litellm_seen

    def override_api(self):
        """
        Overrides key methods of the specified API to record events.
        """

        litellm_initialized = False


The litellm_initialized variable is declared twice: once as an instance variable and once as a local variable in override_api(). The local assignment is never read and never updates the instance attribute, so the instance flag is ineffective.

📝 Committable Code Suggestion

‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change (the block below is the replacement code; relative to the current code it simply drops the unused local litellm_initialized = False at the end of override_api()):

        self.litellm_initialized = False

    def _is_litellm_call(self):
        """
        Detects if the API call originated from LiteLLM.
        Returns True if LiteLLM appears in the call stack **before** OpenAI.
        """
        stack = inspect.stack()

        litellm_seen = False  # Track if LiteLLM was encountered
        openai_seen = False  # Track if OpenAI was encountered

        for frame in stack:
            module = inspect.getmodule(frame.frame)
            module_name = module.__name__ if module else None
            filename = frame.filename.lower()

            if module_name and "litellm" in module_name or "litellm" in filename:
                print("LiteLLM detected.")
                litellm_seen = True

            if module_name and "openai" in module_name or "openai" in filename:
                print("OpenAI detected.")
                openai_seen = True

        if not litellm_seen:
            return False

        return litellm_seen

    def override_api(self):
        """
        Overrides key methods of the specified API to record events.
        """

@codecov

codecov bot commented Feb 19, 2025

Codecov Report

Attention: Patch coverage is 12.00000% with 22 lines in your changes missing coverage. Please review.

Files with missing lines    Patch %    Lines
agentops/llms/tracker.py    12.00%     22 Missing ⚠️


Contributor

@areibman areibman left a comment


Works! Thanks

@areibman areibman linked an issue Feb 19, 2025 that may be closed by this pull request
@Dwij1704 Dwij1704 enabled auto-merge (squash) February 19, 2025 23:34
@dot-agi dot-agi changed the title from "Fix/pr 655" to "fix: LiteLLM and OpenAI SDK tracking" Feb 20, 2025
Member

@dot-agi dot-agi left a comment


I believe this check should be done in the litellm.py file instead.

This keeps the code clean.
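
A hedged sketch of that relocation, with the module path and function name assumed rather than taken from the repository: the stack-walk helper would live in the LiteLLM provider module, and the tracker would import it instead of carrying provider-specific detection logic itself.

    # agentops/llms/litellm.py (hypothetical placement)
    import inspect

    def is_litellm_call() -> bool:
        """Return True if any frame on the current call stack comes from litellm."""
        for frame in inspect.stack():
            module = inspect.getmodule(frame.frame)
            name = module.__name__ if module else ""
            if "litellm" in name or "litellm" in frame.filename.lower():
                return True
        return False

The tracker would then do from .litellm import is_litellm_call, keeping the LiteLLM-specific check in one provider-owned place.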

@dot-agi
Member

dot-agi commented Feb 20, 2025

@Dwij1704 can you please check whether the formatters are passing in your code? The Static Analysis test is failing for that reason.

I am pushing a fix for the integration test so that test should start working.

@dot-agi dot-agi added the bug (Something isn't working) and in progress labels Feb 20, 2025
Member

@dot-agi dot-agi left a comment


LFGOO🚀🚀

@Dwij1704 Dwij1704 disabled auto-merge February 20, 2025 19:31
@Dwij1704 Dwij1704 merged commit 0dc622f into main Feb 20, 2025
9 of 10 checks passed
@Dwij1704 Dwij1704 deleted the fix/pr-655 branch February 20, 2025 19:31

Development

Successfully merging this pull request may close these issues.

[Bug]: Session not tracked for litellm + autogen-agentchat v4