
Conversation

@dot-agi dot-agi (Member) commented May 14, 2025

📥 Pull Request

📘 Description
Fixes duplicate LLM calls in CrewAI caused by instrumentation of the LLM class producing an extra span
Closes #959

🧪 Testing
Verified with the original job posting example

📋 Future work

  • Link to the current span being created by the SDK
  • Add the custom attributes to the current span
  • Integrate LiteLLM and remove the two items above

@dot-agi dot-agi requested review from Dwij1704, areibman and Copilot May 14, 2025 19:06
@dot-agi dot-agi self-assigned this May 14, 2025
Copilot AI (Contributor) left a comment

Pull Request Overview

This PR addresses duplicate LLM calls by adding provider-detection logic to the CrewAI instrumentation.

  • Introduces a mapping from LLM providers to specific instrumentors to selectively skip instrumentation.
  • Wraps provider detection logic in a try/except block to handle potential errors during instrumentation setup.
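The approach described above can be sketched as follows. This is a minimal illustration only: `provider_instrumentor`, the provider names, and `should_skip_instrumentation` are assumed names, not the actual CrewAI instrumentation code.

```python
import logging

logger = logging.getLogger(__name__)

# Assumed mapping of lowercase provider names to the instrumentor that
# already emits a span for that provider's LLM calls.
provider_instrumentor = {
    "openai": "OpenAIInstrumentor",
    "anthropic": "AnthropicInstrumentor",
}


def should_skip_instrumentation(model: str) -> bool:
    """Return True when a provider-specific instrumentor already covers
    this model, so the CrewAI wrapper should not produce a second span."""
    try:
        # e.g. "openai/gpt-4o" -> "openai"; bare model names have no prefix.
        provider = model.split("/", 1)[0] if "/" in model else ""
        return provider.lower() in provider_instrumentor
    except Exception:
        # As in the PR: detection errors must not break instrumentation setup.
        logger.exception("Provider detection failed; instrumenting anyway")
        return False
```

With a mapping like this, a matched provider short-circuits the CrewAI span creation, while unknown providers fall through to normal instrumentation.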


codecov bot commented May 14, 2025

Codecov Report

Attention: Patch coverage is 0% with 48 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
agentops/instrumentation/crewai/instrumentation.py | 0.00% | 48 Missing ⚠️


@dot-agi dot-agi requested a review from Copilot May 14, 2025 20:10
Copilot AI (Contributor) left a comment

Pull Request Overview

This PR fixes duplicate LLM calls by detecting when a provider’s own instrumentation is in use and skipping the extra span, refactors token attribute handling, and adds error handling around provider detection.

  • Added mapping from model identifiers to known instrumentors and early-return if matched
  • Reformatted callback token attribute setting for readability
  • Wrapped provider-detection logic in a try/except to log and avoid crashing
Comments suppressed due to low confidence (2)

agentops/instrumentation/crewai/instrumentation.py:439

  • The with block appears unindented relative to the preceding else: and may execute even when an instrumentor is found, breaking the intended skip logic. Please correct the indentation so that the span is only started in the else branch.
with tracer.start_as_current_span(f"{llm}.llm", kind=SpanKind.CLIENT, attributes={}) as span:
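For clarity, here is a self-contained sketch of the control flow the reviewer is asking for, with the `with` block indented under `else:` so the span starts only when no instrumentor matches. The tracer below is a stand-in, not the OpenTelemetry API, and all names are illustrative.

```python
from contextlib import contextmanager

started_spans = []


@contextmanager
def start_as_current_span(name):
    # Stand-in for tracer.start_as_current_span(...) that records span names.
    started_spans.append(name)
    yield name


provider_instrumentor = {"openai": "OpenAIInstrumentor"}


def call_llm(provider, llm="crew_llm"):
    instrumentor = provider_instrumentor.get(provider.lower())
    if instrumentor:
        # A provider instrumentor already produces a span: skip ours.
        return "delegated"
    else:
        # Correctly indented: runs only when no instrumentor was found.
        with start_as_current_span(f"{llm}.llm"):
            return "instrumented"
```

Running `call_llm("openai")` records no span, while `call_llm("ollama")` starts exactly one, which is the skip behavior the PR intends.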

agentops/instrumentation/crewai/instrumentation.py:431

  • [nitpick] New behavior for both matching and non-matching providers should be covered by unit tests to ensure correct skipping or instrumentation. Please add tests for cases where the provider is in the map and where it is not.
instrumentor = provider_instrumentor.get(provider.lower())
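A sketch of the tests this nitpick asks for might look like the following. `provider_instrumentor` and its contents are assumptions based on the snippet above, not the project's actual fixtures.

```python
# Hypothetical lookup mirroring the snippet above.
provider_instrumentor = {"openai": "OpenAIInstrumentor"}


def instrumentor_for(provider):
    return provider_instrumentor.get(provider.lower())


def test_known_provider_is_skipped():
    # A mapped provider resolves to an instrumentor, so the extra
    # CrewAI span should be skipped.
    assert instrumentor_for("OpenAI") == "OpenAIInstrumentor"


def test_unknown_provider_is_instrumented():
    # An unmapped provider resolves to None, so CrewAI should
    # instrument the call itself.
    assert instrumentor_for("ollama") is None
```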

@Dwij1704 Dwij1704 left a comment


CrewAI instrumentation is so much better now, thanks @the-praxs!!

@Dwij1704 Dwij1704 merged commit 63fdd7f into main May 15, 2025
9 of 10 checks passed
@Dwij1704 Dwij1704 deleted the fix/duplicate-crew-llm-calls branch May 15, 2025 19:47
dot-agi added a commit that referenced this pull request May 21, 2025


Development

Successfully merging this pull request may close these issues.

[Bug]: Duplicated LLM calls on crew runs
