Set lower bounds for LiteLLM and OpenAI deps#20

Merged
kelsey-wong merged 4 commits into main from timklem/dep_bump
Dec 30, 2025
Conversation

@timklem-92
Contributor

I replaced the pinned versions with lower bounds so that consumers of the PyPI package can integrate more easily. The openai package had a major version bump to 2.0.0 around September this year, so I raised the LiteLLM and OpenAI minimum versions to that timeframe.
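A sketch of what the switch from exact pins to lower bounds might look like in pyproject.toml — the openai 2.0.0 floor comes from the description above, but the LiteLLM version number here is illustrative, not taken from the PR:

```toml
[project]
dependencies = [
    # Lower bounds instead of exact pins, so downstream resolvers have room
    "openai>=2.0.0",    # major version bump landed around September
    "litellm>=1.77.0",  # illustrative floor from the same timeframe
]
```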

I also added another Pytest fixture to deal with LiteLLM leaking coroutines (see the linked GitHub thread) and an extra null check to handle a type change in LiteLLM.


Checklist

  • Did you link the GitHub issue?
  • Did you follow deployment steps or bump the version if needed?
  • Did you add/update tests?
  • What QA did you do?
    • Tested...

Add type check fix

Revert versions

Add fix for coroutines not awaited in litellm
@timklem-92 timklem-92 requested a review from axl1313 December 24, 2025 02:27
Comment on lines +139 to +143
@@ -140,7 +140,7 @@ async def _generate_completion(
     # Convert ChoiceLogprobs to dict to avoid Pydantic validation issues
     logprobs = ChoiceLogprobs.model_validate(
         response.choices[0].logprobs.model_dump()
-        if hasattr(response.choices[0].logprobs, "model_dump")
+        if response.choices[0].logprobs and hasattr(response.choices[0].logprobs, "model_dump")
Collaborator
Is this check necessary? It's already within the condition if litellm_params.get("logprobs") and hasattr(response.choices[0], "logprobs"):
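The guard pattern in the diff can be shown in isolation. This is a minimal sketch with a hypothetical helper name, not code from the PR: the truthiness check short-circuits before hasattr/model_dump when a provider returns logprobs=None, which is the type change the description mentions.

```python
from typing import Any, Optional


def safe_logprobs_dump(logprobs: Optional[Any]) -> Optional[dict]:
    """Return a plain dict for a Pydantic-style logprobs object, or None.

    Mirrors the guard in the diff: if logprobs is None (or otherwise
    falsy), skip the model_dump() call entirely instead of validating None.
    """
    if logprobs and hasattr(logprobs, "model_dump"):
        return logprobs.model_dump()
    return None
```

Whether the outer condition already rules out the None case, as the reviewer suggests, depends on whether hasattr(response.choices[0], "logprobs") guarantees the attribute is non-None; a defensive inner check costs little either way.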

@kelsey-wong kelsey-wong merged commit e9a9fe9 into main Dec 30, 2025
5 of 9 checks passed

2 participants