fix: Pass base_url, api_key, and other essential parameters to all litellm calls (#487)
Conversation
fix: Pass base_url, api_key, and other essential parameters to all litellm calls

- Add `_build_completion_params()` method to centralize parameter handling
- Update all 24+ `litellm.completion()` and `litellm.acompletion()` calls to use the new method
- Fix async/sync inconsistencies in `get_response_async()` method
- Ensure `base_url`, `api_key`, `api_version`, `timeout`, and other configured parameters are consistently passed to litellm
- Resolves connection issues for Ollama and other providers using custom base URLs

Fixes #482

Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
Walkthrough: The changes introduce a private helper method, `_build_completion_params`, that centralizes construction of the parameter dictionary passed to `litellm.completion` and `litellm.acompletion`.
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here with a summary of this pull request. The primary goal of this PR, as indicated by the title and description, is to address a bug where essential configuration parameters like base_url, api_key, api_version, and timeout were not being correctly passed to the underlying litellm completion calls within the LLM class. This omission prevented users from effectively using custom LLM providers or configurations, such as self-hosted Ollama instances, leading to connection errors.
The solution implemented involves introducing a new private helper method, _build_completion_params, to centralize the logic for constructing the parameter dictionary required by litellm.completion and litellm.acompletion. This method now includes all the relevant configuration attributes stored in the LLM instance, along with any call-specific parameters (like messages, temperature, stream, tools, etc.). All existing calls to litellm.completion and litellm.acompletion throughout the get_response, get_response_async, response, and aresponse methods have been updated to utilize this new helper method. Additionally, an inconsistency in the get_response_async method where synchronous litellm.completion was used instead of the asynchronous litellm.acompletion has been corrected.
The expected impact is the resolution of connection and configuration issues for users leveraging custom base_url, api_key, or other specific settings with various LLM providers supported by litellm.
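The actual body of `_build_completion_params` is not shown in this thread. As a minimal sketch of what such a helper might look like — attribute names follow the parameters listed in the PR, but everything else (constructor signature, filtering of unset values, override precedence) is an assumption:

```python
class LLM:
    """Illustrative stand-in for the real class in llm.py."""

    def __init__(self, model, base_url=None, api_key=None,
                 api_version=None, timeout=None):
        self.model = model
        self.base_url = base_url
        self.api_key = api_key
        self.api_version = api_version
        self.timeout = timeout

    def _build_completion_params(self, **override_params):
        """Merge stored config with call-specific params for litellm calls."""
        params = {
            "model": self.model,
            "base_url": self.base_url,
            "api_key": self.api_key,
            "api_version": self.api_version,
            "timeout": self.timeout,
        }
        # Drop keys that were never configured so litellm falls back to its defaults
        params = {k: v for k, v in params.items() if v is not None}
        # Call-specific params (messages, temperature, stream, tools, ...) take precedence
        params.update(override_params)
        return params
```

With a single merge point like this, every call site forwards the stored configuration automatically instead of each one re-listing (and potentially forgetting) individual keys.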
Highlights

- **Bug Fix**: Resolves an issue where `base_url`, `api_key`, and other configured parameters were not passed to `litellm` calls, preventing custom provider setups from working.
- **Refactoring**: Introduces a new private helper method `_build_completion_params` to centralize the construction of parameters for `litellm` completion calls.
- **Parameter Consistency**: Ensures that all relevant configuration parameters stored in the `LLM` instance are consistently included in every `litellm.completion` and `litellm.acompletion` call.
- **Async/Sync Correction**: Fixes an inconsistency in the `get_response_async` method by replacing synchronous `litellm.completion` calls with their asynchronous counterparts, `await litellm.acompletion`.

Changelog

- `src/praisonai-agents/praisonaiagents/llm/llm.py`
  - Added `_build_completion_params` method to consolidate parameter building for litellm calls (around L1435).
  - Updated all instances of `litellm.completion` in `get_response` and `response` methods to use `_build_completion_params` (Diffs 1-12, 25).
  - Updated all instances of `litellm.acompletion` in `get_response_async` and `aresponse` methods to use `_build_completion_params` (Diffs 13-20, 22-24, 26).
  - Corrected synchronous `litellm.completion` calls to asynchronous `await litellm.acompletion` within the `get_response_async` method (Diffs 21, 22, 23).
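The call-site change described in the changelog can be sketched as follows. The stub below stands in for the real `litellm` client so the pattern runs offline; the method names mirror those in the changelog, but the body is an illustration of the fix, not the actual diff:

```python
import asyncio
import types

# Stub standing in for litellm, so the async pattern below is runnable offline.
litellm = types.SimpleNamespace()

async def _acompletion(**params):
    return params  # echo back what was received, for inspection

litellm.acompletion = _acompletion

async def get_response_async(build_params, messages):
    # The correction applied by this PR: `await litellm.acompletion(...)`
    # replaces a blocking `litellm.completion(...)` call that previously ran
    # inside this coroutine, and the merged params (including base_url,
    # api_key, etc.) are forwarded instead of only model/messages.
    return await litellm.acompletion(**build_params(messages=messages, stream=False))

def build(**overrides):
    """Hypothetical stand-in for self._build_completion_params(**overrides)."""
    return {"model": "ollama/llama3",
            "base_url": "http://localhost:11434", **overrides}

out = asyncio.run(get_response_async(build, [{"role": "user", "content": "hi"}]))
```

Awaiting the async variant matters beyond correctness of the parameters: a synchronous `litellm.completion` call inside a coroutine blocks the event loop for the duration of the network request.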
Config keys unseen,
Caused errors, quite mean.
A helper arrives,
Now the code thrives,
With parameters keen.
CI Feedback 🧐: A test triggered by this PR failed.
Code Review
The pull request addresses an important issue by ensuring that essential parameters are consistently passed to all litellm calls. The introduction of the _build_completion_params() method is a good way to centralize parameter handling and reduce code duplication. The changes seem comprehensive and well-explained in the pull request description.
Merge Readiness
The pull request effectively addresses the reported issue and introduces a well-structured solution. The code is now more maintainable and less prone to errors related to missing parameters. The changes are relatively self-contained and do not introduce significant complexity. I am unable to approve this pull request myself, so I recommend that others review and approve this code before merging; I believe it is in good shape to be merged once the comments are addressed.
…482-20250523_112045 fix: Pass base_url, api_key, and other essential parameters to all litellm calls
Fixes #482
Problem

The `LLM` class stored `base_url`, `api_key`, `api_version`, and `timeout`, but none of these parameters were passed to any of the 24+ litellm calls throughout the file. This caused connection-refused errors when trying to use Ollama or other providers with custom base URLs.

Solution

- Added a `_build_completion_params()` method to centralize parameter handling
- Updated all `litellm.completion()` and `litellm.acompletion()` calls to use the new method
- Fixed async/sync inconsistencies in the `get_response_async()` method
- Ensured `base_url`, `api_key`, `api_version`, `timeout`, and other configured parameters are consistently passed to litellm

Impact

Resolves connection issues for Ollama and other providers using custom base URLs or API configurations.
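The failure mode described above can be reproduced in miniature. This is a deliberately buggy sketch of the pre-fix pattern — all names are illustrative, and the fake completion function stands in for `litellm.completion`:

```python
captured = {}

def fake_completion(**kwargs):
    """Stand-in for litellm.completion; records what it was called with."""
    captured.update(kwargs)
    return {"ok": True}

class BuggyLLM:
    """Reproduces the pre-fix pattern: config stored but never forwarded."""

    def __init__(self, model, base_url=None):
        self.model = model
        self.base_url = base_url  # stored on the instance...

    def get_response(self, messages):
        # ...but only model and messages are forwarded, so a custom base_url
        # (e.g. a self-hosted Ollama server) is silently ignored and the
        # request goes to the provider's default endpoint instead.
        return fake_completion(model=self.model, messages=messages)

BuggyLLM("ollama/llama3", base_url="http://localhost:11434") \
    .get_response([{"role": "user", "content": "hi"}])
assert "base_url" not in captured  # the dropped parameter — the root cause
```

Because the constructor accepted and stored `base_url` without error, the bug surfaced only at connection time, as the connection-refused errors reported in #482.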
Generated with Claude Code