
fix: Pass base_url, api_key, and other essential parameters to all litellm calls#487

Merged
MervinPraison merged 1 commit into main from claude/issue-482-20250523_112045 on May 23, 2025

Conversation

Owner

@MervinPraison MervinPraison commented May 23, 2025

Fixes #482

Problem

The LLM class stored the base_url, api_key, api_version, and timeout parameters, but none of them were passed to any of the 24+ litellm calls throughout the file. This caused connection-refused errors when using Ollama or other providers with custom base URLs.

Solution

  • Add _build_completion_params() method to centralize parameter handling
  • Update all 24+ litellm.completion() and litellm.acompletion() calls to use the new method
  • Fix async/sync inconsistencies in get_response_async() method
  • Ensure base_url, api_key, api_version, timeout, and other configured parameters are consistently passed to litellm
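The centralized helper described above might look roughly like the following. This is a minimal sketch of the pattern, not the actual implementation: the attribute names beyond those mentioned in the PR (base_url, api_key, api_version, timeout, stop_phrases) and the constructor signature are assumptions.

```python
from typing import Any, Dict, List, Optional

class LLM:
    """Minimal sketch of the parameter-centralization pattern described
    in this PR; names beyond those in the PR text are illustrative."""

    def __init__(self, model: str, base_url: Optional[str] = None,
                 api_key: Optional[str] = None,
                 api_version: Optional[str] = None,
                 timeout: Optional[int] = None,
                 stop_phrases: Optional[List[str]] = None):
        self.model = model
        self.base_url = base_url
        self.api_key = api_key
        self.api_version = api_version
        self.timeout = timeout
        self.stop_phrases = stop_phrases

    def _build_completion_params(self, **override_params: Any) -> Dict[str, Any]:
        """Merge stored configuration with call-specific overrides,
        skipping None values so litellm falls back to its defaults."""
        params: Dict[str, Any] = {"model": self.model}
        for name in ("base_url", "api_key", "api_version", "timeout"):
            value = getattr(self, name)
            if value is not None:
                params[name] = value
        if self.stop_phrases:
            params["stop"] = self.stop_phrases  # litellm expects `stop`
        params.update(override_params)  # call-specific values win
        return params
```

Each call site then becomes something like `litellm.completion(**self._build_completion_params(messages=messages, stream=True))`, so configured values such as base_url can no longer be silently dropped.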

Impact

Resolves connection issues for Ollama and other providers using custom base URLs or API configurations.

Generated with Claude Code

Summary by CodeRabbit

  • Refactor
    • Improved internal consistency and maintainability of language model interactions. No changes to user-facing features or behavior.

…tellm calls

- Add _build_completion_params() method to centralize parameter handling
- Update all 24+ litellm.completion() and litellm.acompletion() calls to use the new method
- Fix async/sync inconsistencies in get_response_async() method
- Ensure base_url, api_key, api_version, timeout, and other configured parameters are consistently passed to litellm
- Resolves connection issues for Ollama and other providers using custom base URLs

Fixes #482

Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
Contributor

coderabbitai bot commented May 23, 2025

Walkthrough

The changes introduce a private helper method, _build_completion_params, within the LLM class to centralize and standardize the construction of parameter dictionaries for all calls to litellm.completion and litellm.acompletion. All previous direct parameter passing is replaced by this helper, ensuring consistent inclusion of configuration options such as base_url.

Changes

File(s): src/praisonai-agents/praisonaiagents/llm/llm.py
Change summary: Added _build_completion_params method; refactored all litellm.completion/acompletion calls to use this helper; replaced explicit parameter passing with helper usage; no logic or control flow changes.

Assessment against linked issues

Objective: Ensure base_url is passed to litellm when specified (#482)
Addressed: Yes

Poem

The rabbit hops with code anew,
Centralized params—what a view!
No more missed base URLs in the night,
All calls to litellm now just right.
With helper in paw, refactor complete,
Consistency and clarity—oh, what a treat!
🐇✨


📜 Recent review details


📥 Commits

Reviewing files that changed from the base of the PR and between 5aebda9 and a3c688b.

📒 Files selected for processing (1)
  • src/praisonai-agents/praisonaiagents/llm/llm.py (26 hunks)
🔇 Additional comments (2)
src/praisonai-agents/praisonaiagents/llm/llm.py (2)

1435-1474: Excellent refactoring - centralizes parameter handling perfectly!

This new helper method successfully addresses the core issue described in the PR objectives. The implementation correctly:

  • Includes all essential parameters (base_url, api_key, api_version, timeout, etc.) that were missing from litellm calls
  • Uses conditional checks to avoid passing None values
  • Maps stop_phrases to the correct stop parameter for API compatibility
  • Provides a clean override mechanism for call-specific parameters
  • Maintains proper type hints and documentation

This will resolve the connection issues with Ollama and other providers requiring custom configurations.


370-376: Consistent and comprehensive usage across all litellm calls!

The systematic replacement of direct parameter passing with _build_completion_params() is excellent:

  • All 24+ litellm calls have been consistently updated
  • Both litellm.completion() and litellm.acompletion() calls are covered
  • Override parameters are properly passed for call-specific needs (messages, temperature, stream, tools, etc.)
  • Special cases like reasoning steps, tool calls, and self-reflection are handled correctly
  • The refactoring maintains existing logic while fixing the parameter passing issue

This ensures that all litellm calls will now receive the essential configuration parameters, resolving the connection issues described in issue #482.

Also applies to: 405-411, 421-427, 436-442, 554-558, 567-571, 582-587, 617-621, 630-634, 675-681, 710-716, 725-731, 968-973, 1000-1005, 1014-1019, 1032-1038, 1144-1148, 1158-1162, 1174-1180, 1205-1211, 1221-1226, 1267-1273, 1302-1308, 1317-1323, 1534-1539, 1547-1552, 1558-1563, 1640-1645, 1653-1658, 1664-1669

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, gemini-code-assist here with a summary of this pull request. The primary goal of this PR, as indicated by the title and description, is to address a bug where essential configuration parameters like base_url, api_key, api_version, and timeout were not being correctly passed to the underlying litellm completion calls within the LLM class. This omission prevented users from effectively using custom LLM providers or configurations, such as self-hosted Ollama instances, leading to connection errors.

The solution implemented involves introducing a new private helper method, _build_completion_params, to centralize the logic for constructing the parameter dictionary required by litellm.completion and litellm.acompletion. This method now includes all the relevant configuration attributes stored in the LLM instance, along with any call-specific parameters (like messages, temperature, stream, tools, etc.). All existing calls to litellm.completion and litellm.acompletion throughout the get_response, get_response_async, response, and aresponse methods have been updated to utilize this new helper method. Additionally, an inconsistency in the get_response_async method where synchronous litellm.completion was used instead of the asynchronous litellm.acompletion has been corrected.

The expected impact is the resolution of connection and configuration issues for users leveraging custom base_url, api_key, or other specific settings with various LLM providers supported by litellm.

Highlights

  • Bug Fix: Resolves an issue where base_url, api_key, and other configured parameters were not passed to litellm calls, preventing custom provider setups from working.
  • Refactoring: Introduces a new private helper method _build_completion_params to centralize the construction of parameters for litellm completion calls.
  • Parameter Consistency: Ensures that all relevant configuration parameters stored in the LLM instance are consistently included in every litellm.completion and litellm.acompletion call.
  • Async/Sync Correction: Fixes an inconsistency in the get_response_async method by replacing synchronous litellm.completion calls with their asynchronous counterparts, await litellm.acompletion.
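The async/sync correction called out above follows a standard pattern: an async method must await the asynchronous variant rather than invoke the blocking one. A minimal sketch, with litellm stubbed out (the stub's behavior is illustrative only; the real litellm.acompletion accepts the same keyword arguments the helper builds):

```python
import asyncio

async def acompletion(**params):
    # Stand-in for litellm.acompletion, used here so the sketch is
    # self-contained; it simply echoes back a couple of parameters.
    return {"model": params["model"], "messages": params.get("messages")}

async def get_response_async(params):
    # Before the fix: litellm.completion(**params) was called here, a
    # blocking call inside an async method that stalls the event loop.
    # After the fix: the asynchronous counterpart is awaited instead.
    return await acompletion(**params)

result = asyncio.run(get_response_async({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "hi"}],
}))
```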

Changelog

  • src/praisonai-agents/praisonaiagents/llm/llm.py
    • Added _build_completion_params method to consolidate parameter building for litellm calls (around L1435).
    • Updated all instances of litellm.completion in get_response and response methods to use _build_completion_params (Diffs 1-12, 25).
    • Updated all instances of litellm.acompletion in get_response_async and aresponse methods to use _build_completion_params (Diffs 13-20, 22-24, 26).
    • Corrected synchronous litellm.completion calls to asynchronous await litellm.acompletion within the get_response_async method (Diffs 21, 22, 23).


Config keys unseen,
Caused errors, quite mean.
A helper arrives,
Now the code thrives,
With parameters keen.


@qodo-code-review

Qodo Merge was enabled for this repository. To continue using it, please link your Git account with your Qodo account here.

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: test

Failed stage: Install dependencies [❌]

Failure summary:

The action failed because Poetry detected that the pyproject.toml file has changed significantly since the poetry.lock file was last generated. Poetry requires the lock file to be in sync with the project configuration. The error message states: "pyproject.toml changed significantly since poetry.lock was last generated. Run `poetry lock` to fix the lock file."

Relevant error logs:
1:  ##[group]Operating System
2:  Ubuntu
...

541:  Installing Poetry (2.1.3): Done
542:  Poetry (2.1.3) is installed now. Great!
543:  You can test that everything is set up by executing:
544:  `poetry --version`
545:  ##[group]Run poetry install
546:  poetry install
547:  poetry run python -m pip install duckduckgo_search
548:  shell: /usr/bin/bash -e {0}
549:  env:
550:  pythonLocation: /opt/hostedtoolcache/Python/3.11.12/x64
551:  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.11.12/x64/lib
552:  ##[endgroup]
553:  Creating virtualenv praisonai-Qhuj7T3N-py3.11 in /home/runner/.cache/pypoetry/virtualenvs
554:  Installing dependencies from lock file
555:  pyproject.toml changed significantly since poetry.lock was last generated. Run `poetry lock` to fix the lock file.
556:  ##[error]Process completed with exit code 1.
557:  Post job cleanup.
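The failure above is routine lock-file drift. A sketch of the usual remedy, run locally and committed (the `poetry lock` and `git` commands are standard; the commit message is illustrative):

```shell
# Regenerate the lock file so it matches the updated pyproject.toml,
# then commit the refreshed poetry.lock so CI can install from it.
poetry lock
git add poetry.lock
git commit -m "chore: regenerate poetry.lock"
```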

@MervinPraison MervinPraison merged commit bc6b636 into main May 23, 2025
3 of 6 checks passed
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request addresses an important issue by ensuring that essential parameters are consistently passed to all litellm calls. The introduction of the _build_completion_params() method is a good way to centralize parameter handling and reduce code duplication. The changes seem comprehensive and well-explained in the pull request description.

Merge Readiness

The pull request effectively addresses the reported issue and introduces a well-structured solution. The code is now more maintainable and less prone to errors related to missing parameters. The changes are relatively self-contained and don't introduce significant complexity. I am unable to approve this pull request, and recommend that others review and approve this code before merging. However, I believe that this pull request is in good shape to be merged after addressing the comments.

@MervinPraison MervinPraison deleted the claude/issue-482-20250523_112045 branch June 3, 2025 06:24
shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
…482-20250523_112045

fix: Pass base_url, api_key, and other essential parameters to all litellm calls


Development

Successfully merging this pull request may close these issues.

ollama doesn't work if not running on the same host
