Conversation
…n-check mito-ai: remove improper environment check
mito-ai: rename OpenAIProvider to ProviderManager
mito-ai: fix dropdown width
- Created new AddFieldButton.css with styles for:
  - .add-field-container (wrapper div)
  - .add-field-button (button width)
  - .add-field-dialog-textarea (textarea in dialog)
- Replaced inline styles with CSS classes in AddFieldButton.tsx
- Changed keyboard shortcut from Ctrl/Cmd+Enter to Enter for submit (Shift+Enter for new line)

Co-authored-by: aaron <aaron@sagacollab.com>
mito-ai: implement litellm
Prominent social proof
Fix/streamlit conversion
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```python
self.api_key = api_key
self.base_url = base_url
self.timeout = timeout
self.max_retries = max_retries
```
Unused max_retries parameter has no effect
Low Severity
The `LiteLLMClient` constructor accepts a `max_retries` parameter and stores it as `self.max_retries`, but the value is never used. Neither `request_completions` nor `stream_completions` passes `max_retries` to `get_litellm_completion_function_params`, and that function doesn't accept such a parameter. LiteLLM supports retries via a `num_retries` parameter, but it is not being used here. Callers may expect retry behavior that won't actually occur.
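A minimal sketch of one possible fix, assuming the client ultimately calls `litellm.completion` (in the PR the kwargs are built by `get_litellm_completion_function_params`, which would need a matching parameter added): forward the stored value as LiteLLM's `num_retries` knob.

```python
import litellm

class LiteLLMClient:
    def __init__(self, api_key: str, base_url: str, timeout: int = 30, max_retries: int = 2):
        self.api_key = api_key
        self.base_url = base_url
        self.timeout = timeout
        self.max_retries = max_retries  # currently stored but never forwarded

    def request_completions(self, model: str, messages: list[dict]):
        # LiteLLM's retry parameter is `num_retries`; forwarding the stored
        # value gives callers the retry behavior the constructor implies.
        return litellm.completion(
            model=model,
            messages=messages,
            api_key=self.api_key,
            base_url=self.base_url,
            timeout=self.timeout,
            num_retries=self.max_retries,
        )
```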
Description
Note
Introduces enterprise-ready LiteLLM integration and centralizes AI provider routing, with security-focused model controls and API refinements.
- `ProviderManager` to unify model selection/streaming across OpenAI, Anthropic (beta API), Gemini, and LiteLLM; deprecates `OpenAIProvider` (a minimal sketch of the routing idea follows this note)
- Constants, server logs for enterprise status, `LiteLLMClient`, backend validation of model changes, and `GET /mito-ai/available-models`; deployment guide added
- `convert` and new `add-field` endpoint; adds `chart_add_field_prompt`; uses the fast model path
- `ProviderManager` and system prompt; removes direct Anthropic calls
- `model` param dropped in favor of provider-managed fast/smartest models; chat name generation uses the fast model
- `log/handlers.py`; `.eslintignore` updates; `verify.md` clarifies rebuild steps

Written by Cursor Bugbot for commit 1f1c301. This will update automatically on new commits.
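To make the centralized routing concrete, here is a hypothetical minimal sketch of the `ProviderManager` idea, assuming it delegates to LiteLLM's provider-prefixed model strings. The `FAST_MODELS`/`SMART_MODELS` tables and the `complete` method are illustrative names, not the PR's actual API.

```python
import litellm

# Hypothetical tables mapping each provider to a "fast" and a "smartest"
# model, using LiteLLM's provider-prefixed model strings.
FAST_MODELS = {
    "openai": "gpt-4o-mini",
    "anthropic": "anthropic/claude-3-haiku-20240307",
    "gemini": "gemini/gemini-1.5-flash",
}
SMART_MODELS = {
    "openai": "gpt-4o",
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",
    "gemini": "gemini/gemini-1.5-pro",
}

class ProviderManager:
    """Sketch: choose fast vs. smartest model per provider, route via LiteLLM."""

    def __init__(self, provider: str = "openai"):
        self.provider = provider

    def complete(self, messages: list[dict], fast: bool = False, stream: bool = False):
        table = FAST_MODELS if fast else SMART_MODELS
        model = table[self.provider]
        # One call path for every provider: LiteLLM dispatches on the
        # model-string prefix (e.g. "anthropic/", "gemini/").
        return litellm.completion(model=model, messages=messages, stream=stream)

# e.g. chat name generation would take the fast path:
# manager = ProviderManager("anthropic")
# resp = manager.complete([{"role": "user", "content": "Name this chat"}], fast=True)
```

The design point is that callers stop passing a raw `model` string and instead ask the manager for the fast or smartest model, so provider choice and model policy live in one place.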