Commit f9c394a

Merge pull request #2155 from mito-ds/lite-llm

mito-ai: implement litellm

2 parents 3cdfc37 + 97159bf

46 files changed: +1964 additions, -426 deletions
.github/workflows/test-litellm-llm-providers.yml (new file, 88 additions & 0 deletions)

```yaml
name: Test - Mito AI Frontend Playwright with LiteLLM

on:
  push:
    branches: [ dev ]
    paths:
      - 'mito-ai/**'
      - 'tests/llm_providers_tests/litellm_llm_providers.spec.ts'
      - '.github/workflows/test-litellm-llm-providers.yml'
  pull_request:
    paths:
      - 'mito-ai/**'
      - 'tests/llm_providers_tests/litellm_llm_providers.spec.ts'
      - '.github/workflows/test-litellm-llm-providers.yml'
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  test-mitoai-frontend-jupyterlab-litellm:
    runs-on: ubuntu-24.04
    timeout-minutes: 60
    strategy:
      matrix:
        python-version: ['3.10', '3.12']
      fail-fast: false

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: pip
          cache-dependency-path: |
            mito-ai/setup.py
            tests/requirements.txt
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
          cache-dependency-path: mito-ai/package-lock.json
      - name: Upgrade pip
        run: |
          python -m pip install --upgrade pip
      - name: Install dependencies
        run: |
          cd tests
          bash mac-setup.sh
      - name: Install mitosheet-helper-enterprise
        run: |
          cd tests
          source venv/bin/activate
          pip install mitosheet-helper-enterprise
      - name: Install JupyterLab
        run: |
          python -m pip install jupyterlab
      - name: Install Node.js dependencies
        run: |
          cd mito-ai
          jlpm install
      - name: Setup JupyterLab
        run: |
          cd tests
          source venv/bin/activate
          pip install setuptools==68.0.0
          cd ../mito-ai
          jupyter labextension develop . --overwrite
          jupyter server extension enable --py mito_ai
      - name: Start a server and run LiteLLM provider tests
        run: |
          cd tests
          source venv/bin/activate
          jupyter lab --config jupyter_server_test_config.py &
          jlpm run test:litellm-llm-providers
        env:
          LITELLM_BASE_URL: ${{ secrets.LITELLM_BASE_URL }}
          LITELLM_MODELS: ${{ secrets.LITELLM_MODELS }}
          LITELLM_API_KEY: ${{ secrets.LITELLM_API_KEY }}
      - name: Upload test-results
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: mitoai-jupyterlab-playwright-litellm-report-${{ matrix.python-version }}-${{ github.run_id }}
          path: tests/playwright-report/
          retention-days: 14
```

mito-ai/.eslintignore (new file, 11 additions & 0 deletions)

```
node_modules
venv
dist
coverage
**/*.d.ts
tests
**/__tests__
ui-tests
lib
buildcache
*.tsbuildinfo
```
New file (248 additions & 0 deletions):

# Enterprise Deployment Guide

This guide explains how to configure Mito AI for enterprise deployments with strict data privacy and security requirements.

## Overview

Enterprise mode in Mito AI provides:

1. **LLM Model Lockdown**: AI calls ONLY go to IT-approved LLM models
2. **Telemetry Elimination**: No telemetry is sent to Mito servers
3. **User Protection**: End users cannot change to unapproved LLM models
4. **LiteLLM Support**: Optional support for LiteLLM endpoints when enterprise mode is enabled
## Enabling Enterprise Mode

Enterprise mode is enabled automatically when the `mitosheet-helper-enterprise` package is installed. This package must be installed by your IT team with appropriate permissions.

```bash
pip install mitosheet-helper-enterprise
```

**Note**: Enterprise mode does not lock users out; they can continue using the Mito server normally if LiteLLM is not configured.
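Detection by package installation can be sketched as a simple import check. This is an illustrative sketch only: the importable module name `mitosheet_helper_enterprise` is an assumption inferred from the pip package name, and the helper name is not Mito's actual code.

```python
import importlib.util

def is_enterprise_mode() -> bool:
    # Enterprise mode is keyed off the presence of the enterprise package.
    # NOTE: the module name below is an assumed import name for the
    # mitosheet-helper-enterprise pip package, shown for illustration.
    return importlib.util.find_spec("mitosheet_helper_enterprise") is not None
```

Because the check only asks whether the package is importable, users without admin access to the Python environment cannot flip it off.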
## LiteLLM Configuration (Optional)

When enterprise mode is enabled, you can optionally configure LiteLLM to route all AI calls to your approved LLM endpoint. LiteLLM configuration is **optional**; if it is not configured, users can continue using the normal Mito server flow.

### Prerequisites

1. **LiteLLM Server**: Your IT team must have a LiteLLM server running that exposes an OpenAI-compatible API
2. **API Compatibility**: The LiteLLM endpoint must be compatible with the OpenAI Chat Completions API specification
3. **Network Access**: End users must have network access to the LiteLLM server endpoint
4. **API Key Management**: Each end user must have their own API key for authentication with the LiteLLM server
### Environment Variables

Configure the following environment variables on the Jupyter server:

#### IT-Controlled Variables (Set by IT Team)

- **`LITELLM_BASE_URL`**: The base URL of your LiteLLM server endpoint
  - Example: `https://your-litellm-server.com`
  - Must be OpenAI-compatible
- **`LITELLM_MODELS`**: Comma-separated list of approved model names
  - Model names must include the provider prefix (e.g., `"openai/gpt-4o"`)
  - Example: `"openai/gpt-4o,openai/gpt-4o-mini,anthropic/claude-3-5-sonnet"`
  - Format: comma-separated string (whitespace is automatically trimmed)

#### User-Controlled Variables (Set by Each End User)

- **`LITELLM_API_KEY`**: The user's API key for authentication with the LiteLLM server
  - Each user sets their own API key
  - Keys are never sent to Mito servers
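The comma-separated `LITELLM_MODELS` format above can be parsed in a few lines of Python. A minimal sketch (the helper name `parse_litellm_models` is illustrative, not Mito's actual code):

```python
import os

def parse_litellm_models(raw: str) -> list[str]:
    """Split a comma-separated model string, trimming whitespace and dropping empties."""
    return [model.strip() for model in raw.split(",") if model.strip()]

# Read the IT-configured list from the environment (empty list if unset).
approved_models = parse_litellm_models(os.environ.get("LITELLM_MODELS", ""))
```

For example, `"openai/gpt-4o, openai/gpt-4o-mini"` parses to `['openai/gpt-4o', 'openai/gpt-4o-mini']`, which is why stray whitespace around the commas is harmless.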
### Example Configuration

#### Jupyter Server Configuration File

Create or update your Jupyter server configuration file (typically `~/.jupyter/jupyter_server_config.py` or `/etc/jupyter/jupyter_server_config.d/mito_ai_enterprise.json`):

```python
# For Python config file
import os
os.environ["LITELLM_BASE_URL"] = "https://your-litellm-server.com"
os.environ["LITELLM_MODELS"] = "openai/gpt-4o,openai/gpt-4o-mini"
```

Or for JSON config:

```json
{
  "ServerApp": {
    "environment": {
      "LITELLM_BASE_URL": "https://your-litellm-server.com",
      "LITELLM_MODELS": "openai/gpt-4o,openai/gpt-4o-mini"
    }
  }
}
```

#### User Environment Variables

Each end user should set their own API key in their environment, for example in their shell profile (`.bashrc`, `.zshrc`, etc.):

```bash
export LITELLM_API_KEY="sk-user-specific-api-key"
```
## Behavior

### When Enterprise Mode is Enabled

1. **Telemetry**: All telemetry is automatically disabled
2. **Model Selection**:
   - If LiteLLM is configured: users can only select from IT-approved models in `LITELLM_MODELS`
   - If LiteLLM is not configured: users can use standard models via the Mito server
3. **Model Validation**: The backend validates all model selections against the approved list
4. **UI Lockdown**: The frontend only displays approved models

### When Enterprise Mode is NOT Enabled

- LiteLLM environment variables are **ignored**
- Normal Mito AI behavior continues
- Standard model selection is available
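The backend validation described above amounts to a membership check against the approved list. A minimal sketch (the function name is illustrative, not Mito's actual code):

```python
def is_model_approved(model: str, approved_models: list[str]) -> bool:
    """Backend check: accept only models on the IT-approved list."""
    return model in approved_models

approved = ["openai/gpt-4o", "openai/gpt-4o-mini"]
print(is_model_approved("openai/gpt-4o", approved))   # True
print(is_model_approved("openai/gpt-5", approved))    # False: request would be rejected
```

Because this check runs server-side, it holds even if a user bypasses the frontend model selector.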
## Security Guarantees

1. **Defense in Depth**:
   - Backend validates all model selections (even if the frontend is bypassed)
   - Enterprise mode is determined by package installation (users cannot modify it without admin access)
   - Configuration environment variables are server-side only (users cannot modify them)
   - Frontend UI only shows approved models

2. **Telemetry Elimination**:
   - Early return in telemetry functions when enterprise mode is active
   - No analytics library calls are made
   - No network requests to external telemetry servers

3. **Model Lockdown** (when LiteLLM is configured):
   - Backend validates all model selections against the approved list
   - Backend rejects model change requests for unapproved models
   - Frontend shows only approved models in the model selector
   - All API calls go to the LiteLLM base URL

4. **API Key Management**:
   - Users set their own `LITELLM_API_KEY` environment variable for authentication
   - IT controls the LiteLLM endpoint and approved models; users control authentication
   - Keys are never sent to Mito servers
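The "early return" pattern in point 2 can be sketched as follows. This is a simplified stand-in with illustrative names, not Mito's actual telemetry code:

```python
ENTERPRISE_MODE = True  # in the real system, detected from the installed package

events_sent: list[str] = []  # stand-in for an external analytics service

def send_to_analytics(event_name: str) -> None:
    events_sent.append(event_name)  # placeholder for a real network call

def log_event(event_name: str) -> None:
    if ENTERPRISE_MODE:
        return  # early return: no analytics call, no network request
    send_to_analytics(event_name)

log_event("mito_ai_chat_opened")
assert events_sent == []  # nothing was recorded while enterprise mode is on
```

The guard sits at the top of every telemetry entry point, so no code path after it can reach the analytics library.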
## Verification

### Check Enterprise Mode Status

When you start Jupyter Lab, check the server logs for:

```
Enterprise mode enabled
LiteLLM configured: endpoint=https://your-litellm-server.com, models=['openai/gpt-4o', 'openai/gpt-4o-mini']
```

### Verify Model Selection

1. Open Mito AI chat in Jupyter Lab
2. Click on the model selector
3. Verify that only approved models from `LITELLM_MODELS` are displayed
4. Verify that you cannot select unapproved models

### Verify Telemetry Disabled

1. Open browser developer tools (Network tab)
2. Use Mito AI features
3. Verify that no requests are made to analytics/telemetry servers
## Troubleshooting

### Models Not Appearing

- **Check environment variables**: Ensure `LITELLM_BASE_URL` and `LITELLM_MODELS` are set correctly
- **Check enterprise mode**: Verify that `mitosheet-helper-enterprise` is installed
- **Check server logs**: Look for the enterprise mode and LiteLLM configuration messages
- **Restart Jupyter Lab**: Environment variables are read at server startup

### Invalid Model Errors

- **Check model format**: LiteLLM models must include the provider prefix (e.g., `"openai/gpt-4o"`)
- **Check model list**: Ensure the model is in the `LITELLM_MODELS` comma-separated list
- **Check API compatibility**: Verify that your LiteLLM endpoint supports the requested model

### API Connection Errors

- **Check network access**: Ensure the Jupyter server can reach `LITELLM_BASE_URL`
- **Check API key**: Verify that `LITELLM_API_KEY` is set correctly for the user
- **Check endpoint**: Verify that `LITELLM_BASE_URL` is correct and the server is running
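A quick way to test network access and the endpoint from the Jupyter server is a small reachability probe. This is a diagnostic sketch only; `check_litellm_reachable` is not part of Mito:

```python
import os
import urllib.error
import urllib.request

def check_litellm_reachable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the LiteLLM base URL answers HTTP at all (any status counts)."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # DNS failure, connection refused, timeout, etc.

if __name__ == "__main__":
    url = os.environ.get("LITELLM_BASE_URL", "")
    if url:
        print(f"{url}: reachable={check_litellm_reachable(url)}")
    else:
        print("LITELLM_BASE_URL is not set")
```

An HTTP error status still counts as "reachable" here, since it proves the network path and server are up; authentication problems are diagnosed separately via `LITELLM_API_KEY`.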
### Telemetry Still Sending

- **Check enterprise mode**: Verify that `mitosheet-helper-enterprise` is installed
- **Check server logs**: Look for the "Enterprise mode enabled" message
- **Restart Jupyter Lab**: Enterprise mode is detected at server startup
## API Compatibility Requirements

Your LiteLLM endpoint must be compatible with the OpenAI Chat Completions API. Specifically, it must support:

- **Endpoint**: `/v1/chat/completions` (or equivalent)
- **Method**: POST
- **Request Format**: OpenAI Chat Completions request format
- **Response Format**: OpenAI Chat Completions response format
- **Streaming**: Support for streaming responses (optional but recommended)
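For reference, a request body in that format looks like the following sketch. It only illustrates the wire format; the helper name is made up, and the URL/header comments describe the standard OpenAI convention rather than Mito's internals:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a Chat Completions request body in the OpenAI wire format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("openai/gpt-4o", "Say hello")
# In practice, POST json.dumps(body) to f"{LITELLM_BASE_URL}/v1/chat/completions"
# with the header:  Authorization: Bearer $LITELLM_API_KEY
print(json.dumps(body, indent=2))
```

If the endpoint accepts this shape and returns the matching OpenAI-style `choices` response, it satisfies the requirements above.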
### Verification Question for IT Admin

Before deploying, ask your IT admin:

> "Does your LiteLLM endpoint support the OpenAI Chat Completions API specification? Specifically, can it accept POST requests to `/v1/chat/completions` (or equivalent) with the standard OpenAI request format and return responses in the OpenAI response format?"
## Example Deployment

### Step 1: Install Enterprise Package

```bash
pip install mitosheet-helper-enterprise
```

### Step 2: Configure Jupyter Server

Create `/etc/jupyter/jupyter_server_config.d/mito_ai_enterprise.json`:

```json
{
  "ServerApp": {
    "environment": {
      "LITELLM_BASE_URL": "https://your-litellm-server.com",
      "LITELLM_MODELS": "openai/gpt-4o,openai/gpt-4o-mini"
    }
  }
}
```

### Step 3: User API Key Setup

Each user sets their API key in their environment:

```bash
export LITELLM_API_KEY="sk-user-api-key"
```

### Step 4: Restart Jupyter Lab

Restart Jupyter Lab to apply the configuration changes.

### Step 5: Verify

1. Check the server logs for the enterprise mode confirmation
2. Open Mito AI chat
3. Verify that only approved models are shown
4. Test a completion to verify that it uses the LiteLLM endpoint

## Support

For issues or questions about enterprise deployment, contact your IT administrator or Mito support.
