Commit 7786efc: mito-ai: update docs
1 parent: f9c394a

1 file changed: +7 / -168 lines

mito-ai/docs/enterprise-deployment.md

Lines changed: 7 additions & 168 deletions
@@ -9,7 +9,6 @@ Enterprise mode in Mito AI provides:
 1. **LLM Model Lockdown**: AI calls ONLY go to IT-approved LLM models
 2. **Telemetry Elimination**: No telemetry is sent to Mito servers
 3. **User Protection**: End users cannot change to unapproved LLM models
-4. **LiteLLM Support**: Optional support for LiteLLM endpoints when enterprise mode is enabled
 
 ## Enabling Enterprise Mode
 
@@ -19,19 +18,10 @@ Enterprise mode is automatically enabled when the `mitosheet-helper-enterprise`
 pip install mitosheet-helper-enterprise
 ```
 
-**Note**: Enterprise mode does not lock users out - they can continue using the Mito server normally if LiteLLM is not configured.
-
-## LiteLLM Configuration (Optional)
+## LiteLLM Configuration
 
 When enterprise mode is enabled, you can optionally configure LiteLLM to route all AI calls to your approved LLM endpoint. LiteLLM configuration is **optional** - if not configured, users can continue using the normal Mito server flow.
 
-### Prerequisites
-
-1. **LiteLLM Server**: Your IT team must have a LiteLLM server running that exposes an OpenAI-compatible API
-2. **API Compatibility**: The LiteLLM endpoint must be compatible with the OpenAI Chat Completions API specification
-3. **Network Access**: End users must have network access to the LiteLLM server endpoint
-4. **API Key Management**: Each end user must have their own API key for authentication with the LiteLLM server
-
 ### Environment Variables
 
 Configure the following environment variables on the Jupyter server:
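Enterprise mode is keyed off the presence of the `mitosheet-helper-enterprise` package rather than any user-editable setting. A minimal sketch of how such detection can work; `is_enterprise_mode` is an illustrative name, not necessarily Mito AI's actual function:

```python
import importlib.util

def is_enterprise_mode() -> bool:
    # Hypothetical sketch: enterprise mode is on exactly when the helper
    # package is importable in the Jupyter server's environment.
    return importlib.util.find_spec("mitosheet_helper_enterprise") is not None
```

Because the check is based on what is installed in the server environment, end users without admin access cannot toggle it.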
@@ -46,88 +36,32 @@ Configure the following environment variables on the Jupyter server:
   - Model names must include provider prefix (e.g., `"openai/gpt-4o"`)
   - Example: `"openai/gpt-4o,openai/gpt-4o-mini,anthropic/claude-3-5-sonnet"`
   - Format: Comma-separated string (whitespace is automatically trimmed)
+  - The first model in the list is the default model.
 
 #### User-Controlled Variables (Set by Each End User)
 
 - **`LITELLM_API_KEY`**: User's API key for authentication with the LiteLLM server
   - Each user sets their own API key
   - Keys are never sent to Mito servers
 
-### Example Configuration
-
-#### Jupyter Server Configuration File
-
-Create or update your Jupyter server configuration file (typically `~/.jupyter/jupyter_server_config.py` or `/etc/jupyter/jupyter_server_config.d/mito_ai_enterprise.json`):
-
-```python
-# For Python config file
-import os
-os.environ["LITELLM_BASE_URL"] = "https://your-litellm-server.com"
-os.environ["LITELLM_MODELS"] = "openai/gpt-4o,openai/gpt-4o-mini"
-```
-
-Or for JSON config:
-
-```json
-{
-  "ServerApp": {
-    "environment": {
-      "LITELLM_BASE_URL": "https://your-litellm-server.com",
-      "LITELLM_MODELS": "openai/gpt-4o,openai/gpt-4o-mini"
-    }
-  }
-}
-```
-
-#### User Environment Variables
-
-Each end user should set their own API key in their environment:
-
-```bash
-export LITELLM_API_KEY="sk-user-specific-api-key"
-```
-
-Or in their shell profile (`.bashrc`, `.zshrc`, etc.):
-
-```bash
-export LITELLM_API_KEY="sk-user-specific-api-key"
-```
-
-## Behavior
-
-### When Enterprise Mode is Enabled
-
-1. **Telemetry**: All telemetry is automatically disabled
-2. **Model Selection**:
-   - If LiteLLM is configured: Users can only select from IT-approved models in `LITELLM_MODELS`
-   - If LiteLLM is not configured: Users can use standard models via Mito server
-3. **Model Validation**: Backend validates all model selections against the approved list
-4. **UI Lockdown**: Frontend only displays approved models
-
-### When Enterprise Mode is NOT Enabled
-
-- LiteLLM environment variables are **ignored**
-- Normal Mito AI behavior continues
-- Standard model selection is available
-
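The model validation described above amounts to a membership check against the approved list before any completion request is made, so it holds even if the frontend is bypassed. A hedged sketch, with `validate_model` as an illustrative name:

```python
def validate_model(requested: str, approved: list[str]) -> str:
    """Reject any model that is not on the IT-approved list.

    Raises so the caller can surface an error in the UI instead of
    forwarding the request.
    """
    if requested not in approved:
        raise ValueError(
            f"Model {requested!r} is not in the approved list: {approved}"
        )
    return requested

APPROVED = ["openai/gpt-4o", "openai/gpt-4o-mini"]  # from LITELLM_MODELS
validate_model("openai/gpt-4o", APPROVED)           # accepted
# validate_model("openai/gpt-3.5-turbo", APPROVED)  # would raise ValueError
```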
 ## Security Guarantees
 
 1. **Defense in Depth**:
    - Backend validates all model selections (even if frontend is bypassed)
-   - Enterprise mode is determined by package installation (users cannot modify without admin access)
-   - Configuration environment variables are server-side only (users cannot modify)
    - Frontend UI only shows approved models
+   - All API calls go to the LiteLLM base URL
+   - If a user does not set a valid API key, requests are still never sent to the Mito server; the app shows an error message instead.
+
 
 2. **Telemetry Elimination**:
    - Early return in telemetry functions when enterprise mode is active
    - No analytics library calls made
    - No network requests to external telemetry servers
 
-3. **Model Lockdown** (when LiteLLM is configured):
+3. **Model Lockdown**:
    - Backend validates all model selections against approved list
    - Backend rejects model change requests for unapproved models
    - Frontend shows only approved models in model selector
-   - All API calls go to LiteLLM base URL
 
 4. **API Key Management**:
    - Users set their own `LITELLM_API_KEY` environment variable for authentication
@@ -150,99 +84,4 @@ LiteLLM configured: endpoint=https://your-litellm-server.com, models=['openai/gp
 1. Open Mito AI chat in Jupyter Lab
 2. Click on the model selector
 3. Verify only approved models from `LITELLM_MODELS` are displayed
-4. Verify you cannot select unapproved models
-
-### Verify Telemetry Disabled
-
-1. Open browser developer tools (Network tab)
-2. Use Mito AI features
-3. Verify no requests are made to analytics/telemetry servers
-
-## Troubleshooting
-
-### Models Not Appearing
-
-- **Check environment variables**: Ensure `LITELLM_BASE_URL` and `LITELLM_MODELS` are set correctly
-- **Check enterprise mode**: Verify `mitosheet-helper-enterprise` is installed
-- **Check server logs**: Look for enterprise mode and LiteLLM configuration messages
-- **Restart Jupyter Lab**: Environment variables are read at server startup
-
-### Invalid Model Errors
-
-- **Check model format**: LiteLLM models must include provider prefix (e.g., `"openai/gpt-4o"`)
-- **Check model list**: Ensure the model is in the `LITELLM_MODELS` comma-separated list
-- **Check API compatibility**: Verify your LiteLLM endpoint supports the requested model
-
-### API Connection Errors
-
-- **Check network access**: Ensure the Jupyter server can reach `LITELLM_BASE_URL`
-- **Check API key**: Verify `LITELLM_API_KEY` is set correctly for the user
-- **Check endpoint**: Verify `LITELLM_BASE_URL` is correct and the server is running
-
-### Telemetry Still Sending
-
-- **Check enterprise mode**: Verify `mitosheet-helper-enterprise` is installed
-- **Check server logs**: Look for "Enterprise mode enabled" message
-- **Restart Jupyter Lab**: Enterprise mode is detected at server startup
-
-## API Compatibility Requirements
-
-Your LiteLLM endpoint must be compatible with the OpenAI Chat Completions API. Specifically, it must support:
-
-- **Endpoint**: `/v1/chat/completions` (or equivalent)
-- **Method**: POST
-- **Request Format**: OpenAI Chat Completions request format
-- **Response Format**: OpenAI Chat Completions response format
-- **Streaming**: Support for streaming responses (optional but recommended)
-
-### Verification Question for IT Admin
-
-Before deploying, ask your IT admin:
-
-> "Does your LiteLLM endpoint support the OpenAI Chat Completions API specification? Specifically, can it accept POST requests to `/v1/chat/completions` (or equivalent) with the standard OpenAI request format and return responses in the OpenAI response format?"
-
-## Example Deployment
-
-### Step 1: Install Enterprise Package
-
-```bash
-pip install mitosheet-helper-enterprise
-```
-
-### Step 2: Configure Jupyter Server
-
-Create `/etc/jupyter/jupyter_server_config.d/mito_ai_enterprise.json`:
-
-```json
-{
-  "ServerApp": {
-    "environment": {
-      "LITELLM_BASE_URL": "https://your-litellm-server.com",
-      "LITELLM_MODELS": "openai/gpt-4o,openai/gpt-4o-mini"
-    }
-  }
-}
-```
-
-### Step 3: User API Key Setup
-
-Each user sets their API key in their environment:
-
-```bash
-export LITELLM_API_KEY="sk-user-api-key"
-```
-
-### Step 4: Restart Jupyter Lab
-
-Restart Jupyter Lab to apply configuration changes.
-
-### Step 5: Verify
-
-1. Check server logs for enterprise mode confirmation
-2. Open Mito AI chat
-3. Verify only approved models are shown
-4. Test a completion to verify it uses LiteLLM endpoint
-
-## Support
-
-For issues or questions about enterprise deployment, contact your IT administrator or Mito support.
+4. Verify you cannot select unapproved models
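The OpenAI Chat Completions compatibility requirement can be smoke-tested with a single minimal request. A standard-library sketch; the URL, key, and model below are placeholders, and any OpenAI-compatible LiteLLM endpoint should accept this POST body at `/v1/chat/completions`:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str) -> urllib.request.Request:
    """Build a minimal OpenAI-style Chat Completions POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

if __name__ == "__main__":
    # Placeholder values; point these at your real LiteLLM deployment.
    req = build_chat_request(
        "https://your-litellm-server.com", "sk-user-specific-api-key", "openai/gpt-4o"
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

A 200 response in the standard OpenAI shape confirms the endpoint is compatible; an auth error points at the user's `LITELLM_API_KEY`.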
