mito-ai/docs/enterprise-deployment.md

Enterprise mode in Mito AI provides:

1. **LLM Model Lockdown**: AI calls ONLY go to IT-approved LLM models
2. **Telemetry Elimination**: No telemetry is sent to Mito servers
3. **User Protection**: End users cannot change to unapproved LLM models
4. **LiteLLM Support**: Optional support for LiteLLM endpoints when enterprise mode is enabled
## Enabling Enterprise Mode

Enterprise mode is automatically enabled when the `mitosheet-helper-enterprise` package is installed:

```
pip install mitosheet-helper-enterprise
```

**Note**: Enterprise mode does not lock users out - they can continue using the Mito server normally if LiteLLM is not configured.
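As a quick sanity check, an admin can confirm the helper package is visible to the Jupyter environment. This is a minimal sketch, assuming the installed module is importable as `mitosheet_helper_enterprise` (the import name, and this check itself, are assumptions - Mito AI's internal detection may differ):

```python
import importlib.util

def enterprise_mode_enabled() -> bool:
    # Enterprise mode is keyed off the presence of the helper package;
    # the module name below is an assumption based on the pip package name.
    return importlib.util.find_spec("mitosheet_helper_enterprise") is not None
```

If this returns `False` after installation, check that the package was installed into the same environment that runs the Jupyter server.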
## LiteLLM Configuration (Optional)

When enterprise mode is enabled, you can configure LiteLLM to route all AI calls to your approved LLM endpoint. LiteLLM configuration is **optional** - if it is not configured, users continue using the normal Mito server flow.

### Prerequisites

1. **LiteLLM Server**: Your IT team must have a LiteLLM server running that exposes an OpenAI-compatible API
2. **API Compatibility**: The LiteLLM endpoint must be compatible with the OpenAI Chat Completions API specification
3. **Network Access**: End users must have network access to the LiteLLM server endpoint
4. **API Key Management**: Each end user must have their own API key for authentication with the LiteLLM server
### Environment Variables

Configure the following environment variables on the Jupyter server:

- Model names must include a provider prefix (e.g., `"openai/gpt-4o"`)
- Format: Comma-separated string (whitespace is automatically trimmed)
- The first model in the list is the default model
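The parsing rules above can be sketched as follows (`parse_models` is an illustrative helper, not Mito's actual code):

```python
def parse_models(raw: str) -> list[str]:
    """Split a comma-separated model string, trimming whitespace around each name."""
    return [name.strip() for name in raw.split(",") if name.strip()]

models = parse_models("openai/gpt-4o, openai/gpt-4o-mini ,anthropic/claude-3-5-sonnet")
default_model = models[0]  # the first model in the list is the default
```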
#### User-Controlled Variables (Set by Each End User)

**`LITELLM_API_KEY`**: User's API key for authentication with the LiteLLM server

- Each user sets their own API key
- Keys are never sent to Mito servers
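A user can verify that their key is visible to the Jupyter process with a quick check like this (a sketch; the key would typically be set beforehand via `export LITELLM_API_KEY=...` in the shell profile):

```python
import os

# LITELLM_API_KEY must be set in the environment that launches Jupyter,
# e.g. exported in ~/.bashrc before starting the server.
api_key = os.environ.get("LITELLM_API_KEY")
if api_key is None:
    print("LITELLM_API_KEY is not set - LiteLLM requests will fail to authenticate")
```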
### Example Configuration

#### Jupyter Server Configuration File

Create or update your Jupyter server configuration file (typically `~/.jupyter/jupyter_server_config.py` or `/etc/jupyter/jupyter_server_config.d/mito_ai_enterprise.json`):
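The original example configuration is not reproduced here. As an illustrative sketch only, a `jupyter_server_config.py` might export the LiteLLM settings as environment variables - note that the variable names `LITELLM_BASE_URL` and `LITELLM_MODELS` are assumptions, not documented configuration keys; use the names specified by your Mito AI version:

```python
# Illustrative sketch of ~/.jupyter/jupyter_server_config.py; the variable
# names below are assumptions, not Mito AI's documented configuration keys.
import os

os.environ.setdefault("LITELLM_BASE_URL", "https://litellm.internal.example.com")
os.environ.setdefault("LITELLM_MODELS", "openai/gpt-4o, openai/gpt-4o-mini")
```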
- **Request Format**: OpenAI Chat Completions request format
- **Response Format**: OpenAI Chat Completions response format
- **Streaming**: Support for streaming responses (optional but recommended)
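For reference, a minimal request/response pair in the OpenAI Chat Completions format looks like this (field values are illustrative):

```python
import json

# Minimal Chat Completions request body the LiteLLM endpoint must accept.
request_body = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,  # True if the endpoint supports streaming responses
}

# Shape of the (non-streaming) response the endpoint must return.
response_body = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi!"},
            "finish_reason": "stop",
        }
    ],
}

payload = json.dumps(request_body)  # sent as the POST body
```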
### Verification Question for IT Admin

Before deploying, ask your IT admin:

> "Does your LiteLLM endpoint support the OpenAI Chat Completions API specification? Specifically, can it accept POST requests to `/v1/chat/completions` (or equivalent) with the standard OpenAI request format and return responses in the OpenAI response format?"