
Commit c659b7e

Merge branch 'main' into akshoop/fastuuid-dep-make-optional
2 parents 93e127c + 7216983

83 files changed, +4185 -622 lines changed


.github/workflows/test-mcp.yml

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
+name: LiteLLM MCP Tests (folder - tests/mcp_tests)
+
+on:
+  pull_request:
+    branches: [ main ]
+
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    timeout-minutes: 25
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Thank You Message
+        run: |
+          echo "### 🙏 Thank you for contributing to LiteLLM!" >> $GITHUB_STEP_SUMMARY
+          echo "Your PR is being tested now. We appreciate your help in making LiteLLM better!" >> $GITHUB_STEP_SUMMARY
+
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.12'
+
+      - name: Install Poetry
+        uses: snok/install-poetry@v1
+
+      - name: Install dependencies
+        run: |
+          poetry install --with dev,proxy-dev --extras "proxy semantic-router"
+          poetry run pip install "pytest==7.3.1"
+          poetry run pip install "pytest-retry==1.6.3"
+          poetry run pip install "pytest-cov==5.0.0"
+          poetry run pip install "pytest-asyncio==0.21.1"
+          poetry run pip install "respx==0.22.0"
+          poetry run pip install "pydantic==2.10.2"
+          poetry run pip install "mcp==1.10.1"
+          poetry run pip install pytest-xdist
+
+      - name: Setup litellm-enterprise as local package
+        run: |
+          cd enterprise
+          python -m pip install -e .
+          cd ..
+
+      - name: Run MCP tests
+        run: |
+          poetry run pytest tests/mcp_tests -x -vv -n 4 --cov=litellm --cov-report=xml --durations=5
Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
+#!/usr/bin/env python3
+"""
+Example: Using CLI token with LiteLLM SDK
+
+This example shows how to use the CLI authentication token
+in your Python scripts after running `litellm-proxy login`.
+"""
+
+from textwrap import indent
+import litellm
+LITELLM_BASE_URL = "http://localhost:4000/"
+
+
+def main():
+    """Using CLI token with LiteLLM SDK"""
+    print("🚀 Using CLI Token with LiteLLM SDK")
+    print("=" * 40)
+    #litellm._turn_on_debug()
+
+    # Get the CLI token
+    api_key = litellm.get_litellm_gateway_api_key()
+
+    if not api_key:
+        print("❌ No CLI token found. Please run 'litellm-proxy login' first.")
+        return
+
+    print("✅ Found CLI token.")
+
+    available_models = litellm.get_valid_models(
+        check_provider_endpoint=True,
+        custom_llm_provider="litellm_proxy",
+        api_key=api_key,
+        api_base=LITELLM_BASE_URL
+    )
+
+    print("✅ Available models:")
+    if available_models:
+        for i, model in enumerate(available_models, 1):
+            print(f"  {i:2d}. {model}")
+    else:
+        print("  No models available")
+
+    # Use with LiteLLM
+    try:
+        response = litellm.completion(
+            model="litellm_proxy/gemini/gemini-2.5-flash",
+            messages=[{"role": "user", "content": "Hello from CLI token!"}],
+            api_key=api_key,
+            base_url=LITELLM_BASE_URL
+        )
+        print(f"✅ LLM Response: {response.model_dump_json(indent=4)}")
+    except Exception as e:
+        print(f"❌ Error: {e}")
+
+
+if __name__ == "__main__":
+    main()
+
+    print("\n💡 Tips:")
+    print("1. Run 'litellm-proxy login' to authenticate first")
+    print("2. Replace 'https://your-proxy.com' with your actual proxy URL")
+    print("3. The token is stored locally at ~/.litellm/token.json")

docs/my-website/docs/completion/provider_specific_params.md

Lines changed: 5 additions & 5 deletions
@@ -423,16 +423,17 @@ model_list:
 curl -X POST 'http://0.0.0.0:4000/chat/completions' \
 -H 'Content-Type: application/json' \
 -H 'Authorization: Bearer sk-1234' \
--D '{
+-d '{
   "model": "llama-3-8b-instruct",
   "messages": [
     {
      "role": "user",
      "content": "What'\''s the weather like in Boston today?"
    }
  ],
-  "adapater_id": "my-special-adapter-id" # 👈 PROVIDER-SPECIFIC PARAM
-}'
+  "adapater_id": "my-special-adapter-id"
+}'
+```
 
 ## Provider-Specific Metadata Parameters
 
@@ -482,5 +483,4 @@ response = litellm.completion(
 ```
 
 </TabItem>
-</Tabs>
-```
+</Tabs>
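As an aside to the doc fix above: the same pass-through also works in the LiteLLM Python SDK, where provider-specific params are supplied as extra kwargs. A minimal sketch, assuming a Predibase-backed model as in the surrounding doc; the model name is an assumption, and `adapater_id` is spelled exactly as the doc spells it:

```python
# Minimal sketch, not part of this commit: provider-specific params pass
# through litellm.completion as extra kwargs. The model name and the
# "adapater_id" param mirror the curl example above and are assumptions.
import litellm

response = litellm.completion(
    model="predibase/llama-3-8b-instruct",
    messages=[{"role": "user", "content": "What's the weather like in Boston today?"}],
    adapater_id="my-special-adapter-id",  # 👈 provider-specific param, forwarded as-is
)
print(response.choices[0].message.content)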

docs/my-website/docs/completion/usage.md

Lines changed: 1 addition & 0 deletions
@@ -26,6 +26,7 @@ response = completion(
 
 print(response.usage)
 ```
+> **Note:** LiteLLM supports endpoint bridging: if a model does not natively support a requested endpoint, LiteLLM will automatically route the call to the correct supported endpoint (such as bridging `/chat/completions` to `/responses`, or vice versa) based on the model's `mode` set in `model_prices_and_context_window`.
 
 ## Streaming Usage

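To make the added bridging note concrete, here is a minimal sketch. The model name is an assumption (a model whose `mode` is `responses`, not something named in this commit); the bridging itself is transparent to the caller:

```python
# Minimal sketch of the bridging behavior described above. Assumes
# "openai/o1-pro" is listed with mode "responses" in
# model_prices_and_context_window, so this /chat/completions-style call
# is routed to the /responses endpoint under the hood.
from litellm import completion

response = completion(
    model="openai/o1-pro",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.usage)  # usage is reported even when the call was bridged
```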
docs/my-website/docs/enterprise.md

Lines changed: 5 additions & 0 deletions
@@ -1,6 +1,11 @@
 import Image from '@theme/IdealImage';
 
 # Enterprise
+
+:::info
+✨ SSO is free for up to 5 users. After that, an enterprise license is required. [Get Started with Enterprise here](https://www.litellm.ai/enterprise)
+:::
+
 For companies that need SSO, user management and professional support for LiteLLM Proxy
 
 :::info

docs/my-website/docs/fine_tuning.md

Lines changed: 2 additions & 0 deletions
@@ -13,6 +13,8 @@ This is an Enterprise only endpoint [Get Started with Enterprise here](https://c
 | Feature | Supported | Notes |
 |-------|-------|-------|
 | Supported Providers | OpenAI, Azure OpenAI, Vertex AI | - |
+
+#### ⚡️See an exhaustive list of supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
 | Cost Tracking | 🟡 | [Let us know if you need this](https://github.com/BerriAI/litellm/issues) |
 | Logging | ✅ | Works across all logging integrations |

docs/my-website/docs/getting_started.md

Lines changed: 2 additions & 1 deletion
@@ -32,7 +32,8 @@ Next Steps 👉 [Call all supported models - e.g. Claude-2, Llama2-70b, etc.](./
 More details 👉
 
 - [Completion() function details](./completion/)
-- [All supported models / providers on LiteLLM](./providers/)
+- [Overview of supported models / providers on LiteLLM](./providers/)
+- [Search all models / providers](https://models.litellm.ai/)
 - [Build your own OpenAI proxy](https://github.com/BerriAI/liteLLM-proxy/tree/main)
 
 ## streaming

docs/my-website/docs/image_edits.md

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,9 @@ LiteLLM provides image editing functionality that maps to OpenAI's `/images/edit
 | Supported LiteLLM Proxy Versions | 1.71.1+ | |
 | Supported LLM providers | **OpenAI** | Currently only `openai` is supported |
 
+#### ⚡️See all supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
+
+
 ## Usage
 
 ### LiteLLM Python SDK
docs/my-website/docs/image_generation.md

Lines changed: 2 additions & 0 deletions
@@ -279,6 +279,8 @@ print(f"response: {response}")
 
 ## Supported Providers
 
+#### ⚡️See all supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
+
 | Provider | Documentation Link |
 |----------|-------------------|
 | OpenAI | [OpenAI Image Generation →](./providers/openai) |

docs/my-website/docs/index.md

Lines changed: 9 additions & 0 deletions
@@ -524,6 +524,15 @@ try:
 except OpenAIError as e:
     print(e)
 ```
+### See How LiteLLM Transforms Your Requests
+
+Want to understand how LiteLLM parses and normalizes your LLM API requests? Use the `/utils/transform_request` endpoint to see exactly how your request is transformed internally.
+
+You can try it out now directly on our Demo App!
+Go to the [LiteLLM API docs for transform_request](https://litellm-api.up.railway.app/#/llm%20utils/transform_request_utils_transform_request_post)
+
+LiteLLM will show you the normalized, provider-agnostic version of your request. This is useful for debugging, learning, and understanding how LiteLLM handles different providers and options.
+
 
 ### Logging Observability - Log LLM Input/Output ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
 LiteLLM exposes pre defined callbacks to send data to Lunary, MLflow, Langfuse, Helicone, Promptlayer, Traceloop, Slack
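For the doc addition above, a minimal sketch of calling the endpoint with `requests`. The payload shape (`call_type` plus `request_body`) is an assumption based on the Swagger page linked in the doc; treat that page as the authoritative schema:

```python
# Minimal sketch: ask a running LiteLLM proxy how it would normalize a
# request. The payload fields below are assumptions taken from the
# Swagger page linked above, not from this commit.
import requests

resp = requests.post(
    "https://litellm-api.up.railway.app/utils/transform_request",
    json={
        "call_type": "completion",
        "request_body": {
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Hello!"}],
        },
    },
    timeout=30,
)
print(resp.json())  # the normalized, provider-agnostic request
```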
