Commit af151e6

Merge branch 'main' into doc-updates-sept-2025
2 parents 4142ddc + b7803bc commit af151e6

File tree

63 files changed: +5314 additions, −494 deletions

.github/workflows/test-mcp.yml

Lines changed: 48 additions & 0 deletions

```yaml
name: LiteLLM MCP Tests (folder - tests/mcp_tests)

on:
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 25

    steps:
      - uses: actions/checkout@v4

      - name: Thank You Message
        run: |
          echo "### 🙏 Thank you for contributing to LiteLLM!" >> $GITHUB_STEP_SUMMARY
          echo "Your PR is being tested now. We appreciate your help in making LiteLLM better!" >> $GITHUB_STEP_SUMMARY

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.12'

      - name: Install Poetry
        uses: snok/install-poetry@v1

      - name: Install dependencies
        run: |
          poetry install --with dev,proxy-dev --extras "proxy semantic-router"
          poetry run pip install "pytest==7.3.1"
          poetry run pip install "pytest-retry==1.6.3"
          poetry run pip install "pytest-cov==5.0.0"
          poetry run pip install "pytest-asyncio==0.21.1"
          poetry run pip install "respx==0.22.0"
          poetry run pip install "pydantic==2.10.2"
          poetry run pip install "mcp==1.10.1"
          poetry run pip install pytest-xdist

      - name: Setup litellm-enterprise as local package
        run: |
          cd enterprise
          python -m pip install -e .
          cd ..

      - name: Run MCP tests
        run: |
          poetry run pytest tests/mcp_tests -x -vv -n 4 --cov=litellm --cov-report=xml --durations=5
```
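For context, a minimal sketch of the kind of test this workflow would collect from `tests/mcp_tests`. The file name and test body are hypothetical illustrations, not part of this commit; the only assumption is the `pytest-asyncio` plugin the workflow installs above:

```python
# tests/mcp_tests/test_example.py (hypothetical file, shown for illustration).
# The workflow installs pytest-asyncio 0.21.1, so async tests carry the
# asyncio marker; pytest-xdist's -n 4 flag runs test files in parallel.
import pytest


@pytest.mark.asyncio
async def test_smoke():
    # Placeholder assertion; real MCP tests exercise LiteLLM's MCP integration.
    assert True
```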
Lines changed: 62 additions & 0 deletions

```python
#!/usr/bin/env python3
"""
Example: Using CLI token with LiteLLM SDK

This example shows how to use the CLI authentication token
in your Python scripts after running `litellm-proxy login`.
"""

import litellm

LITELLM_BASE_URL = "http://localhost:4000/"


def main():
    """Using CLI token with LiteLLM SDK"""
    print("🚀 Using CLI Token with LiteLLM SDK")
    print("=" * 40)
    # litellm._turn_on_debug()  # uncomment for verbose request logging

    # Get the CLI token stored by `litellm-proxy login`
    api_key = litellm.get_litellm_gateway_api_key()

    if not api_key:
        print("❌ No CLI token found. Please run 'litellm-proxy login' first.")
        return

    print("✅ Found CLI token.")

    # List the models the proxy exposes for this key
    available_models = litellm.get_valid_models(
        check_provider_endpoint=True,
        custom_llm_provider="litellm_proxy",
        api_key=api_key,
        api_base=LITELLM_BASE_URL,
    )

    print("✅ Available models:")
    if available_models:
        for i, model in enumerate(available_models, 1):
            print(f"  {i:2d}. {model}")
    else:
        print("  No models available")

    # Use the token for a completion routed through the proxy
    try:
        response = litellm.completion(
            model="litellm_proxy/gemini/gemini-2.5-flash",
            messages=[{"role": "user", "content": "Hello from CLI token!"}],
            api_key=api_key,
            base_url=LITELLM_BASE_URL,
        )
        print(f"✅ LLM Response: {response.model_dump_json(indent=4)}")
    except Exception as e:
        print(f"❌ Error: {e}")


if __name__ == "__main__":
    main()

    print("\n💡 Tips:")
    print("1. Run 'litellm-proxy login' to authenticate first")
    print("2. Replace LITELLM_BASE_URL with your actual proxy URL")
    print("3. The token is stored locally at ~/.litellm/token.json")
```

docs/my-website/docs/completion/provider_specific_params.md

Lines changed: 54 additions & 4 deletions
````diff
@@ -423,14 +423,64 @@ model_list:
 curl -X POST 'http://0.0.0.0:4000/chat/completions' \
 -H 'Content-Type: application/json' \
 -H 'Authorization: Bearer sk-1234' \
--D '{
+-d '{
     "model": "llama-3-8b-instruct",
     "messages": [
         {
             "role": "user",
             "content": "What'\''s the weather like in Boston today?"
         }
     ],
-    "adapater_id": "my-special-adapter-id" # 👈 PROVIDER-SPECIFIC PARAM
-}'
-```
+    "adapater_id": "my-special-adapter-id"
+}'
+```
+
+## Provider-Specific Metadata Parameters
+
+| Provider | Parameter | Use Case |
+|----------|-----------|----------|
+| **AWS Bedrock** | `requestMetadata` | Cost attribution, logging |
+| **Gemini/Vertex AI** | `labels` | Resource labeling |
+| **Anthropic** | `metadata` | User identification |
+
+<Tabs>
+<TabItem value="bedrock" label="AWS Bedrock">
+
+```python
+import litellm
+
+response = litellm.completion(
+    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
+    messages=[{"role": "user", "content": "Hello!"}],
+    requestMetadata={"cost_center": "engineering"}
+)
+```
+
+</TabItem>
+<TabItem value="gemini" label="Gemini/Vertex AI">
+
+```python
+import litellm
+
+response = litellm.completion(
+    model="vertex_ai/gemini-pro",
+    messages=[{"role": "user", "content": "Hello!"}],
+    labels={"environment": "production"}
+)
+```
+
+</TabItem>
+<TabItem value="anthropic" label="Anthropic">
+
+```python
+import litellm
+
+response = litellm.completion(
+    model="anthropic/claude-3-sonnet-20240229",
+    messages=[{"role": "user", "content": "Hello!"}],
+    metadata={"user_id": "user123"}
+)
+```
+
+</TabItem>
+</Tabs>
````
