Commit 000d367: Update docs
Author: Motta Kin
1 parent: 7463d3e

File tree: 3 files changed (+29, -6 lines)


docs/api/providers.md (2 additions, 0 deletions)

```diff
@@ -39,3 +39,5 @@
 ::: pydantic_ai.providers.moonshotai.MoonshotAIProvider

 ::: pydantic_ai.providers.ollama.OllamaProvider
+
+::: pydantic_ai.providers.litellm.LiteLLMProvider
```

docs/models/openai.md (25 additions, 0 deletions)

````diff
@@ -563,3 +563,28 @@ result = agent.run_sync('What is the capital of France?')
 print(result.output)
 #> The capital of France is Paris.
 ```
+
+### LiteLLM
+
+To use [LiteLLM](https://www.litellm.ai/), set the configs as outlined in the [LiteLLM docs](https://docs.litellm.ai/docs/set_keys). Which configs you need to set depends on your setup; for example, if you are using a LiteLLM proxy server, you need to set the `api_base` and `api_key` configs.
+
+Once you have the configs, use the [`LiteLLMProvider`][pydantic_ai.providers.litellm.LiteLLMProvider] as follows:
+
+```python
+from pydantic_ai import Agent
+from pydantic_ai.models.openai import OpenAIChatModel
+from pydantic_ai.providers.litellm import LiteLLMProvider
+
+model = OpenAIChatModel(
+    'openai/gpt-3.5-turbo',
+    provider=LiteLLMProvider(
+        api_base='<litellm-api-base-url>',
+        api_key='<litellm-api-key>'
+    )
+)
+agent = Agent(model)
+
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+...
+```
````
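The `'openai/gpt-3.5-turbo'` string in the example above follows LiteLLM's `provider/model` naming convention, where the prefix selects the upstream provider. As an illustration only (this helper is not part of pydantic-ai or LiteLLM), the convention can be sketched as:

```python
from typing import Optional, Tuple

def split_litellm_model(name: str) -> Tuple[Optional[str], str]:
    """Split a LiteLLM-style model name into (provider, model).

    A bare model name with no '/' has no provider prefix.
    """
    provider, sep, model = name.partition('/')
    return (provider, model) if sep else (None, name)

print(split_litellm_model('openai/gpt-3.5-turbo'))
# → ('openai', 'gpt-3.5-turbo')
```

A name without a prefix, such as `'gpt-4o'`, would yield `(None, 'gpt-4o')`, leaving provider resolution to LiteLLM's own defaults.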

pydantic_ai_slim/pydantic_ai/providers/litellm.py (2 additions, 6 deletions)

```diff
@@ -125,18 +125,14 @@ def __init__(
             self._client = openai_client
             return

-        # Use api_base if provided, otherwise use a generic base URL
-        # LiteLLM doesn't actually use this URL - it routes internally
-        base_url = api_base or 'https://api.litellm.ai/v1'
-
         # Create OpenAI client that will be used with LiteLLM's completion function
         # The actual API calls will be intercepted and routed through LiteLLM
         if http_client is not None:
             self._client = AsyncOpenAI(
-                base_url=base_url, api_key=api_key or 'litellm-placeholder', http_client=http_client
+                base_url=api_base, api_key=api_key or 'litellm-placeholder', http_client=http_client
             )
         else:
             http_client = cached_async_http_client(provider='litellm')
             self._client = AsyncOpenAI(
-                base_url=base_url, api_key=api_key or 'litellm-placeholder', http_client=http_client
+                base_url=api_base, api_key=api_key or 'litellm-placeholder', http_client=http_client
             )
```
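The net effect of this change is that `api_base` is now forwarded to the client unchanged (with `None` left for the client's own default handling) instead of being replaced by a placeholder URL. A minimal sketch of the resulting construction logic, using a stand-in dataclass rather than the real `AsyncOpenAI` client:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeClient:
    # Stand-in for AsyncOpenAI, holding only the two fields of interest.
    base_url: Optional[str]
    api_key: str

def make_client(api_base: Optional[str], api_key: Optional[str]) -> FakeClient:
    # After this commit: api_base passes through as-is; only the api_key
    # still falls back to the 'litellm-placeholder' sentinel.
    return FakeClient(base_url=api_base, api_key=api_key or 'litellm-placeholder')

print(make_client('http://localhost:4000', None))
```

With an explicit `api_base` the client targets that URL; with `api_base=None` no fabricated base URL is injected, which is the behavior the removed comment block used to work around.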
