# Using any model via LiteLLM

!!! note

    The LiteLLM integration is in beta. You may run into issues with some model providers, especially smaller ones. Please report any issues via [GitHub issues](https://github.com/openai/openai-agents-python/issues) and we'll fix them quickly.

[LiteLLM](https://docs.litellm.ai/docs/) is a library that allows you to use 100+ models via a single interface. We've added a LiteLLM integration so that you can use any AI model in the Agents SDK.

## Setup

You'll need to ensure `litellm` is available. You can do this by installing the optional `litellm` dependency group:

```bash
pip install "openai-agents[litellm]"
```

Once done, you can use [`LitellmModel`][agents.extensions.models.litellm_model.LitellmModel] in any agent.

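For example, a minimal agent wired up through LiteLLM might look like the sketch below (the model name and API key are placeholders):

```python
from agents import Agent
from agents.extensions.models.litellm_model import LitellmModel

# Any "provider/model" string supported by LiteLLM can be used here.
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620", api_key="your-api-key"),
)
```
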
## Example

This is a fully working example. When you run it, you'll be prompted for a model name and an API key. For example, you could enter:

- `openai/gpt-4.1` for the model, and your OpenAI API key
- `anthropic/claude-3-5-sonnet-20240620` for the model, and your Anthropic API key
- and so on

For a full list of the models supported in LiteLLM, see the [LiteLLM providers docs](https://docs.litellm.ai/docs/providers).

```python
from __future__ import annotations

import asyncio

from agents import Agent, Runner, function_tool
from agents.extensions.models.litellm_model import LitellmModel


@function_tool
def get_weather(city: str) -> str:
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main(model: str, api_key: str):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model=model, api_key=api_key),
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    # Read the model and API key from CLI args, falling back to interactive prompts.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=False)
    parser.add_argument("--api-key", type=str, required=False)
    args = parser.parse_args()

    model = args.model
    if not model:
        model = input("Enter a model name for LiteLLM: ")

    api_key = args.api_key
    if not api_key:
        api_key = input("Enter an API key for LiteLLM: ")

    asyncio.run(main(model, api_key))
```
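
If you save the script as, say, `litellm_example.py` (the filename is just an example), you can skip the interactive prompts by passing the flags directly:

```bash
python litellm_example.py --model openai/gpt-4.1 --api-key your-api-key
```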

## Tracking usage data

If you want LiteLLM responses to populate the Agents SDK usage metrics, pass `ModelSettings(include_usage=True)` when creating your agent.

```python
from agents import Agent, ModelSettings
from agents.extensions.models.litellm_model import LitellmModel

agent = Agent(
    name="Assistant",
    model=LitellmModel(model="your/model", api_key="..."),
    model_settings=ModelSettings(include_usage=True),
)
```

With `include_usage=True`, LiteLLM requests report token and request counts through `result.context_wrapper.usage`, just like the built-in OpenAI models.
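
For example, here is a sketch of reading those counts after a run, reusing the `agent` defined above:

```python
from agents import Runner

# `agent` is the include_usage-enabled agent from the snippet above.
result = Runner.run_sync(agent, "Say hello in one short sentence.")
usage = result.context_wrapper.usage
print(f"requests: {usage.requests}")
print(f"input tokens: {usage.input_tokens}")
print(f"output tokens: {usage.output_tokens}")
print(f"total tokens: {usage.total_tokens}")
```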

## Using tools with structured outputs

Some models accessed via LiteLLM (particularly Google Gemini) don't natively support using tools and structured outputs simultaneously. For these models, enable prompt injection:

```python
from pydantic import BaseModel

from agents import Agent, function_tool
from agents.extensions.models.litellm_model import LitellmModel


class Report(BaseModel):
    summary: str
    confidence: float


@function_tool
def analyze_data(query: str) -> dict:
    return {"result": f"Analysis of {query}"}


agent = Agent(
    name="Analyst",
    model=LitellmModel(model="gemini/gemini-1.5-flash"),
    tools=[analyze_data],
    output_type=Report,
    enable_structured_output_with_tools=True,  # Required for Gemini
)
```

The `enable_structured_output_with_tools` parameter enables a workaround that injects JSON formatting instructions into the system prompt instead of using the provider's native structured-output API. This allows models like Gemini to return structured outputs even when using tools.
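
For instance, a quick sketch of running the agent above and reading the parsed result (reusing the `agent` from the previous snippet):

```python
from agents import Runner

# `agent` is the Gemini agent defined above; final_output is parsed into Report.
result = Runner.run_sync(agent, "Analyze last quarter's sales figures")
report = result.final_output
print(report.summary, report.confidence)
```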

See the [prompt injection documentation](structured_output_with_tools.md) for complete details.