
Commit 666bcac

Merge pull request #14769 from TeddyAmkie/doc-updates-sept-2025
Doc updates sept 2025
2 parents b7803bc + af151e6 commit 666bcac

26 files changed: +430 −269 lines

docs/my-website/docs/completion/usage.md

Lines changed: 1 addition & 0 deletions
@@ -26,6 +26,7 @@ response = completion(
 
 print(response.usage)
 ```
+> **Note:** LiteLLM supports endpoint bridging: if a model does not natively support a requested endpoint, LiteLLM will automatically route the call to the correct supported endpoint (such as bridging `/chat/completions` to `/responses` or vice versa) based on the model's `mode` set in `model_prices_and_context_window`.
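The bridging idea can be sketched in plain Python. This is an illustrative stand-in, not LiteLLM's internal implementation: the `MODEL_MODES` table, the `resolve_endpoint` helper, and the model-to-mode assignments below are hypothetical.

```python
# Illustrative sketch of mode-based endpoint bridging.
# MODEL_MODES and resolve_endpoint are hypothetical names, not LiteLLM APIs.
MODEL_MODES = {
    "gpt-4o": "chat",        # assumed: served via /chat/completions
    "o1-pro": "responses",   # assumed: served via /responses only
}

ENDPOINT_FOR_MODE = {
    "chat": "/chat/completions",
    "responses": "/responses",
}

def resolve_endpoint(model: str, requested_endpoint: str) -> str:
    """Return the endpoint the model actually supports, bridging if needed."""
    mode = MODEL_MODES.get(model, "chat")
    supported = ENDPOINT_FOR_MODE[mode]
    # If the caller asked for an unsupported endpoint, bridge to the supported one.
    return supported if requested_endpoint != supported else requested_endpoint

print(resolve_endpoint("o1-pro", "/chat/completions"))  # -> /responses
```

The caller keeps using one endpoint; the router transparently substitutes the one the model's `mode` declares.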
 
 ## Streaming Usage

docs/my-website/docs/enterprise.md

Lines changed: 5 additions & 0 deletions
@@ -1,6 +1,11 @@
 import Image from '@theme/IdealImage';
 
 # Enterprise
+
+:::info
+✨ SSO is free for up to 5 users. After that, an enterprise license is required. [Get Started with Enterprise here](https://www.litellm.ai/enterprise)
+:::
+
 For companies that need SSO, user management and professional support for LiteLLM Proxy
 
 :::info

docs/my-website/docs/fine_tuning.md

Lines changed: 2 additions & 0 deletions
@@ -13,6 +13,8 @@ This is an Enterprise only endpoint [Get Started with Enterprise here](https://c
 | Feature | Supported | Notes |
 |-------|-------|-------|
 | Supported Providers | OpenAI, Azure OpenAI, Vertex AI | - |
+
+#### ⚡️See an exhaustive list of supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
 | Cost Tracking | 🟡 | [Let us know if you need this](https://github.com/BerriAI/litellm/issues) |
 | Logging || Works across all logging integrations |

docs/my-website/docs/getting_started.md

Lines changed: 2 additions & 1 deletion
@@ -32,7 +32,8 @@ Next Steps 👉 [Call all supported models - e.g. Claude-2, Llama2-70b, etc.](./
 More details 👉
 
 - [Completion() function details](./completion/)
-- [All supported models / providers on LiteLLM](./providers/)
+- [Overview of supported models / providers on LiteLLM](./providers/)
+- [Search all models / providers](https://models.litellm.ai/)
 - [Build your own OpenAI proxy](https://github.com/BerriAI/liteLLM-proxy/tree/main)
 
 ## streaming

docs/my-website/docs/image_edits.md

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,9 @@ LiteLLM provides image editing functionality that maps to OpenAI's `/images/edit
 | Supported LiteLLM Proxy Versions | 1.71.1+ | |
 | Supported LLM providers | **OpenAI** | Currently only `openai` is supported |
 
+#### ⚡️See all supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
+
+
 ## Usage
 
 ### LiteLLM Python SDK

docs/my-website/docs/image_generation.md

Lines changed: 2 additions & 0 deletions
@@ -279,6 +279,8 @@ print(f"response: {response}")
 
 ## Supported Providers
 
+#### ⚡️See all supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
+
 | Provider | Documentation Link |
 |----------|-------------------|
 | OpenAI | [OpenAI Image Generation →](./providers/openai) |

docs/my-website/docs/index.md

Lines changed: 9 additions & 0 deletions
@@ -524,6 +524,15 @@ try:
 except OpenAIError as e:
     print(e)
 ```
+### See How LiteLLM Transforms Your Requests
+
+Want to understand how LiteLLM parses and normalizes your LLM API requests? Use the `/utils/transform_request` endpoint to see exactly how your request is transformed internally.
+
+You can try it out now directly on our demo app:
+go to the [LiteLLM API docs for transform_request](https://litellm-api.up.railway.app/#/llm%20utils/transform_request_utils_transform_request_post).
+
+LiteLLM will show you the normalized, provider-agnostic version of your request. This is useful for debugging, learning, and understanding how LiteLLM handles different providers and options.
+
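A rough sketch of calling this endpoint from Python. The `call_type`/`request_body` payload shape used below is an assumption, not a documented contract; confirm the exact schema in the linked API docs before relying on it.

```python
import json
import urllib.request

# Assumed payload shape for /utils/transform_request -- verify against the
# API docs linked above before relying on it.
payload = {
    "call_type": "completion",
    "request_body": {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.2,
    },
}

req = urllib.request.Request(
    "https://litellm-api.up.railway.app/utils/transform_request",  # demo app
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually send the request and print the normalized form:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The response should contain the provider-agnostic form of the request, which you can diff against what you sent to see LiteLLM's normalization.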
 
 ### Logging Observability - Log LLM Input/Output ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
 LiteLLM exposes pre defined callbacks to send data to Lunary, MLflow, Langfuse, Helicone, Promptlayer, Traceloop, Slack

docs/my-website/docs/moderation.md

Lines changed: 2 additions & 0 deletions
@@ -130,6 +130,8 @@ Here's the exact json output and type you can expect from all moderation calls:
 
 ## **Supported Providers**
 
+#### ⚡️See all supported models and providers at [models.litellm.ai](https://models.litellm.ai/)
+
 | Provider |
 |-------------|
 | OpenAI |

docs/my-website/docs/observability/callbacks.md

Lines changed: 17 additions & 4 deletions
@@ -5,13 +5,15 @@
 liteLLM provides `input_callbacks`, `success_callbacks` and `failure_callbacks`, making it easy for you to send data to a particular provider depending on the status of your responses.
 
 :::tip
-**New to LiteLLM Callbacks?** Check out our comprehensive [Callback Management Guide](./callback_management.md) to understand when to use different callback hooks like `async_log_success_event` vs `async_post_call_success_hook`.
+**New to LiteLLM Callbacks?**
+
+- For proxy/server logging and observability, see the [Proxy Logging Guide](https://docs.litellm.ai/docs/proxy/logging).
+- To write your own callback logic, see the [Custom Callbacks Guide](https://docs.litellm.ai/docs/observability/custom_callback).
 :::
 
-liteLLM supports:
 
-- [Custom Callback Functions](https://docs.litellm.ai/docs/observability/custom_callback)
-- [Callback Management Guide](./callback_management.md) - **Comprehensive guide for choosing the right hooks**
+### Supported Callback Integrations
+
 - [Lunary](https://lunary.ai/docs)
 - [Langfuse](https://langfuse.com/docs)
 - [LangSmith](https://www.langchain.com/langsmith)
@@ -21,9 +23,20 @@ liteLLM supports:
 - [Sentry](https://docs.sentry.io/platforms/python/)
 - [PostHog](https://posthog.com/docs/libraries/python)
 - [Slack](https://slack.dev/bolt-python/concepts)
+- [Arize](https://docs.arize.com/)
+- [PromptLayer](https://docs.promptlayer.com/)
 
 This is **not** an extensive list. Please check the dropdown for all logging integrations.
 
+### Related Cookbooks
+
+Try out our cookbooks for code snippets and interactive demos:
+
+- [Langfuse Callback Example (Colab)](https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/logging_observability/LiteLLM_Langfuse.ipynb)
+- [Lunary Callback Example (Colab)](https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/logging_observability/LiteLLM_Lunary.ipynb)
+- [Arize Callback Example (Colab)](https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/logging_observability/LiteLLM_Arize.ipynb)
+- [Proxy + Langfuse Callback Example (Colab)](https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/logging_observability/LiteLLM_Proxy_Langfuse.ipynb)
+- [PromptLayer Callback Example (Colab)](https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/LiteLLM_PromptLayer.ipynb)
+
 ### Quick Start
 
 ```python

docs/my-website/docs/observability/custom_callback.md

Lines changed: 17 additions & 0 deletions
@@ -67,6 +67,23 @@ asyncio.run(completion())
 - `async_post_call_success_hook` - Access user data + modify responses
 - `async_pre_call_hook` - Modify requests before sending
 
+### Example: Modifying the Response in async_post_call_success_hook
+
+You can use `async_post_call_success_hook` to add custom headers or metadata to the response before it is returned to the client. For example:
+
+```python
+async def async_post_call_success_hook(data, user_api_key_dict, response):
+    # Add a custom header to the response
+    additional_headers = getattr(response, "_hidden_params", {}).get("additional_headers", {}) or {}
+    additional_headers["x-litellm-custom-header"] = "my-value"
+    if not hasattr(response, "_hidden_params"):
+        response._hidden_params = {}
+    response._hidden_params["additional_headers"] = additional_headers
+    return response
+```
+
+This allows you to inject custom metadata or headers into the response for downstream consumers. You can use this pattern to pass information to clients, proxies, or observability tools.
+
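For a quick sanity check outside the proxy, the hook logic above can be exercised against a stand-in response object. The `SimpleNamespace` response and the empty `data`/`user_api_key_dict` arguments below are placeholders, not real LiteLLM types:

```python
import asyncio
from types import SimpleNamespace

async def async_post_call_success_hook(data, user_api_key_dict, response):
    # Same logic as the docs example: attach a custom response header.
    additional_headers = getattr(response, "_hidden_params", {}).get("additional_headers", {}) or {}
    additional_headers["x-litellm-custom-header"] = "my-value"
    if not hasattr(response, "_hidden_params"):
        response._hidden_params = {}
    response._hidden_params["additional_headers"] = additional_headers
    return response

# Stand-in response; inside the proxy this would be a real LiteLLM response object.
response = SimpleNamespace()
result = asyncio.run(
    async_post_call_success_hook(data={}, user_api_key_dict={}, response=response)
)
print(result._hidden_params["additional_headers"])  # -> {'x-litellm-custom-header': 'my-value'}
```

Running the hook in isolation like this makes it easy to unit-test header injection before wiring it into a proxy deployment.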
 ## Callback Functions
 If you just want to log on a specific event (e.g. on input) - you can use callback functions.