
Commit b9fd03f

Merge pull request #5701 from voutilad/prompt-caching-ft
Remove FT callout for prompt caching.
2 parents: c96afe0 + cc27d30

File tree

1 file changed (+0, -3 lines)

articles/ai-services/openai/how-to/prompt-caching.md

Lines changed: 0 additions & 3 deletions
@@ -34,9 +34,6 @@ Currently only the following models support prompt caching with Azure OpenAI:
 - `gpt-4.1-2025-04-14`
 - `gpt-4.1-nano-2025-04-14`
 
-> [!NOTE]
-> Prompt caching is now also available as part of model fine-tuning for `gpt-4o` and `gpt-4o-mini`. Refer to the fine-tuning section of the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for details.
-
 ## API support
 
 Official support for prompt caching was first added in API version `2024-10-01-preview`. At this time, only the o-series model family supports the `cached_tokens` API response parameter.
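
For reference, a minimal sketch (not part of this commit) of reading the `cached_tokens` response parameter described in the paragraph above, assuming the `openai` Python SDK (v1+) against Azure OpenAI. The environment variable names and the `o1-mini` deployment name are placeholder assumptions, not values from this repository.

```python
import os

from openai import AzureOpenAI

# Endpoint, key, and deployment name below are placeholder assumptions.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-01-preview",  # first API version with official prompt-caching support
)

response = client.chat.completions.create(
    model="o1-mini",  # hypothetical o-series deployment name
    messages=[
        {"role": "user", "content": "Summarize prompt caching in one sentence."}
    ],
)

# With a supported model and API version, `prompt_tokens_details.cached_tokens`
# reports how many prompt tokens were served from the cache (0 on a cache miss).
details = response.usage.prompt_tokens_details
if details is not None:
    print("Cached prompt tokens:", details.cached_tokens)
```

Prompt caching only kicks in once the prompt prefix is long enough to be cached, so repeated calls with a long, identical prefix are where a nonzero `cached_tokens` value would be expected.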
