Commit 87eec83
Update articles/ai-services/openai/how-to/prompt-caching.md
Co-authored-by: Michael <[email protected]>
1 parent 4784493 commit 87eec83

File tree: 1 file changed (+1, −1)

articles/ai-services/openai/how-to/prompt-caching.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -87,4 +87,4 @@ Prompt caching is enabled by default. There is no opt-out option.
 
 ## How does prompt caching work for Provisioned deployments?
 
-For supported models on provisioned deployments, we discount up to 100% of cached input tokens. For more information, see our [Provisioned Throughput documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/provisioned-throughput).
+For supported models on provisioned deployments, we discount up to 100% of cached input tokens. For more information, see our [Provisioned Throughput documentation](/azure/ai-services/openai/concepts/provisioned-throughput).
```
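The change converts an absolute learn.microsoft.com URL into a site-relative link, which is the usual convention for internal links in Microsoft Learn markdown. A minimal sketch of automating that conversion with a regular expression (the prefix pattern and function name here are illustrative, not part of this commit, which edits a single link by hand):

```python
import re

# Assumed prefix for illustration: absolute en-us links into the /azure/ docs tree.
ABSOLUTE_PREFIX = re.compile(r"https://learn\.microsoft\.com/en-us(/azure/[^\s)]+)")

def relativize_links(markdown: str) -> str:
    """Rewrite absolute learn.microsoft.com doc links to site-relative paths."""
    return ABSOLUTE_PREFIX.sub(r"\1", markdown)

before = "[Provisioned Throughput documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/provisioned-throughput)"
print(relativize_links(before))
# -> [Provisioned Throughput documentation](/azure/ai-services/openai/concepts/provisioned-throughput)
```

Relative links keep the docs portable across locales and hosting environments, since the site resolves them against the reader's current locale instead of hard-coding `en-us`.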
