
Commit 0855211

committed
update
1 parent f55b5c3 commit 0855211

File tree

1 file changed (+1, −2 lines)


articles/ai-services/openai/how-to/prompt-caching.md

Lines changed: 1 addition & 2 deletions
@@ -36,7 +36,6 @@ For a request to take advantage of prompt caching the request must be:
 
 When a match is found between a prompt and the current content of the prompt cache it is referred to as a cache hit. Cache hits will show up as [`cached_tokens`](/azure/ai-services/openai/reference-preview#cached_tokens) under [`prompt_token_details`](/azure/ai-services/openai/reference-preview#properties-for-prompt_tokens_details) in the chat completions response.
 
-
 ```json
 {
   "created": 1729227448,
@@ -57,7 +56,7 @@ When a match is found between a prompt and the current content of the prompt cac
       "cached_tokens": 1408
     }
   }
-
+}
 ```
 
 After the first 1024 tokens, cache hits will occur for every 128 additional identical tokens.
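The `usage` payload shown in the diff above can be inspected programmatically to see how much of a prompt was served from the cache. A minimal Python sketch, assuming a response body shaped like the article's JSON excerpt; the helper name and the sample dict are illustrative, and the `prompt_tokens_details` key follows the reference anchor linked in the article text:

```python
# Illustrative helper, not an official SDK method: pull the cached-token
# count out of a chat completions response body shaped like the article's
# JSON excerpt (usage -> prompt_tokens_details -> cached_tokens).

def cached_token_count(response: dict) -> int:
    """Return the number of prompt tokens served from the cache (0 if none)."""
    usage = response.get("usage", {})
    details = usage.get("prompt_tokens_details", {})
    return details.get("cached_tokens", 0)

# Sample response fragment matching the excerpt in the diff.
response = {
    "created": 1729227448,
    "usage": {
        "prompt_tokens": 1500,
        "prompt_tokens_details": {"cached_tokens": 1408},
    },
}

print(cached_token_count(response))  # 1408
```

A nonzero `cached_tokens` confirms a cache hit; per the text above, hits begin only once the first 1024 prompt tokens match, and grow in 128-token increments after that.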
