
Commit f96a9b1

fix the typo
1 parent 6f26251 commit f96a9b1

File tree: 1 file changed (+2 −2 lines)


articles/ai-services/openai/how-to/prompt-caching.md

Lines changed: 2 additions & 2 deletions
@@ -39,7 +39,7 @@ For a request to take advantage of prompt caching the request must be both:
 - A minimum of 1,024 tokens in length.
 - The first 1,024 tokens in the prompt must be identical.
 
-When a match is found between the token computations in a prompt and the current content of the prompt cache, it's referred to as a cache hit. Cache hits will show up as [`cached_tokens`](/azure/ai-services/openai/reference-preview#cached_tokens) under [`prompt_token_details`](/azure/ai-services/openai/reference-preview#properties-for-prompt_tokens_details) in the chat completions response.
+When a match is found between the token computations in a prompt and the current content of the prompt cache, it's referred to as a cache hit. Cache hits will show up as [`cached_tokens`](/azure/ai-services/openai/reference-preview#cached_tokens) under [`prompt_tokens_details`](/azure/ai-services/openai/reference-preview#properties-for-prompt_tokens_details) in the chat completions response.
 
 ```json
 {
@@ -85,4 +85,4 @@ To improve the likelihood of cache hits occurring, you should structure your req
 
 ## Can I disable prompt caching?
 
-Prompt caching is enabled by default for all supported models. There is no opt-out support for prompt caching.
+Prompt caching is enabled by default for all supported models. There is no opt-out support for prompt caching.
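The doc's JSON example is cut off at the first hunk boundary. For context, here is a minimal sketch of the `usage` portion of a chat completions response when a cache hit occurs; the token counts are illustrative placeholders, not values from the source, and other response fields are omitted:

```json
{
  "usage": {
    "prompt_tokens": 2006,
    "completion_tokens": 300,
    "total_tokens": 2306,
    "prompt_tokens_details": {
      "cached_tokens": 1920
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0
    }
  }
}
```

A `cached_tokens` value of `0` indicates no cache hit; a nonzero value reports how many prompt tokens were served from the cache.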
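The second hunk's context line mentions structuring requests to improve the likelihood of cache hits. A hedged illustration of that guidance (the model name and message contents are placeholders, not from the diff): keep static content such as system instructions at the beginning of the prompt and put per-request variable content at the end, so that the first 1,024 tokens remain identical across requests:

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "LONG_STATIC_INSTRUCTIONS_IDENTICAL_ACROSS_REQUESTS"
    },
    {
      "role": "user",
      "content": "VARIABLE_PER_REQUEST_CONTENT_PLACED_LAST"
    }
  ]
}
```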
