articles/azure-cache-for-redis/cache-tutorial-semantic-cache.md (5 additions, 5 deletions)
````diff
@@ -1,5 +1,5 @@
 ---
-title: 'Tutorial: Use Azure Cache for Redis as a Semantic Cache'
+title: 'Tutorial: Use Azure Cache for Redis as a semantic cache'
 description: In this tutorial, you learn how to use Azure Cache for Redis as a semantic cache.
 author: flang-msft
 ms.author: franlanglois
@@ -10,7 +10,7 @@ ms.date: 01/08/2024
 #CustomerIntent: As a developer, I want to develop some code using a sample so that I see an example of a semantic cache with an AI-based large language model.
 ---
 
-# Tutorial: Use Azure Cache for Redis as a Semantic Cache
+# Tutorial: Use Azure Cache for Redis as a semantic cache
 
 In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure OpenAI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
 
@@ -44,7 +44,7 @@ In this tutorial, you learn how to:
 
 - An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** and **gpt-35-turbo-instruct** models deployed. These models are currently only available in [certain regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). See the [resource deployment guide](../ai-services/openai/how-to/create-resource.md) for instructions on how to deploy the models.
 
-## Create an Azure Cache for Redis Instance
+## Create an Azure Cache for Redis instance
 
 Follow the [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) guide. On the **Advanced** page, make sure that you added the **RediSearch** module and chose the **Enterprise** Cluster Policy. All other settings can match the default described in the quickstart.
 
@@ -64,7 +64,7 @@ Follow the [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis
    pip install openai langchain redis tiktoken
    ```
 
-## Create Azure OpenAI Models
+## Create Azure OpenAI models
 
 Make sure you have two models deployed to your Azure OpenAI resource:
 
@@ -257,7 +257,7 @@ Finally, query the LLM to get an AI generated response. If you're using a Jupyte
 
 1. Change the query from `Please write a poem about cute kittens` to `Write a poem about cute kittens` and run cell 5 again. You should see the _exact same output_ and a _lower wall time_ than the original query. Even though the query changed, the _semantic meaning_ of the query remained the same so the same cached output was returned. This is the advantage of semantic caching!
 
-## Change the Similarity Threshold
+## Change the similarity threshold
 
 1. Try running a similar query with a different meaning, like `Please write a poem about cute puppies`. Notice that the cached result is returned here as well. The semantic meaning of the word `puppies` is close enough to the word `kittens` that the cached result is returned.
````
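The kittens/puppies behavior the diff describes can be sketched with a tiny, self-contained model of threshold-based semantic cache lookup. Everything here is a hypothetical illustration: `SemanticCache`, `toy_embed`, and the vocabulary are stand-ins, not the tutorial's code, which uses LangChain's semantic cache backed by Azure Cache for Redis and `text-embedding-ada-002` embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

class SemanticCache:
    """Toy semantic cache: a lookup hits when the query embedding is at
    least `threshold` cosine-similar to a previously stored embedding."""

    def __init__(self, embed, threshold):
        self.embed = embed          # function: text -> vector
        self.threshold = threshold  # minimum similarity for a cache hit
        self.entries = []           # list of (embedding, cached response)

    def put(self, query, response):
        self.entries.append((self.embed(query), response))

    def get(self, query):
        query_vec = self.embed(query)
        for vec, response in self.entries:
            if cosine_similarity(query_vec, vec) >= self.threshold:
                return response     # semantic hit: close enough in meaning
        return None                 # miss: caller would query the LLM instead

# Hypothetical stand-in for a real embedding model such as
# text-embedding-ada-002: bag-of-words counts over a tiny vocabulary.
VOCAB = ["write", "poem", "cute", "kittens", "puppies"]

def toy_embed(text):
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

cache = SemanticCache(toy_embed, threshold=0.8)
cache.put("Please write a poem about cute kittens", "<kitten poem>")

# Rephrased query, same meaning: identical toy embedding, cache hit.
print(cache.get("Write a poem about cute kittens"))         # <kitten poem>
# Similar wording, different subject: similarity 0.75 < 0.8, cache miss.
print(cache.get("Please write a poem about cute puppies"))  # None
```

Raising the threshold makes matching stricter, so near-misses like the puppies query go to the LLM; lowering it returns cached answers for looser paraphrases at the risk of wrong hits. That trade-off is what the "Change the similarity threshold" step of the tutorial explores.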