@@ -94,10 +94,9 @@ To successfully make a call against Azure OpenAI, you need an **endpoint** and a
 from num2words import num2words
 import os
 import pandas as pd
-from openai.embeddings_utils import get_embedding
 import tiktoken
 from typing import List
-from langchain.embeddings import OpenAIEmbeddings
+from langchain.embeddings import AzureOpenAIEmbeddings
 from langchain.vectorstores.redis import Redis as RedisVectorStore
 from langchain.document_loaders import DataFrameLoader
@@ -226,13 +225,14 @@ Now that the data has been filtered and loaded into LangChain, you'll create emb
 ```python
 # Code cell 8

-embedding = OpenAIEmbeddings(
+embedding = AzureOpenAIEmbeddings(
     deployment=DEPLOYMENT_NAME,
     model=MODEL_NAME,
-    openai_api_base=RESOURCE_ENDPOINT,
+    azure_endpoint=RESOURCE_ENDPOINT,
     openai_api_type="azure",
     openai_api_key=API_KEY,
     openai_api_version="2023-05-15",
+    show_progress_bar=True,
     chunk_size=16 # current limit with Azure OpenAI service. This will likely increase in the future.
 )
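The `chunk_size=16` argument above caps how many texts are sent to Azure OpenAI in a single embedding request. A minimal sketch of that batching behavior (illustrative only, not LangChain's actual implementation; the `batched` helper and sample documents are hypothetical):

```python
def batched(texts, chunk_size=16):
    """Yield successive batches of at most chunk_size texts, mirroring
    how the embedding client groups documents into API requests."""
    for i in range(0, len(texts), chunk_size):
        yield texts[i:i + chunk_size]

# 40 documents are embedded in three requests: 16 + 16 + 8.
batch_sizes = [len(b) for b in batched([f"doc {n}" for n in range(40)])]
print(batch_sizes)  # [16, 16, 8]
```

A smaller `chunk_size` means more round trips but smaller requests, which can matter when a per-request limit applies on the service side.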
@@ -255,8 +255,11 @@ Now that the data has been filtered and loaded into LangChain, you'll create emb
 vectorstore.write_schema("redis_schema.yaml")
 ```

-1. Execute code cell 8. This can take up to 10 minutes to complete. A `redis_schema.yaml` file is generated as well. This file is useful if you want to connect to your index in Azure Cache for Redis instance without re-generating embeddings.
+1. Execute code cell 8. This can take over 30 minutes to complete. A `redis_schema.yaml` file is generated as well. This file is useful if you want to connect to your index in Azure Cache for Redis instance without re-generating embeddings.

+> [!Important]
+> The speed at which embeddings are generated depends on the [quota available](../ai-services/openai/quotas-limits.md) to the Azure OpenAI Model. With a quota of 240k tokens per minute, it will take around 30 minutes to process the 7M tokens in the data set.
+>
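The quota estimate in the note added above reduces to simple arithmetic. A quick sanity check (the helper function is illustrative; the 7M-token and 240k tokens-per-minute figures come from the note):

```python
def embedding_time_minutes(total_tokens: int, quota_tokens_per_minute: int) -> float:
    """Lower-bound processing time when throughput is capped by the
    tokens-per-minute quota of the Azure OpenAI deployment."""
    return total_tokens / quota_tokens_per_minute

# 7M tokens at a 240k tokens-per-minute quota: roughly 29 minutes,
# consistent with the "around 30 minutes" estimate.
print(round(embedding_time_minutes(7_000_000, 240_000)))  # 29
```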
 ## Run vector search queries

 Now that your dataset, Azure OpenAI service API, and Redis instance are set up, you can search using vectors. In this example, the top 10 results for a given query are returned.
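Conceptually, a vector search ranks every stored embedding by its similarity to the query embedding and returns the closest matches. A minimal cosine-similarity sketch of that ranking (illustrative only; Redis uses an optimized vector index rather than this brute-force loop, and the sample vectors are made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, stored, k=10):
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(stored, key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 2-dimensional "embeddings"; real ones have ~1536 dimensions.
stored = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
print(top_k([1.0, 0.1], stored, k=2))  # ['a', 'c']
```

The `k=10` default mirrors the top-10 behavior described above.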