articles/search/tutorial-rag-build-solution-index-schema.md (8 additions, 8 deletions)
@@ -8,7 +8,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: cognitive-search
 ms.topic: tutorial
-ms.date: 10/01/2024
+ms.date: 10/04/2024
 
 ---
@@ -43,7 +43,7 @@ Chunks are the focus of the schema, and each chunk is the defining element of a

 ### Enhanced with generated data

-In this tutorial, sample data consists of PDFs and content from the [NASA Earth Book](https://www.nasa.gov/ebooks/earth/). This content is descriptive and informative, with numerous references to geographies, countries, and areas across the world. All of the textual content is captured in chunks, but these recurring instances of place names create an opportunity for adding structure to the index. Using skills, it's possible to recognize entities in the text and capture them in an index for use in queries and filters. In this tutorial, we include an [entity recognition skill](cognitive-search-skill-entity-recognition-v3.md) that recognizes and extracts location entities, loading it into a searchable and filterable `locations` field. Adding structured content to your index gives you more options for filtering, improved relevance, and more focused answers.
+In this tutorial, sample data consists of PDFs and content from the [NASA Earth Book](https://www.nasa.gov/ebooks/earth/). This content is descriptive and informative, with numerous references to geographies, countries, and areas across the world. All of the textual content is captured in chunks, but recurring instances of place names create an opportunity for adding structure to the index. Using skills, it's possible to recognize entities in the text and capture them in an index for use in queries and filters. In this tutorial, we include an [entity recognition skill](cognitive-search-skill-entity-recognition-v3.md) that recognizes and extracts location entities, loading it into a searchable and filterable `locations` field. Adding structured content to your index gives you more options for filtering, improved relevance, and more focused answers.

 ### Parent-child fields in one or two indexes?
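The benefit of the filterable `locations` field described in this hunk can be pictured with a toy example. This is plain Python, not the Azure SDK, and the chunk data is invented; in a real query you would use an OData collection filter such as `locations/any(l: l eq 'Antarctica')`:

```python
# Toy illustration (not the Azure SDK): chunks enriched with a "locations"
# field extracted by an entity recognition skill. The sample data is invented.
chunks = [
    {"chunk_id": "1", "chunk": "Glaciers cover most of the continent.", "locations": ["Antarctica"]},
    {"chunk_id": "2", "chunk": "Monsoon rains flood the delta each year.", "locations": ["Bangladesh"]},
    {"chunk_id": "3", "chunk": "Ice shelves are thinning along the coast.", "locations": ["Antarctica"]},
]

def filter_by_location(chunks, location):
    """Mimic an OData filter like locations/any(l: l eq 'Antarctica')."""
    return [c for c in chunks if location in c["locations"]]

matches = filter_by_location(chunks, "Antarctica")
print([c["chunk_id"] for c in matches])  # → ['1', '3']
```

Without the generated `locations` field, the only way to scope a query to Antarctica would be full-text matching on the chunk text itself, which misses chunks that mention the place implicitly.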
@@ -61,11 +61,11 @@ In Azure AI Search, an index that works best for RAG workloads has these qualities:

 - Maintains a parent-child relationship between chunks of a document and the properties of the parent document, such as the file name, file type, title, author, and so forth. To answer a query, chunks could be pulled from anywhere in the index. Association with the parent document providing the chunk is useful for context, citations, and follow-up queries.

-- Accommodates the queries you want to create. You should have fields for vector and hybrid content, and those fields should be attributed to support specific query behaviors. You can only query one index at a time (no joins), so your fields collection should define all of your searchable content.
+- Accommodates the queries you want to create. You should have fields for vector and hybrid content, and those fields should be attributed to support specific query behaviors, such as searchable or filterable. You can only query one index at a time (no joins), so your fields collection should define all of your searchable content.

 - Your schema should be flat (no complex types or structures). This requirement is specific to the RAG pattern in Azure AI Search.

-Although Azure AI Search can't join indexes, you can create indexes that preserve the parent-child relationship, and then use sequential or parallel queries in your search logic to pull from both. This exercise includes templates for parent-child elements in the same index and in separate indexes, where information from the parent index is retrieved using a lookup query.
+<!--Although Azure AI Search can't join indexes, you can create indexes that preserve the parent-child relationship, and then use sequential queries in your search logic to pull from both (a query on the chunked data index, a lookup on the parent index). This exercise includes templates for parent-child elements in the same index and in separate indexes, where information from the parent index is retrieved using a lookup query.-->

 <!-- > [!NOTE]
 > Schema design affects storage and costs. This exercise is focused on schema fundamentals. In the [Minimize storage and costs](tutorial-rag-build-solution-minimize-storage.md) tutorial, you revisit schema design to consider narrow data types, attribution, and vector configurations that are more efficient. -->
@@ -136,7 +136,7 @@ A minimal index for LLM is designed to store chunks of content. It typically includes:
@@ -157,8 +157,8 @@
                 kind="azureOpenAI",
                 parameters=AzureOpenAIVectorizerParameters(
                     resource_url=AZURE_OPENAI_ACCOUNT,
-                    deployment_name="text-embedding-ada-002",
-                    model_name="text-embedding-ada-002"
+                    deployment_name="text-embedding-3-large",
+                    model_name="text-embedding-3-large"
                 ),
             ),
         ],
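Note that swapping the embedding model in this vectorizer also changes the vector length that the index's vector fields must declare. As a quick reference (dimension counts are the models' documented defaults; verify against your own deployment):

```python
# Default output dimensions for common Azure OpenAI embedding models.
# Verify against your deployment; the text-embedding-3 models also support
# reduced output via the "dimensions" request parameter.
EMBEDDING_DIMENSIONS = {
    "text-embedding-ada-002": 1536,
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}

model = "text-embedding-3-large"
print(EMBEDDING_DIMENSIONS[model])  # → 3072
```

So a change like the one in this hunk should be paired with updating the vector field's dimension setting (`vector_search_dimensions` in the Python SDK) from 1536 to 3072.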
@@ -170,7 +170,7 @@
     print(f"{result.name} created")
     ```

-1. For an index schema that more closely mimics structured content, you would have separate indexes for parent and child (chunked) fields. You would need index projections to coordinate the indexing of the two indexes simultaneously. Queries execute against the child index. Query logic includes a lookup query, using the `parent_id` to retrieve content from the parent index.
+1. For an index schema that more closely mimics structured content, you would have separate indexes for parent and child (chunked) fields. You would need [index projections](index-projections-concept-intro.md) to coordinate the indexing of the two indexes simultaneously. Queries execute against the child index. Query logic includes a lookup query, using the `parent_id` to retrieve content from the parent index.
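The two-index query flow described in that step can be sketched without the SDK. Below is a pure-Python simulation (dictionaries stand in for the chunk and parent indexes; the field values and index contents are invented) of the sequential logic: search the child index, then look up the parent by `parent_id` to build citations:

```python
# Pure-Python simulation of the two-index lookup pattern (no Azure SDK).
# Index contents and metadata here are invented for illustration.
parent_idx = {
    "earth-book": {"title": "NASA Earth Book", "author": "NASA"},
}
chunk_idx = [
    {"chunk_id": "c1", "parent_id": "earth-book", "chunk": "71 percent of Earth is ocean."},
    {"chunk_id": "c2", "parent_id": "earth-book", "chunk": "Deserts expand during droughts."},
]

def search_chunks(query):
    """Stand-in for a full-text query against the child (chunk) index."""
    return [c for c in chunk_idx if query.lower() in c["chunk"].lower()]

def lookup_parent(parent_id):
    """Stand-in for a document lookup against the parent index."""
    return parent_idx[parent_id]

# Sequential queries: retrieve chunks first, then fetch parent metadata.
hits = search_chunks("ocean")
citations = [(h["chunk"], lookup_parent(h["parent_id"])["title"]) for h in hits]
print(citations)  # → [('71 percent of Earth is ocean.', 'NASA Earth Book')]
```

In the real service, the first call would be a search against the chunk index and the second a document lookup (get-by-key) against the parent index.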
articles/search/tutorial-rag-build-solution-maximize-relevance.md (1 addition, 1 deletion)
@@ -27,7 +27,7 @@ In this tutorial, you modify the existing search index and queries to use:

 This tutorial updates the search index created by the [indexing pipeline](tutorial-rag-build-solution-pipeline.md). Updates don't affect the existing content, so no rebuild is necessary and you don't need to rerun the indexer.

 > [!NOTE]
-> There are more relevance features in preview, including vector query weighting and setting minimum thresholds, but we omit them from this tutorial becaues they aren't yet available in the Azure SDK for Python.
+> There are more relevance features in preview, including vector query weighting and setting minimum thresholds, but we omit them from this tutorial because they're in preview.
articles/search/tutorial-rag-build-solution-models.md (17 additions, 12 deletions)
@@ -9,7 +9,7 @@ ms.author: heidist
 ms.service: cognitive-search
 ms.topic: tutorial
 ms.custom: references_regions
-ms.date: 10/01/2024
+ms.date: 10/04/2024
 
 ---
@@ -48,7 +48,7 @@

 - [Azure AI Studio](/azure/ai-studio/reference/region-support) regions.

-Azure AI Search is currently facing limited availability in some regions, such as West Europe and West US 2/3. Check the [Azure AI Search region list](search-region-support.md) to confirm region status.
+Azure AI Search is currently facing limited availability in some regions, such as West Europe and West US 2/3. To confirm region status, check the [Azure AI Search region list](search-region-support.md).

 > [!TIP]
 > Currently, the following regions provide the most overlap among the model providers and have the most capacity: **East US2** and **South Central** in the Americas; **France Central** or **Switzerland North** in Europe; **Australia East** in Asia Pacific.
@@ -59,7 +59,7 @@

 Vectorized content improves the query results in a RAG solution. Azure AI Search supports a built-in vectorization action in an indexing pipeline. It also supports vectorization at query time, converting text or image inputs into embeddings for a vector search. In this step, identify an embedding model that works for your content and queries. If you're providing raw vector data and raw vector queries, or if your RAG solution doesn't include vector data, skip this step.

-Vector queries that include a text-to-vector conversion step must use the same embedding model that was used during indexing. The search engine won't throw an error if you use different models, but you'll get poor results.
+Vector queries that include a text-to-vector conversion step must use the same embedding model that was used during indexing. The search engine doesn't throw an error if you use different models, but you get poor results.

 To meet the same-model requirement, choose embedding models that can be referenced through *skills* during indexing and through *vectorizers* during query execution. The following table lists the skill and vectorizer pairs. To see how the embedding models are used, skip ahead to [Create an indexing pipeline](tutorial-rag-build-solution-pipeline.md) for code that calls an embedding skill and a matching vectorizer.
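Because the engine accepts mismatched models silently, a small guard in your own pipeline code can catch the error early instead of producing quietly poor results. A minimal sketch (the function and constant names are ours, not from the tutorial):

```python
# Hypothetical guard: fail fast if the query-time embedding model differs
# from the model recorded at indexing time. Names are illustrative only.
INDEXING_MODEL = "text-embedding-3-large"

def check_same_model(query_model: str) -> None:
    """Raise if a vector query would use a different model than the index."""
    if query_model != INDEXING_MODEL:
        raise ValueError(
            f"Vector query uses '{query_model}' but the index was built with "
            f"'{INDEXING_MODEL}'; results would be silently poor."
        )

check_same_model("text-embedding-3-large")  # OK, no exception
```

Recording the indexing model name alongside the index (for example, in configuration) is one way to make this check possible in a deployed app.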
@@ -75,7 +75,7 @@ Azure AI Search provides skill and vectorizer support for the following embedding models:

 <sup>2</sup> Deployed models in the model catalog are accessed over an AML endpoint. We use the existing AML skill for this connection.

-You can use other models besides those listed here. For more information, see [Use non-Azure models for embeddings](#use-non-azure-models-for-embeddings) in this article.
+You can use other models besides the ones listed here. For more information, see [Use non-Azure models for embeddings](#use-non-azure-models-for-embeddings) in this article.

 > [!NOTE]
 > Inputs to an embedding model are typically chunked data. In an Azure AI Search RAG pattern, chunking is handled in the indexer pipeline, covered in [another tutorial](tutorial-rag-build-solution-pipeline.md) in this series.
@@ -90,16 +90,18 @@ The following models are commonly used for a chat search experience:

 GPT-35-Turbo and GPT-4 models are optimized to work with inputs formatted as a conversation.

+We use GPT-4o in this tutorial. During testing, we found that it's less likely to supplement with its own training data. For example, given the query "how much of the earth is covered by water?", GPT-35-Turbo answered using its built-in knowledge of earth to state that 71% of the earth is covered by water, even though the sample data doesn't provide that fact. In contrast, GPT-4o responded (correctly) with "I don't know".

 ## Deploy models and collect information

-Models must be deployed and accessible through an endpoint. Both embedding-related skills and vectorizers need the number of dimensions and the model name. Other details about your model might be required by the client used on the connection.
+Models must be deployed and accessible through an endpoint. Both embedding-related skills and vectorizers need the number of dimensions and the model name.

 This tutorial series uses the following models and model providers:

-- Text-embedding-ada-02 on Azure OpenAI for embeddings
-- GPT-35-Turbo on Azure OpenAI for chat completion
+- Text-embedding-3-large on Azure OpenAI for embeddings
+- GPT-4o on Azure OpenAI for chat completion

 You must have [**Cognitive Services OpenAI Contributor**](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor) or higher to deploy models in Azure OpenAI.
@@ -109,17 +111,17 @@

 1. Select **Deploy model** > **Deploy base model**.

-1. Select **text-embedding-ada-02** from the dropdown list and confirm the selection.
+1. Select **text-embedding-3-large** from the dropdown list and confirm the selection.

-1. Specify a deployment name. We recommend "text-embedding-ada-002".
+1. Specify a deployment name. We recommend "text-embedding-3-large".

 1. Accept the defaults.

 1. Select **Deploy**.

-1. Repeat the previous steps for **gpt-35-turbo**.
+1. Repeat the previous steps for **gpt-4o**.

-1. Make a note of the model names and endpoint. Embedding skills and vectorizers assemble the full endpoint internally, so you only need the resource URI. For example, given `https://MY-FAKE-ACCOUNT.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15`, the endpoint you should provide in skill and vectorizer definitions is `https://MY-FAKE-ACCOUNT.openai.azure.com`.
+1. Make a note of the model names and endpoint. Embedding skills and vectorizers assemble the full endpoint internally, so you only need the resource URI. For example, given `https://MY-FAKE-ACCOUNT.openai.azure.com/openai/deployments/text-embedding-3-large/embeddings?api-version=2024-06-01`, the endpoint you should provide in skill and vectorizer definitions is `https://MY-FAKE-ACCOUNT.openai.azure.com`.

 ## Configure search engine access to Azure models
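The resource-URI rule in that last step is just scheme plus host. A small standard-library helper (our own sketch, not part of the tutorial code) shows the reduction that skills and vectorizers expect:

```python
from urllib.parse import urlsplit

def resource_uri(full_endpoint: str) -> str:
    """Reduce a full Azure OpenAI deployment URL to the bare resource URI
    that skill and vectorizer definitions expect."""
    parts = urlsplit(full_endpoint)
    return f"{parts.scheme}://{parts.netloc}"

full = ("https://MY-FAKE-ACCOUNT.openai.azure.com/openai/deployments/"
        "text-embedding-3-large/embeddings?api-version=2024-06-01")
print(resource_uri(full))  # → https://MY-FAKE-ACCOUNT.openai.azure.com
```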
@@ -138,10 +140,13 @@ Assign yourself and the search service identity permissions on Azure OpenAI.

 1. Select **Managed identity** and then select **Members**. Find the system-managed identity for your search service in the dropdown list.

 1. Next, select **User, group, or service principal** and then select **Members**. Search for your user account and then select it from the dropdown list.

+1. Make sure you have two security principals assigned to the role.

 1. Select **Review and Assign** to create the role assignments.

 For access to models on Azure AI Vision, assign **Cognitive Services OpenAI User**. For Azure AI Studio, assign **Azure AI Developer**.