articles/ai-services/openai/how-to/use-your-data-securely.md
> [!NOTE]
> Since June 2024, the form application process for the Microsoft managed private endpoint to Azure AI Search is no longer needed.
> The managed private endpoint will be deleted from the Microsoft managed virtual network in July 2025. If you already provisioned a managed private endpoint through the form application process before June 2024, migrate to the [Azure AI Service trusted service](#enable-trusted-service-1) as early as possible to avoid service disruption.
Use this article to learn how to use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks, and private endpoints.
* Downloading URLs to your blob storage is not illustrated in this diagram. After web pages are downloaded from the internet and uploaded to blob storage, steps 3 onward are the same.
* Two indexers, two indexes, two data sources, and a [custom skill](/azure/search/cognitive-search-custom-skill-interface) are created in the Azure AI Search resource.
* The chunks container is created in the blob storage.
* If the ingestion is triggered by a scheduled refresh, the ingestion process starts from step 7.
* Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search custom skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue.
* Azure OpenAI:
1. Internally uses the first indexer created earlier to crack the documents.
1. Uses a heuristic-based algorithm to perform chunking, honoring table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
1. If you choose to enable vector search, Azure OpenAI uses the selected embedding setting to vectorize the chunks.
* When all the data that the service is monitoring is processed, Azure OpenAI triggers the second indexer.
* The indexer stores the processed data into an Azure AI Search service.
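The custom skill hop in the steps above follows the Azure AI Search custom skill web API envelope, in which each input record is echoed back by `recordId` with output data. The following sketch builds such a response; the `chunk_text` helper is purely illustrative (the real service uses a heuristic algorithm that also honors table layouts), and the `chunks` output field name is a hypothetical example, not the service's actual contract.

```python
import json


def chunk_text(text, max_chars=500):
    """Illustrative stand-in chunker: split on paragraph boundaries,
    then cap each chunk at max_chars characters."""
    chunks = []
    for para in text.split("\n\n"):
        para = para.strip()
        while len(para) > max_chars:
            chunks.append(para[:max_chars])
            para = para[max_chars:]
        if para:
            chunks.append(para)
    return chunks


def skill_response(records):
    """Wrap per-record output in the custom skill web API envelope:
    a top-level "values" array, each entry keyed by the input recordId."""
    return {
        "values": [
            {
                "recordId": r["recordId"],
                "data": {"chunks": chunk_text(r["data"]["content"])},
                "errors": None,
                "warnings": None,
            }
            for r in records
        ]
    }


request = {"values": [{"recordId": "1", "data": {"content": "First paragraph.\n\nSecond paragraph."}}]}
print(json.dumps(skill_response(request["values"]), indent=2))
```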
:::image type="content" source="../media/use-your-data/inference-architecture.png" alt-text="A diagram showing the process of using the inference API." lightbox="../media/use-your-data/inference-architecture.png":::
When you send API calls to chat with an Azure OpenAI model on your data, the service needs to retrieve the index fields during inference to perform fields mapping. Therefore, the service requires the Azure OpenAI identity to have the `Search Service Contributor` role for the search service even during inference.
If an embedding deployment is provided in the inference request, the rewritten query will be vectorized by Azure OpenAI, and both query and vector are sent to Azure AI Search for vector search.
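As a sketch of what such an inference request can look like, the following builds the body of a chat completions call that targets an Azure AI Search data source with the system-assigned managed identity, optionally attaching an embedding deployment for vector search. The field names follow the Azure OpenAI On Your Data data plane API as documented; verify them against the API version you use, and treat all values here as placeholders.

```python
def build_chat_request(search_endpoint, index_name, embedding_deployment=None):
    """Build an Azure OpenAI chat completions request body that targets
    an Azure AI Search data source. When an embedding deployment is
    supplied, the service vectorizes the rewritten query and runs a
    vector search against the index."""
    params = {
        "endpoint": search_endpoint,
        "index_name": index_name,
        # System-assigned managed identity: the service authenticates to
        # Azure AI Search with its own Microsoft Entra ID identity.
        "authentication": {"type": "system_assigned_managed_identity"},
    }
    if embedding_deployment:
        params["embedding_dependency"] = {
            "type": "deployment_name",
            "deployment_name": embedding_deployment,
        }
        params["query_type"] = "vector"
    return {
        "messages": [{"role": "user", "content": "What does my data say?"}],
        "data_sources": [{"type": "azure_search", "parameters": params}],
    }


# Placeholder endpoint and names for illustration only.
body = build_chat_request("https://contoso.search.windows.net", "my-index", "my-embedding-deployment")
```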
## Document-level access control
The virtual network has three subnets.
1. The first subnet is used for the virtual network gateway.
1. The second subnet is used for the private endpoints for the three key services.
1. The third subnet is empty, and is used for Web App outbound virtual network integration.
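The three subnets above can be sketched as a parameter dictionary in the style accepted by `begin_create_or_update` on `NetworkManagementClient.virtual_networks` from `azure-mgmt-network` (an assumption to verify against the SDK reference). The address ranges and subnet names other than `GatewaySubnet` are illustrative; the gateway subnet must be named exactly `GatewaySubnet`.

```python
def vnet_parameters(location="eastus"):
    """Illustrative virtual network definition with the three subnets
    described above. Keys use the Python SDK's snake_case model names;
    address prefixes and most names are placeholders."""
    return {
        "location": location,
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [
            # Subnet 1: the virtual network gateway. Azure requires the
            # reserved name "GatewaySubnet".
            {"name": "GatewaySubnet", "address_prefix": "10.0.0.0/24"},
            # Subnet 2: private endpoints for the three key services.
            {"name": "private-endpoints", "address_prefix": "10.0.1.0/24"},
            # Subnet 3: empty, delegated for Web App outbound virtual
            # network integration.
            {
                "name": "webapp-integration",
                "address_prefix": "10.0.2.0/24",
                "delegations": [
                    {"name": "webapp-delegation", "service_name": "Microsoft.Web/serverFarms"}
                ],
            },
        ],
    }
```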
To allow access to your Azure AI Search resource from your client machines, such as when you use Azure OpenAI Studio, you need to create [private endpoint connections](/azure/search/service-create-private-endpoint) that connect to your Azure AI Search resource.
### Enable trusted service
You can enable the trusted service of your search resource from the Azure portal.
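In management API terms, enabling the trusted service corresponds to letting trusted Azure services bypass the search service's network rules. As an assumption to verify against the current `Microsoft.Search` management API version, the sketch below builds the body of a PATCH to the search service setting `networkRuleSet.bypass`; the URL is a placeholder template.

```python
def trusted_service_patch():
    """Request body for a PATCH on a search service via the Azure
    Management REST API. "AzureServices" is assumed here to be the
    bypass value that admits trusted services such as Azure OpenAI;
    confirm the property and value against the API reference."""
    return {"properties": {"networkRuleSet": {"bypass": "AzureServices"}}}


# Hypothetical target URL; subscription, resource group, service name,
# and API version are placeholders to fill in.
url_template = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.Search/searchServices/{service}?api-version={version}"
)
```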