Commit fc96134

Merge pull request #3004 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-ai-docs (branch main)
2 parents 51bcc52 + 54247d6 commit fc96134

File tree

3 files changed: +8 −15 lines

articles/ai-services/openai/how-to/code-interpreter.md

Lines changed: 4 additions & 3 deletions
@@ -164,9 +164,10 @@ curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2
 "tools": [
   { "type": "code_interpreter" }
 ],
-"model": "gpt-4-1106-preview",
-"tool_resources"{
-"code interpreter": {
+"name": "Assistants playground",
+"model": "Replace it with your-custom-model-deployment-name",
+"tool_resources":{
+  "code_interpreter": {
 "file_ids": ["assistant-1234"]
 }
 }
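The corrected hunk above nests the file IDs under `tool_resources.code_interpreter` rather than alongside `model`. A minimal Python sketch that assembles the same JSON request body (the deployment name and file ID below are the documentation's placeholders, not real resources):

```python
import json

def build_assistant_payload(deployment_name: str, file_id: str) -> str:
    """Assemble the assistant-creation body shown in the corrected diff.

    `deployment_name` is your Azure OpenAI model deployment name and
    `file_id` an already-uploaded file; both values used here are
    placeholders from the documentation.
    """
    payload = {
        "name": "Assistants playground",
        "model": deployment_name,
        "tools": [{"type": "code_interpreter"}],
        # File IDs are scoped to the code_interpreter tool via
        # tool_resources, not attached at the top level.
        "tool_resources": {
            "code_interpreter": {"file_ids": [file_id]}
        },
    }
    return json.dumps(payload, indent=2)

print(build_assistant_payload("your-custom-model-deployment-name", "assistant-1234"))
```

The string this returns is the body you would POST to the `assistants` endpoint shown in the hunk header.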

articles/ai-services/openai/how-to/on-your-data-configuration.md

Lines changed: 4 additions & 12 deletions
@@ -29,16 +29,8 @@ When you use Azure OpenAI On Your Data to ingest data from Azure blob storage, l

 * Steps 1 and 2 are only used for file upload.
 * Downloading URLs to your blob storage is not illustrated in this diagram. After web pages are downloaded from the internet and uploaded to blob storage, steps 3 onward are the same.
-* Two indexers, two indexes, two data sources and a [custom skill](/azure/search/cognitive-search-custom-skill-interface) are created in the Azure AI Search resource.
-* The chunks container is created in the blob storage.
-* If the schedule triggers the ingestion, the ingestion process starts from step 7.
-* Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search customer skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue.
-* Azure OpenAI:
-    1. Internally uses the first indexer created earlier to crack the documents.
-    1. Uses a heuristic-based algorithm to perform chunking. It honors table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
-    1. If you choose to enable vector search, Azure OpenAI uses the selected embedding setting to vectorize the chunks.
-* When all the data that the service is monitoring are processed, Azure OpenAI triggers the second indexer.
-* The indexer stores the processed data into an Azure AI Search service.
+* One indexer, one index, and one data source in the Azure AI Search resource is created using prebuilt skills and [integrated vectorization](/azure/search/vector-search-integrated-vectorization.md).
+* Azure AI Search handles the extraction, chunking, and vectorization of chunked documents through integrated vectorization. If a scheduling interval is specified, the indexer will run accordingly.

 For the managed identities used in service calls, only system assigned managed identities are supported. User assigned managed identities aren't supported.

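The integrated vectorization flow that the added bullets describe is driven by a skillset in Azure AI Search using its `AzureOpenAIEmbeddingSkill`. An illustrative fragment, not from the commit: the resource and deployment names are placeholders, and the exact property set varies by Search API version, so verify against the linked integrated vectorization article:

```json
{
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
      "context": "/document/pages/*",
      "resourceUri": "https://YOUR_RESOURCE_NAME.openai.azure.com",
      "deploymentId": "your-embedding-deployment-name",
      "inputs": [
        { "name": "text", "source": "/document/pages/*" }
      ],
      "outputs": [
        { "name": "embedding" }
      ]
    }
  ]
}
```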
@@ -167,7 +159,7 @@ To set the managed identities via the management API, see [the management API re

 ### Enable trusted service

-To allow your Azure AI Search to call your Azure OpenAI `preprocessing-jobs` as custom skill web API, while Azure OpenAI has no public network access, you need to set up Azure OpenAI to bypass Azure AI Search as a trusted service based on managed identity. Azure OpenAI identifies the traffic from your Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use the system assigned managed identity authentication to call the custom skill web API.
+To allow your Azure AI Search to call your Azure OpenAI embedding model while Azure OpenAI has no public network access, you need to set up Azure OpenAI to bypass Azure AI Search as a trusted service based on managed identity. Azure OpenAI identifies the traffic from your Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use the system assigned managed identity authentication to call the embedding endpoint.

 Set `networkAcls.bypass` as `AzureServices` from the management API. For more information, see [Virtual networks article](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#grant-access-to-trusted-azure-services-for-azure-openai).
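In the account body sent to the management API, the `networkAcls.bypass` setting from the context line above sits under `properties`; a minimal fragment with the surrounding account properties elided:

```json
{
  "properties": {
    "networkAcls": {
      "defaultAction": "Deny",
      "bypass": "AzureServices"
    }
  }
}
```

With `defaultAction` set to `Deny`, only traffic allowed by the bypass (trusted Azure services, here Azure AI Search via its managed identity) reaches the resource.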

@@ -268,7 +260,7 @@ So far you have already setup each resource work independently. Next you need to
 | `Search Index Data Reader` | Azure OpenAI | Azure AI Search | Inference service queries the data from the index. |
 | `Search Service Contributor` | Azure OpenAI | Azure AI Search | Inference service queries the index schema for auto fields mapping. Data ingestion service creates index, data sources, skill set, indexer, and queries the indexer status. |
 | `Storage Blob Data Contributor` | Azure OpenAI | Storage Account | Reads from the input container, and writes the preprocessed result to the output container. |
-| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Custom skill. |
+| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Allows the Azure AI Search resource access to the Azure OpenAI embedding endpoint. |
 | `Storage Blob Data Reader` | Azure AI Search | Storage Account | Reads document blobs and chunk blobs. |
 | `Reader` | Azure AI Foundry Project | Azure Storage Private Endpoints (Blob & File) | Read search indexes created in blob storage within an Azure AI Foundry Project. |
 | `Cognitive Services OpenAI User` | Web app | Azure OpenAI | Inference. |
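The role-assignment table above is easy to mis-transcribe when automating setup. A minimal Python sketch that captures it as a plain data structure for review or scripting (this mirrors the table only; it is not an Azure SDK call):

```python
# Role assignments from the table: (role, assignee identity, scope resource).
ROLE_ASSIGNMENTS = [
    ("Search Index Data Reader", "Azure OpenAI", "Azure AI Search"),
    ("Search Service Contributor", "Azure OpenAI", "Azure AI Search"),
    ("Storage Blob Data Contributor", "Azure OpenAI", "Storage Account"),
    ("Cognitive Services OpenAI Contributor", "Azure AI Search", "Azure OpenAI"),
    ("Storage Blob Data Reader", "Azure AI Search", "Storage Account"),
    ("Reader", "Azure AI Foundry Project",
     "Azure Storage Private Endpoints (Blob & File)"),
    ("Cognitive Services OpenAI User", "Web app", "Azure OpenAI"),
]

def roles_for_assignee(assignee: str) -> list[str]:
    """Return every role the given identity needs, per the table."""
    return [role for role, who, _scope in ROLE_ASSIGNMENTS if who == assignee]

print(roles_for_assignee("Azure AI Search"))
# → ['Cognitive Services OpenAI Contributor', 'Storage Blob Data Reader']
```

Feeding each tuple to your provisioning tool of choice (for example, one `az role assignment create` call per row) keeps the deployed permissions in lockstep with this table.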
(Third changed file: binary asset, 22.1 KB; diff not rendered.)