
Commit f9aa01e

Merge pull request #3012 from MicrosoftDocs/main
2/18/2025 PM Publish
2 parents afa1213 + fc96134 commit f9aa01e

File tree

15 files changed (+149, −42 lines)

articles/ai-services/agents/how-to/tools/bing-grounding.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-agent-service
 ms.topic: how-to
-ms.date: 01/07/2025
+ms.date: 02/18/2025
 author: aahill
 ms.author: aahi
 zone_pivot_groups: selection-bing-grounding

articles/ai-services/content-safety/concepts/protected-material.md

Lines changed: 1 addition & 2 deletions
@@ -21,8 +21,7 @@ The [Protected material text API](../quickstart-protected-material.md) flags kno
 
 The [Protected material code API](../quickstart-protected-material-code.md) flags protected code content (from known GitHub repositories, including software libraries, source code, algorithms, and other proprietary programming content) that might be output by large language models.
 
-> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+[!INCLUDE [content-safety-code-indexer](../includes/code-indexer.md)]
 
 By detecting and preventing the display of protected material, organizations can ensure compliance with intellectual property laws, maintain content originality, and protect their reputations.

articles/ai-services/content-safety/includes/code-indexer.md

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+---
+title: "Code indexer"
+author: PatrickFarley
+manager: nitinme
+ms.service: azure-ai-content-safety
+ms.custom:
+ms.topic: include
+ms.date: 02/18/2025
+ms.author: pafarley
+---
+
+
+> [!CAUTION]
+> The content safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.

articles/ai-services/content-safety/quickstart-protected-material-code.md

Lines changed: 1 addition & 2 deletions
@@ -15,8 +15,7 @@ ms.author: pafarley
 
 The Protected Material for Code feature provides a comprehensive solution for identifying AI outputs that match code from existing GitHub repositories. This feature allows code generation models to be used confidently, in a way that enhances transparency to end users and promotes compliance with organizational policies.
 
-> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+[!INCLUDE [content-safety-code-indexer](./includes/code-indexer.md)]
 
 The key objectives of the Protected Material Detection for Code feature for AI-generated code are:
articles/ai-services/content-safety/whats-new.md

Lines changed: 1 addition & 2 deletions
@@ -34,8 +34,7 @@ The Multimodal API analyzes materials containing both image content and text con
 
 The Protected material code API flags protected code content (from known GitHub repositories, including software libraries, source code, algorithms, and other proprietary programming content) that might be output by large language models. Follow the [quickstart](./quickstart-protected-material-code.md) to get started.
 
-> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+[!INCLUDE [content-safety-code-indexer](./includes/code-indexer.md)]
 
 ### Groundedness correction (preview)

articles/ai-services/openai/concepts/models.md

Lines changed: 16 additions & 4 deletions
@@ -37,7 +37,7 @@ The Azure OpenAI o<sup>&#42;</sup> series models are specifically designed to ta
 | `o3-mini` (2025-01-31) | The latest reasoning model, offering [enhanced reasoning abilities](../how-to/reasoning.md). <br> - Structured outputs<br> - Text-only processing <br> - Functions/Tools <br> <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** | Input: 200,000 <br> Output: 100,000 | Oct 2023 |
 | `o1` (2024-12-17) | The most capable model in the o1 series, offering [enhanced reasoning abilities](../how-to/reasoning.md). <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** | Input: 200,000 <br> Output: 100,000 | Oct 2023 |
 | `o1-preview` (2024-09-12) | Older preview version | Input: 128,000 <br> Output: 32,768 | Oct 2023 |
-| `o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption. <br> Global standard deployment available by default <br> For standard deployments, **Request access: [limited access model application](https://aka.ms/OAI/o1access)** | Input: 128,000 <br> Output: 65,536 | Oct 2023 |
+| `o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption. <br><br> Global standard deployment available by default. <br><br> Standard (regional) deployments are currently only available for select customers who received access as part of the `o1-preview` limited access release. | Input: 128,000 <br> Output: 65,536 | Oct 2023 |
 
 ### Availability

@@ -55,7 +55,7 @@ To learn more about the advanced `o-series` models see, [getting started with re
 |---|---|
 | `o3-mini` | East US2 (Global Standard) <br> Sweden Central (Global Standard) |
 | `o1` | East US2 (Global Standard) <br> Sweden Central (Global Standard) |
-| `o1-preview` | See the [models table](#model-summary-table-and-region-availability). |
+| `o1-preview` | See the [models table](#model-summary-table-and-region-availability). This model is only available for customers who were granted access as part of the original limited access release. |
 | `o1-mini` | See the [models table](#model-summary-table-and-region-availability). |
 
 ## GPT-4o audio

@@ -221,6 +221,11 @@ All deployments can perform the exact same inference operations, however the bil
 
 [!INCLUDE [Standard Global](../includes/model-matrix/standard-global.md)]
 
+> [!NOTE]
+> **Most o-series models are limited access**. Request access: [limited access model application](https://aka.ms/OAI/o1access). `o1-mini` is currently available to all customers for global standard deployment.
+>
+> Select customers were granted standard (regional) deployment access to `o1-mini` as part of the `o1-preview` limited access release. At this time access to `o1-mini` standard (regional) deployments is not being expanded.
+
 # [Global Provisioned Managed](#tab/global-ptum)
 
 ### Global provisioned managed model availability

@@ -257,7 +262,11 @@ All deployments can perform the exact same inference operations, however the bil
 
 [!INCLUDE [Standard Models](../includes/model-matrix/standard-models.md)]
 
-**o-series models require registration for standard deployments**. Request access: [limited access model application](https://aka.ms/OAI/o1access)
+> [!NOTE]
+> **Most o-series models are limited access**. Request access: [limited access model application](https://aka.ms/OAI/o1access). `o1-mini` is currently available to all customers for global standard deployment.
+>
+> Select customers were granted standard (regional) deployment access to `o1-mini` as part of the `o1-preview` limited access release. At this time access to `o1-mini` standard (regional) deployments is not being expanded.
 
 # [Provisioned Managed](#tab/provisioned)

@@ -282,7 +291,10 @@ This table doesn't include fine-tuning regional availability information. Consu
 
 [!INCLUDE [Chat Completions](../includes/model-matrix/standard-chat-completions.md)]
 
-**o-series models require registration for standard deployments**. Request access: [limited access model application](https://aka.ms/OAI/o1access)
+> [!NOTE]
+> **Most o-series models are limited access**. Request access: [limited access model application](https://aka.ms/OAI/o1access). `o1-mini` is currently available to all customers for global standard deployment.
+>
+> Select customers were granted standard (regional) deployment access to `o1-mini` as part of the `o1-preview` limited access release. At this time access to `o1-mini` standard (regional) deployments is not being expanded.
 
 ### GPT-4 and GPT-4 Turbo model availability

articles/ai-services/openai/concepts/use-your-data.md

Lines changed: 5 additions & 5 deletions
@@ -41,11 +41,6 @@ Typically, the development process you'd use with Azure OpenAI On Your Data is:
 
 To get started, [connect your data source](../use-your-data-quickstart.md) using Azure AI Foundry portal and start asking questions and chatting on your data.
 
-> [!NOTE]
-> The following models are not supported by Azure OpenAI On Your Data:
-> * o1 models
-> * o3 models
-
 ## Azure Role-based access controls (Azure RBAC) for adding data sources
 
 To use Azure OpenAI On Your Data fully, you need to set one or more Azure RBAC roles. See [Azure OpenAI On Your Data configuration](../how-to/on-your-data-configuration.md#role-assignments) for more information.

@@ -719,6 +714,11 @@ Each user message can translate to multiple search queries, all of which get sen
 
 ## Regional availability and model support
 
+> [!NOTE]
+> The following models are not supported by Azure OpenAI On Your Data:
+> * o1 models
+> * o3 models
+
 | Region | `gpt-35-turbo-16k (0613)` | `gpt-35-turbo (1106)` | `gpt-4-32k (0613)` | `gpt-4 (1106-preview)` | `gpt-4 (0125-preview)` | `gpt-4 (0613)` | `gpt-4o`\*\* | `gpt-4 (turbo-2024-04-09)` |
 |------|---|---|---|---|---|----|----|----|
 | Australia East ||||| || | |

articles/ai-services/openai/how-to/code-interpreter.md

Lines changed: 4 additions & 3 deletions
@@ -164,9 +164,10 @@ curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2
   "tools": [
     { "type": "code_interpreter" }
   ],
-  "model": "gpt-4-1106-preview",
-  "tool_resources"{
-    "code interpreter": {
+  "name": "Assistants playground",
+  "model": "Replace it with your-custom-model-deployment-name",
+  "tool_resources": {
+    "code_interpreter": {
     "file_ids": ["assistant-1234"]
     }
   }
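The hunk above fixes two JSON bugs in the assistant-creation snippet: a missing colon after `"tool_resources"` and a space instead of an underscore in `"code_interpreter"`. Assembled as a complete payload, the corrected body looks like this (a sketch; the deployment name is a placeholder you must replace with your own):

```python
import json

# The corrected assistant-creation body from the diff above, assembled as a
# complete payload. "your-custom-model-deployment-name" is a placeholder.
payload = {
    "name": "Assistants playground",
    "model": "your-custom-model-deployment-name",
    "tools": [{"type": "code_interpreter"}],
    "tool_resources": {
        # Corrected key: "code_interpreter" with an underscore, plus the
        # ":" after "tool_resources" that the old snippet dropped.
        "code_interpreter": {"file_ids": ["assistant-1234"]}
    },
}

body = json.dumps(payload)
```

Because the old snippet was not valid JSON, the request would have been rejected before reaching the Assistants API; round-tripping the dict through `json.dumps` is a cheap way to confirm the fixed body serializes cleanly.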

articles/ai-services/openai/how-to/on-your-data-configuration.md

Lines changed: 4 additions & 12 deletions
@@ -29,16 +29,8 @@ When you use Azure OpenAI On Your Data to ingest data from Azure blob storage, l
 
 * Steps 1 and 2 are only used for file upload.
 * Downloading URLs to your blob storage is not illustrated in this diagram. After web pages are downloaded from the internet and uploaded to blob storage, steps 3 onward are the same.
-* Two indexers, two indexes, two data sources and a [custom skill](/azure/search/cognitive-search-custom-skill-interface) are created in the Azure AI Search resource.
-* The chunks container is created in the blob storage.
-* If the schedule triggers the ingestion, the ingestion process starts from step 7.
-* Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search customer skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue.
-* Azure OpenAI:
-    1. Internally uses the first indexer created earlier to crack the documents.
-    1. Uses a heuristic-based algorithm to perform chunking. It honors table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
-    1. If you choose to enable vector search, Azure OpenAI uses the selected embedding setting to vectorize the chunks.
-* When all the data that the service is monitoring are processed, Azure OpenAI triggers the second indexer.
-* The indexer stores the processed data into an Azure AI Search service.
+* One indexer, one index, and one data source in the Azure AI Search resource are created using prebuilt skills and [integrated vectorization](/azure/search/vector-search-integrated-vectorization.md).
+* Azure AI Search handles the extraction, chunking, and vectorization of chunked documents through integrated vectorization. If a scheduling interval is specified, the indexer will run accordingly.
 
 For the managed identities used in service calls, only system assigned managed identities are supported. User assigned managed identities aren't supported.

@@ -167,7 +159,7 @@ To set the managed identities via the management API, see [the management API re
 
 ### Enable trusted service
 
-To allow your Azure AI Search to call your Azure OpenAI `preprocessing-jobs` as custom skill web API, while Azure OpenAI has no public network access, you need to set up Azure OpenAI to bypass Azure AI Search as a trusted service based on managed identity. Azure OpenAI identifies the traffic from your Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use the system assigned managed identity authentication to call the custom skill web API.
+To allow your Azure AI Search to call your Azure OpenAI embedding model while Azure OpenAI has no public network access, you need to set up Azure OpenAI to bypass Azure AI Search as a trusted service based on managed identity. Azure OpenAI identifies the traffic from your Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use the system assigned managed identity authentication to call the embedding endpoint.
 
 Set `networkAcls.bypass` as `AzureServices` from the management API. For more information, see [Virtual networks article](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#grant-access-to-trusted-azure-services-for-azure-openai).
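The `networkAcls.bypass` setting described above can be sketched as the account-properties fragment you would send through the management API. This is a hedged sketch: the property names follow the Cognitive Services account schema, and the `defaultAction` value shown is an assumption about your existing network rules.

```python
import json

# Sketch of the account-properties fragment that enables the trusted-service
# bypass described above. "defaultAction": "Deny" assumes public network
# access is already blocked; the "bypass" field is the setting being made.
account_update = {
    "properties": {
        "networkAcls": {
            "defaultAction": "Deny",
            "bypass": "AzureServices",
        }
    }
}

body = json.dumps(account_update)
```

With `bypass` set to `AzureServices`, traffic from trusted Azure services such as your Azure AI Search resource (authenticated via its system assigned managed identity) is allowed through even while `defaultAction` blocks public access.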

@@ -268,7 +260,7 @@ So far you have already setup each resource work independently. Next you need to
 | `Search Index Data Reader` | Azure OpenAI | Azure AI Search | Inference service queries the data from the index. |
 | `Search Service Contributor` | Azure OpenAI | Azure AI Search | Inference service queries the index schema for auto fields mapping. Data ingestion service creates index, data sources, skill set, indexer, and queries the indexer status. |
 | `Storage Blob Data Contributor` | Azure OpenAI | Storage Account | Reads from the input container, and writes the preprocessed result to the output container. |
-| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Custom skill. |
+| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Allows the Azure AI Search resource access to the Azure OpenAI embedding endpoint. |
 | `Storage Blob Data Reader` | Azure AI Search | Storage Account | Reads document blobs and chunk blobs. |
 | `Reader` | Azure AI Foundry Project | Azure Storage Private Endpoints (Blob & File) | Read search indexes created in blob storage within an Azure AI Foundry Project. |
 | `Cognitive Services OpenAI User` | Web app | Azure OpenAI | Inference. |

articles/ai-services/openai/how-to/reasoning.md

Lines changed: 2 additions & 2 deletions
@@ -36,8 +36,8 @@ Request access: [limited access model application](https://aka.ms/OAI/o1access)
 |---|---|---|
 | `o3-mini` | East US2 (Global Standard) <br> Sweden Central (Global Standard) | [Limited access model application](https://aka.ms/OAI/o1access) |
 | `o1` | East US2 (Global Standard) <br> Sweden Central (Global Standard) | [Limited access model application](https://aka.ms/OAI/o1access) |
-| `o1-preview` | See [models page](../concepts/models.md#global-standard-model-availability). | [Limited access model application](https://aka.ms/OAI/o1access) |
-| `o1-mini` | See [models page](../concepts/models.md#global-standard-model-availability). | No access request needed for Global Standard deployments<br>Standard (regional) deployments require: [Limited access model application](https://aka.ms/OAI/o1access) |
+| `o1-preview` | See [models page](../concepts/models.md#global-standard-model-availability). | This model is only available for customers who were granted access as part of the original limited access release. We're currently not expanding access to `o1-preview`. |
+| `o1-mini` | See [models page](../concepts/models.md#global-standard-model-availability). | No access request needed for Global Standard deployments.<br><br>Standard (regional) deployments are currently only available to select customers who were previously granted access as part of the `o1-preview` release. |
 
 ## API & feature support

0 commit comments
