**articles/ai-services/content-safety/concepts/protected-material.md** (1 addition, 2 deletions)
```diff
@@ -21,8 +21,7 @@ The [Protected material text API](../quickstart-protected-material.md) flags kno
 The [Protected material code API](../quickstart-protected-material-code.md) flags protected code content (from known GitHub repositories, including software libraries, source code, algorithms, and other proprietary programming content) that might be output by large language models.
 
-> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
 By detecting and preventing the display of protected material, organizations can ensure compliance with intellectual property laws, maintain content originality, and protect their reputations.
+> The content safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
```
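The Protected material code API is invoked over REST. As a hedged sketch, the request might be assembled as follows; the route name, API version, and body shape here are assumptions based on the preview REST surface, so confirm them against the linked quickstart before use:

```python
import json

# Assumed preview API version; check the quickstart for the current value.
API_VERSION = "2024-09-15-preview"

def build_protected_code_request(endpoint: str, code_snippet: str) -> tuple[str, str]:
    """Build the URL and JSON body for a protected-material-for-code check.

    `endpoint` is the Content Safety resource endpoint, e.g.
    https://<resource>.cognitiveservices.azure.com (placeholder).
    """
    url = (f"{endpoint}/contentsafety/text:detectProtectedMaterialForCode"
           f"?api-version={API_VERSION}")
    body = json.dumps({"code": code_snippet})
    return url, body

# Hypothetical usage: POST `body` to `url` with your resource key in the
# Ocp-Apim-Subscription-Key header.
url, body = build_protected_code_request(
    "https://my-resource.cognitiveservices.azure.com",
    "def fizzbuzz(n): ...",
)
```

The response indicates whether the submitted code matches known GitHub repository content; given the indexer cutoff noted above, recently published code may not be flagged.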
**articles/ai-services/content-safety/quickstart-protected-material-code.md** (1 addition, 2 deletions)
```diff
@@ -15,8 +15,7 @@ ms.author: pafarley
 
 The Protected Material for Code feature provides a comprehensive solution for identifying AI outputs that match code from existing GitHub repositories. This feature allows code generation models to be used confidently, in a way that enhances transparency to end users and promotes compliance with organizational policies.
 
-> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+> The content safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
```
**articles/ai-services/content-safety/whats-new.md** (1 addition, 2 deletions)
```diff
@@ -34,8 +34,7 @@ The Multimodal API analyzes materials containing both image content and text con
 The Protected material code API flags protected code content (from known GitHub repositories, including software libraries, source code, algorithms, and other proprietary programming content) that might be output by large language models. Follow the [quickstart](./quickstart-protected-material-code.md) to get started.
 
-> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+> The content safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
```
```diff
 |`o1` (2024-12-17) | The most capable model in the o1 series, offering [enhanced reasoning abilities](../how-to/reasoning.md). <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)**| Input: 200,000 <br> Output: 100,000 | Oct 2023 |
 |`o1-preview` (2024-09-12) | Older preview version | Input: 128,000 <br> Output: 32,768 | Oct 2023 |
-|`o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption. <br> Global standard deployment available by default <br> For standard deployments, **Request access: [limited access model application](https://aka.ms/OAI/o1access)**| Input: 128,000 <br> Output: 65,536 | Oct 2023 |
+|`o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption. <br><br> Global standard deployment available by default. <br><br> Standard (regional) deployments are currently only available for select customers who received access as part of the `o1-preview` limited access release.| Input: 128,000 <br> Output: 65,536 | Oct 2023 |
 
 ### Availability
@@ -55,7 +55,7 @@ To learn more about the advanced `o-series` models see, [getting started with re
 |---|---|
 |`o3-mini`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |
 |`o1`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |
-|`o1-preview`| See the [models table](#model-summary-table-and-region-availability). |
+|`o1-preview`| See the [models table](#model-summary-table-and-region-availability). This model is only available for customers who were granted access as part of the original limited access release. |
 |`o1-mini`| See the [models table](#model-summary-table-and-region-availability). |
 
 ## GPT-4o audio
@@ -221,6 +221,11 @@ All deployments can perform the exact same inference operations, however the bil
 > **Most o-series models are limited access**. Request access: [limited access model application](https://aka.ms/OAI/o1access). `o1-mini` is currently available to all customers for global standard deployment.
+>
+> Select customers were granted standard (regional) deployment access to `o1-mini` as part of the `o1-preview` limited access release. At this time access to `o1-mini` standard (regional) deployments is not being expanded.
+
 # [Global Provisioned Managed](#tab/global-ptum)
 
 ### Global provisioned managed model availability
@@ -257,7 +262,11 @@ All deployments can perform the exact same inference operations, however the bil
 **o-series models require registration for standard deployments**. Request access: [limited access model application](https://aka.ms/OAI/o1access)
+> [!NOTE]
+> **Most o-series models are limited access**. Request access: [limited access model application](https://aka.ms/OAI/o1access). `o1-mini` is currently available to all customers for global standard deployment.
+>
+> Select customers were granted standard (regional) deployment access to `o1-mini` as part of the `o1-preview` limited access release. At this time access to `o1-mini` standard (regional) deployments is not being expanded.
+
 # [Provisioned Managed](#tab/provisioned)
@@ -282,7 +291,10 @@ This table doesn't include fine-tuning regional availability information. Consu
 **o-series models require registration for standard deployments**. Request access: [limited access model application](https://aka.ms/OAI/o1access)
+> [!NOTE]
+> **Most o-series models are limited access**. Request access: [limited access model application](https://aka.ms/OAI/o1access). `o1-mini` is currently available to all customers for global standard deployment.
+>
+> Select customers were granted standard (regional) deployment access to `o1-mini` as part of the `o1-preview` limited access release. At this time access to `o1-mini` standard (regional) deployments is not being expanded.
```
**articles/ai-services/openai/concepts/use-your-data.md** (5 additions, 5 deletions)
```diff
@@ -41,11 +41,6 @@ Typically, the development process you'd use with Azure OpenAI On Your Data is:
 
 To get started, [connect your data source](../use-your-data-quickstart.md) using Azure AI Foundry portal and start asking questions and chatting on your data.
 
-> [!NOTE]
-> The following models are not supported by Azure OpenAI On Your Data:
-> * o1 models
-> * o3 models
-
 ## Azure Role-based access controls (Azure RBAC) for adding data sources
 
 To use Azure OpenAI On Your Data fully, you need to set one or more Azure RBAC roles. See [Azure OpenAI On Your Data configuration](../how-to/on-your-data-configuration.md#role-assignments) for more information.
@@ -719,6 +714,11 @@ Each user message can translate to multiple search queries, all of which get sen
 
 ## Regional availability and model support
 
+> [!NOTE]
+> The following models are not supported by Azure OpenAI On Your Data:
+> * o1 models
+> * o3 models
+
 | Region |`gpt-35-turbo-16k (0613)`|`gpt-35-turbo (1106)`|`gpt-4-32k (0613)`|`gpt-4 (1106-preview)`|`gpt-4 (0125-preview)`|`gpt-4 (0613)`|`gpt-4o`\*\*|`gpt-4 (turbo-2024-04-09)`|
```
**articles/ai-services/openai/how-to/on-your-data-configuration.md** (4 additions, 12 deletions)
```diff
@@ -29,16 +29,8 @@ When you use Azure OpenAI On Your Data to ingest data from Azure blob storage, l
 
 * Steps 1 and 2 are only used for file upload.
 * Downloading URLs to your blob storage is not illustrated in this diagram. After web pages are downloaded from the internet and uploaded to blob storage, steps 3 onward are the same.
-* Two indexers, two indexes, two data sources and a [custom skill](/azure/search/cognitive-search-custom-skill-interface) are created in the Azure AI Search resource.
-* The chunks container is created in the blob storage.
-* If the schedule triggers the ingestion, the ingestion process starts from step 7.
-* Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search customer skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue.
-* Azure OpenAI:
-    1. Internally uses the first indexer created earlier to crack the documents.
-    1. Uses a heuristic-based algorithm to perform chunking. It honors table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
-    1. If you choose to enable vector search, Azure OpenAI uses the selected embedding setting to vectorize the chunks.
-* When all the data that the service is monitoring are processed, Azure OpenAI triggers the second indexer.
-* The indexer stores the processed data into an Azure AI Search service.
+* One indexer, one index, and one data source in the Azure AI Search resource are created using prebuilt skills and [integrated vectorization](/azure/search/vector-search-integrated-vectorization.md).
+* Azure AI Search handles the extraction, chunking, and vectorization of documents through integrated vectorization. If a scheduling interval is specified, the indexer runs accordingly.
 
 For the managed identities used in service calls, only system assigned managed identities are supported. User assigned managed identities aren't supported.
```
```diff
@@ -167,7 +159,7 @@ To set the managed identities via the management API, see [the management API re
 
 ### Enable trusted service
 
-To allow your Azure AI Search to call your Azure OpenAI `preprocessing-jobs` as custom skill web API, while Azure OpenAI has no public network access, you need to set up Azure OpenAI to bypass Azure AI Search as a trusted service based on managed identity. Azure OpenAI identifies the traffic from your Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use the system assigned managed identity authentication to call the custom skill web API.
+To allow your Azure AI Search to call your Azure OpenAI embedding model while Azure OpenAI has no public network access, you need to set up Azure OpenAI to bypass Azure AI Search as a trusted service based on managed identity. Azure OpenAI identifies the traffic from your Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use the system assigned managed identity authentication to call the embedding endpoint.
 
 Set `networkAcls.bypass` as `AzureServices` from the management API. For more information, see [Virtual networks article](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#grant-access-to-trusted-azure-services-for-azure-openai).
```
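In practice, the `networkAcls.bypass` change above amounts to patching the Azure OpenAI resource's properties through the management API. A sketch of the request payload follows; the field names and casing are assumptions drawn from the ARM resource schema, so verify them against the linked management API reference:

```json
{
  "properties": {
    "networkAcls": {
      "bypass": "AzureServices",
      "defaultAction": "Deny"
    }
  }
}
```

With `defaultAction` set to `Deny` and `bypass` set to `AzureServices`, public traffic is blocked while trusted Azure services such as your Azure AI Search resource can still reach the embedding endpoint.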
```diff
@@ -268,7 +260,7 @@ So far you have already setup each resource work independently. Next you need to
 |`Search Index Data Reader`| Azure OpenAI | Azure AI Search | Inference service queries the data from the index. |
 |`Search Service Contributor`| Azure OpenAI | Azure AI Search | Inference service queries the index schema for auto fields mapping. Data ingestion service creates index, data sources, skill set, indexer, and queries the indexer status. |
 |`Storage Blob Data Contributor`| Azure OpenAI | Storage Account | Reads from the input container, and writes the preprocessed result to the output container. |
+|`Cognitive Services OpenAI Contributor`| Azure AI Search | Azure OpenAI | Allows the Azure AI Search resource access to the Azure OpenAI embedding endpoint. |
 |`Storage Blob Data Reader`| Azure AI Search | Storage Account | Reads document blobs and chunk blobs. |
 |`Reader`| Azure AI Foundry Project | Azure Storage Private Endpoints (Blob & File) | Read search indexes created in blob storage within an Azure AI Foundry Project. |
```
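Role assignments like those in the table above can be granted with the Azure CLI. The following is a hedged sketch, not the documented procedure; every identifier is a placeholder you must substitute with your own subscription, resource group, and resource names:

```azurecli
# Sketch: grant the Azure OpenAI resource's system-assigned managed identity
# read access to the search index (first row of the table above).
# All <...> values are placeholders.
az role assignment create \
  --assignee-object-id "<openai-managed-identity-object-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Search Index Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service-name>"
```

Repeat the pattern for each row, swapping the assignee identity, role name, and target resource scope to match the table.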
**articles/ai-services/openai/how-to/reasoning.md** (2 additions, 2 deletions)
```diff
@@ -36,8 +36,8 @@ Request access: [limited access model application](https://aka.ms/OAI/o1access)
 |---|---|---|
 |`o3-mini`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |[Limited access model application](https://aka.ms/OAI/o1access)|
 |`o1`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |[Limited access model application](https://aka.ms/OAI/o1access)|
-|`o1-preview`| See [models page](../concepts/models.md#global-standard-model-availability). |[Limited access model application](https://aka.ms/OAI/o1access)|
-|`o1-mini`| See [models page](../concepts/models.md#global-standard-model-availability). | No access request needed for Global Standard deployments<br>Standard (regional) deployments require: [Limited access model application](https://aka.ms/OAI/o1access)|
+|`o1-preview`| See [models page](../concepts/models.md#global-standard-model-availability). |This model is only available for customers who were granted access as part of the original limited access release. We're currently not expanding access to `o1-preview`.|
+|`o1-mini`| See [models page](../concepts/models.md#global-standard-model-availability). | No access request needed for Global Standard deployments.<br><br>Standard (regional) deployments are currently only available to select customers who were previously granted access as part of the `o1-preview` release.|
```