Commit 553a32f

Merge pull request #6319 from eric-urban/eur/context-switch-openai
[SCOPED] openai folder moved to ai-foundry
2 parents: 923553e + bbab5cf

97 files changed: 238 additions and 238 deletions


articles/ai-foundry/.openpublishing.redirection.ai-studio.json

Lines changed: 4 additions & 4 deletions
@@ -467,7 +467,7 @@
   },
   {
     "source_path_from_root": "/articles/ai-foundry/foundry-models/supported-languages-openai.md",
-    "redirect_url": "/azure/ai-services/openai/supported-languages",
+    "redirect_url": "/azure/ai-foundry/openai/supported-languages",
     "redirect_document_id": false
   },
   {
@@ -945,8 +945,8 @@
   },
   {
     "source_path_from_root": "/articles/ai-studio/quickstarts/assistants.md",
-    "redirect_url": "/azure/ai-services/openai/assistants-quickstart",
-    "redirect_document_id": true
+    "redirect_url": "/azure/ai-foundry/openai/how-to/assistant",
+    "redirect_document_id": false
   },
   {
     "source_path_from_root": "/articles/ai-studio/how-to/prompt-flow-tools/vector-db-lookup-tool.md",
@@ -1110,7 +1110,7 @@
   },
   {
     "source_path_from_root": "/articles/ai-studio/quickstarts/multimodal-vision.md",
-    "redirect_url": "/azure/ai-services/openai/gpt-v-quickstart",
+    "redirect_url": "/azure/ai-foundry/openai/gpt-v-quickstart",
     "redirect_document_id": false
  },
  {
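
For reference, the updated assistants entry in the redirection file reads as follows after this commit (reconstructed from the hunk above; note that both redirect_url and redirect_document_id change in this entry):

{
  "source_path_from_root": "/articles/ai-studio/quickstarts/assistants.md",
  "redirect_url": "/azure/ai-foundry/openai/how-to/assistant",
  "redirect_document_id": false
}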

articles/ai-foundry/concepts/ai-resources.md

Lines changed: 1 addition & 1 deletion
@@ -90,6 +90,6 @@ If not provided by you, the following dependent resources are automatically crea
 ## Next steps
 
 - [Create a [!INCLUDE [hub-project-name](../includes/hub-project-name.md)]](../how-to/create-projects.md?pivots=hub-project)
-- [Quickstart: Analyze images and video in the chat playground](/azure/ai-services/openai/gpt-v-quickstart)
+- [Quickstart: Analyze images and video in the chat playground](/azure/ai-foundry/openai/gpt-v-quickstart)
 - [Learn more about Azure AI Foundry](../what-is-azure-ai-foundry.md)
 - [Learn more about projects](../how-to/create-projects.md?pivots=hub-project)

articles/ai-foundry/concepts/foundry-models-overview.md

Lines changed: 1 addition & 1 deletion
@@ -236,7 +236,7 @@ To set the public network access flag for the Azure AI Foundry hub:
 
 * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing serverless API deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
 
-* Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for serverless API deployments in private hubs, because private hubs have the public network access flag disabled.
+* Currently, [Azure OpenAI On Your Data](/azure/ai-foundry/openai/concepts/use-your-data) support isn't available for serverless API deployments in private hubs, because private hubs have the public network access flag disabled.
 
 * Any network configuration change (for example, enabling or disabling the public network access flag) might take up to five minutes to propagate.
 

articles/ai-foundry/concepts/model-benchmarks.md

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ Performance metrics are calculated as an aggregate over 14 days, based on 24 tra
 
 | Parameter | Value | Applicable For |
 |-----------|-------|----------------|
-| Region | East US/East US2 | [serverless API deployments](../how-to/model-catalog-overview.md#serverless-api-deployment-pay-per-token-offer-billing) and [Azure OpenAI](/azure/ai-services/openai/overview) |
+| Region | East US/East US2 | [serverless API deployments](../how-to/model-catalog-overview.md#serverless-api-deployment-pay-per-token-offer-billing) and [Azure OpenAI](/azure/ai-foundry/openai/overview) |
 | Tokens per minute (TPM) rate limit | 30k (180 RPM based on Azure OpenAI) for non-reasoning and 100k for reasoning models <br> N/A (serverless API deployments) | For Azure OpenAI models, selection is available for users with rate limit ranges based on deployment type (serverless API, global, global standard, and so on.) <br> For serverless API deployments, this setting is abstracted. |
 | Number of requests | Two requests in a trail for every hour (24 trails per day) | serverless API deployments, Azure OpenAI |
 | Number of trails/runs | 14 days with 24 trails per day for 336 runs | serverless API deployments, Azure OpenAI |

articles/ai-foundry/concepts/rbac-azure-ai-foundry.md

Lines changed: 3 additions & 3 deletions
@@ -590,7 +590,7 @@ When using Microsoft Entra ID authenticated connections in the chat playground,
 
 ## Scenario: Use an existing Azure OpenAI resource
 
-When you create a connection to an existing Azure OpenAI resource, you must also assign roles to your users so they can access the resource. You should assign either the **Cognitive Services OpenAI User** or **Cognitive Services OpenAI Contributor** role, depending on the tasks they need to perform. For information on these roles and the tasks they enable, see [Azure OpenAI roles](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles).
+When you create a connection to an existing Azure OpenAI resource, you must also assign roles to your users so they can access the resource. You should assign either the **Cognitive Services OpenAI User** or **Cognitive Services OpenAI Contributor** role, depending on the tasks they need to perform. For information on these roles and the tasks they enable, see [Azure OpenAI roles](/azure/ai-foundry/openai/how-to/role-based-access-control#azure-openai-roles).
 
 ## Scenario: Use Azure Container Registry
 
@@ -615,7 +615,7 @@ Azure Application Insights is an optional dependency for Azure AI Foundry hub. T
 
 ## Scenario: Provisioned throughput unit procurer
 
-The following example defines a custom role that can procure [provisioned throughput units (PTU)](/azure/ai-services/openai/concepts/provisioned-throughput).
+The following example defines a custom role that can procure [provisioned throughput units (PTU)](/azure/ai-foundry/openai/concepts/provisioned-throughput).
 
 ```json
 {
@@ -659,7 +659,7 @@ The following example defines a custom role that can procure [provisioned throug
 
 ## Scenario: Azure OpenAI Assistants API
 
-The following example defines a role for a developer using [Azure OpenAI Assistants](/azure/ai-services/openai/how-to/assistant).
+The following example defines a role for a developer using [Azure OpenAI Assistants](/azure/ai-foundry/openai/how-to/assistant).
 
 ```json
 {
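
The hunks above end where each role definition begins, so the JSON bodies aren't shown in this diff. For orientation only, an Azure custom role definition has this general shape; this is a generic skeleton with placeholder values, not the specific roles defined in the article:

{
  "Name": "<custom role name>",
  "IsCustom": true,
  "Description": "<what the role lets a user do>",
  "Actions": [
    "<management operations the role grants>"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}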

articles/ai-foundry/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ metadata:
 author: sdgilley
 title: Azure AI Foundry frequently asked questions
 summary: |
-  FAQ for [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs). If you can't find answers to your questions in this document, and still need help check the [Azure AI services support options guide](../ai-services/cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context). Azure OpenAI is part of Azure AI services.
+  FAQ for [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs). If you can't find answers to your questions in this document, and still need help check the [Azure AI services support options guide](../ai-services/cognitive-services-support-options.md?context=/azure/ai-foundry/openai/context/context). Azure OpenAI is part of Azure AI services.
 sections:
 - name: General questions
   questions:

articles/ai-foundry/foundry-models/concepts/deployment-types.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ Key use cases include:
 
 Data zone standard deployments are available in the same Azure AI Foundry resource as all other AI Foundry Models deployment types but allow you to leverage Azure global infrastructure to dynamically route traffic to the data center within the Microsoft defined data zone with the best availability for each request. Data zone standard provides higher default quotas than our Azure geography-based deployment types.
 
-Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [Quotas and limits](/azure/ai-services/openai/quotas-limits#usage-tiers) page to learn more. For workloads that require low latency variance at large volume, we recommend leveraging the provisioned deployment offerings.
+Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [Quotas and limits](/azure/ai-foundry/openai/quotas-limits#usage-tiers) page to learn more. For workloads that require low latency variance at large volume, we recommend leveraging the provisioned deployment offerings.
 
 ## Data zone provisioned
 
@@ -110,7 +110,7 @@ Standard deployments are optimized for low to medium volume workloads with high
 
 **SKU name in code:** `ProvisionedManaged`
 
-Provisioned deployments allow you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU. Learn more from our [Provisioned throughput concepts article](/azure/ai-services/openai/concepts/provisioned-throughput).
+Provisioned deployments allow you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU. Learn more from our [Provisioned throughput concepts article](/azure/ai-foundry/openai/concepts/provisioned-throughput).
 
 
 ## Control deployment options
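
As context for the `ProvisionedManaged` SKU named in this hunk, a provisioned deployment is typically declared as an ARM resource along the following lines. This sketch isn't part of the commit; the placeholder account and deployment names, the model, version, capacity, and API version are illustrative only:

{
  "type": "Microsoft.CognitiveServices/accounts/deployments",
  "apiVersion": "2023-05-01",
  "name": "<account-name>/<deployment-name>",
  "sku": {
    "name": "ProvisionedManaged",
    "capacity": 50
  },
  "properties": {
    "model": {
      "format": "OpenAI",
      "name": "gpt-4o",
      "version": "2024-08-06"
    }
  }
}

Here sku.capacity is the number of PTUs allocated to the deployment.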

articles/ai-foundry/foundry-models/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ metadata:
 author: santiagxf
 title: Foundry Models frequently asked questions
 summary: |
-  If you can't find answers to your questions in this document, and still need help check the [Azure AI Foundry services (formerly known Azure AI Services) support options guide](../../ai-services/cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context).
+  If you can't find answers to your questions in this document, and still need help check the [Azure AI Foundry services (formerly known Azure AI Services) support options guide](../../ai-services/cognitive-services-support-options.md?context=/azure/ai-foundry/openai/context/context).
 sections:
 - name: General
   questions:

articles/ai-foundry/foundry-models/how-to/quickstart-github-models.md

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ Azure AI Foundry Models supports additional features not available in GitHub Mod
 * Configure [content filtering](../../model-inference/how-to/configure-content-filters.md).
 * Configure rate limiting (for specific models).
 * Explore additional [deployment SKUs (for specific models)](../../model-inference/concepts/deployment-types.md).
-* Configure [private networking](../../../ai-services/cognitive-services-virtual-networks.md?context=/azure/ai-services/openai/context/context).
+* Configure [private networking](../../../ai-services/cognitive-services-virtual-networks.md?context=/azure/ai-foundry/openai/context/context).
 
 ## Got troubles?
 

articles/ai-foundry/how-to/connections-add.md

Lines changed: 1 addition & 1 deletion
@@ -142,7 +142,7 @@ For more on how to set private endpoints to your connected resources, see the fo
 | Azure Storage | [Use private endpoints](/azure/storage/common/storage-private-endpoints) |
 | Azure Cosmos DB | [Configure Azure Private Link for Azure Cosmos DB](/azure/cosmos-db/how-to-configure-private-endpoints?tabs=arm-bicep) |
 | Azure AI Search | [Create a private endpoint for a secure connection](/azure/search/service-create-private-endpoint) |
-| Azure OpenAI | [Securing Azure OpenAI inside a virtual network with private endpoints](/azure/ai-services/openai/how-to/network) |
+| Azure OpenAI | [Securing Azure OpenAI inside a virtual network with private endpoints](/azure/ai-foundry/openai/how-to/network) |
 | Application Insights | [Use Azure Private Link to connect networks to Azure Monitor](/azure/azure-monitor/logs/private-link-security) |
 