Commit a4fe82e

Update links in deployment-types.md for accuracy
1 parent 41d4b3e commit a4fe82e

1 file changed: +3 -3 lines changed


articles/ai-foundry/foundry-models/concepts/deployment-types.md

Lines changed: 3 additions & 3 deletions
@@ -78,7 +78,7 @@ Global deployments are available in the same Azure AI Foundry resources as non-g
 > [!IMPORTANT]
 > Data stored at rest remains in the designated Azure geography. However, data might be processed for inferencing in any Azure AI Foundry location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

-[Global Batch](../../openai/batch.md) is designed to efficiently handle large-scale and high-volume processing tasks. You can process asynchronous groups of requests with separate quota and a 24-hour target turnaround, at [50% less cost than Global Standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). With batch processing, rather than sending one request at a time, you send a large number of requests in a single file. Global Batch requests have a separate enqueued token quota, which avoids any disruption of your online workloads.
+[Global Batch](../../openai/how-to/batch.md) is designed to efficiently handle large-scale and high-volume processing tasks. You can process asynchronous groups of requests with separate quota and a 24-hour target turnaround, at [50% less cost than Global Standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). With batch processing, rather than sending one request at a time, you send a large number of requests in a single file. Global Batch requests have a separate enqueued token quota, which avoids any disruption of your online workloads.

 Key use cases include:

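The Global Batch paragraph in the hunk above describes submitting many requests as a single file against a separate enqueued token quota, with a 24-hour target turnaround. The snippet below is a minimal illustrative sketch of that workflow, assuming the `openai` Python SDK's file and batch APIs; the resource endpoint, API key, API version, deployment name, and request URL path are placeholders to verify against the Global Batch guide that these links now point to.

```python
# Illustrative sketch only: enqueue a Global Batch job with the openai Python SDK.
# Endpoint, key, API version, deployment name, and URL path are placeholders --
# confirm the exact values in the Global Batch how-to before running.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",  # placeholder
    api_key="YOUR-API-KEY",                                    # placeholder
    api_version="2024-10-21",                                  # placeholder
)

# Each line of the input file is one self-contained chat completion request.
tasks = [
    {
        "custom_id": f"task-{i}",
        "method": "POST",
        "url": "/chat/completions",  # verify the expected path for your API version
        "body": {
            "model": "my-global-batch-deployment",  # deployment name (placeholder)
            "messages": [{"role": "user", "content": f"Summarize item {i}."}],
        },
    }
    for i in range(3)
]
with open("batch_input.jsonl", "w") as f:
    for task in tasks:
        f.write(json.dumps(task) + "\n")

# Upload the file, then enqueue the job with the 24-hour completion window.
# The job draws on the separate enqueued token quota, so it doesn't compete
# with online (Global Standard) traffic.
batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch_job = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/chat/completions",
    completion_window="24h",
)
print(batch_job.id, batch_job.status)
```

When the job finishes, the results arrive as an output file that can be downloaded through the same files API; treat this as a sketch of the general flow rather than a drop-in script.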
@@ -117,7 +117,7 @@ Data Zone Provisioned deployments are available in the same Azure AI Foundry res
 > [!IMPORTANT]
 > Data stored at rest remains in the designated Azure geography. However, data might be processed for inferencing in any Azure AI Foundry location within the Microsoft-specified data zone. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

-Data Zone Batch deployments provide all the same functionality as [Global Batch deployments](../../openai/batch.md). However, they allow you to use the global infrastructure of Azure to dynamically route traffic to only datacenters within the Microsoft-defined data zone with the best availability for each request.
+Data Zone Batch deployments provide all the same functionality as [Global Batch deployments](../../openai/how-to/batch.md). However, they allow you to use the global infrastructure of Azure to dynamically route traffic to only datacenters within the Microsoft-defined data zone with the best availability for each request.

 ## Standard

@@ -170,7 +170,7 @@ Fine-tuned models support a `Developer` deployment designed to support custom mo

 ## Deploy models

-:::image type="content" source="../media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Foundry portal with a deployment type highlighted.":::
+:::image type="content" source="../../openai/media/deployment-types/deploy-models-new.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Foundry portal with a deployment type highlighted.":::

 To learn about creating resources and deploying models, refer to the [Resource creation guide](../../openai/how-to/create-resource.md).
