Azure AI model inference makes models available using the *model deployment* concept in Azure AI Services resources. *Model deployments* are also Azure resources and, when created, they give access to a given model under a certain configuration. Such configuration includes the infrastructure required to process the requests.
Azure AI model inference provides customers with choices on the hosting structure that fits their business and usage patterns. Those options translate into different deployment types (or SKUs) that are available at model deployment time in the Azure AI Services resource.
:::image type="content" source="../media/add-model-deployments/models-deploy-deployment-type.png" alt-text="Screenshot showing how to customize the deployment type for a given model deployment." lightbox="../media/add-model-deployments/models-deploy-deployment-type.png":::
Deployment type support varies by model and model provider. You can see which deployment type (SKU) each model supports in the [Models section](models.md).
Different model providers offer different deployment SKUs that you can select from. When selecting a deployment type, consider your **data residency needs** and **call volume/capacity** requirements.
## Deployment types for Azure OpenAI models
The service offers two main types of deployments: **standard** and **provisioned**. For a given deployment type, customers can align their workloads with their data processing requirements by choosing an Azure geography (`Standard` or `Provisioned-Managed`), a Microsoft-specified data zone (`DataZone-Standard` or `DataZone Provisioned-Managed`), or global (`Global-Standard` or `Global Provisioned-Managed`) processing option.
To learn more about deployment options for Azure OpenAI models, see the [Azure OpenAI documentation](../../../ai-services/openai/how-to/deployment-types.md).
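As an illustration of how the deployment type surfaces at deployment time, the SKU can be set when creating a model deployment from the Azure CLI. This is a hedged sketch, not the article's own procedure: all bracketed values are placeholders, and the flags are worth confirming against the `az cognitiveservices account deployment create` reference.

```azurecli
# Sketch: create a model deployment with an explicit deployment type (SKU).
# <your-resource-group>, <your-ai-services-resource>, and the model values
# are placeholders to replace with your own.
az cognitiveservices account deployment create \
  --resource-group <your-resource-group> \
  --name <your-ai-services-resource> \
  --deployment-name my-gpt-4o-deployment \
  --model-name gpt-4o \
  --model-version "2024-08-06" \
  --model-format OpenAI \
  --sku-name "GlobalStandard" \
  --sku-capacity 1
```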
## Deployment types for Models-as-a-Service models
Models from third-party model providers with pay-as-you-go billing (collectively called Models-as-a-Service) are available in Azure AI model inference under **standard** deployments with a global processing option (`Global-Standard`).
Models-as-a-Service offers regional deployment options under [Serverless API endpoints](../../../ai-studio/how-to/deploy-models-serverless.md) in Azure AI Foundry. Prompts and outputs are processed within the geography specified during deployment. However, those deployments can't be accessed using the Azure AI model inference endpoint in Azure AI Services.
### Global-Standard
Global deployments use Azure's global infrastructure to dynamically route traffic to the data center with the best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources. Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure location. Learn more about [data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
## Control deployment options
Administrators can control which model deployment types are available to their users by using Azure Policy. Learn more in [How to control AI model deployment with custom policies](../../../ai-studio/how-to/custom-policy-model-deployment.md).
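As a sketch of the kind of custom policy the linked article describes, the following definition denies any model deployment whose SKU isn't in an allowed list. Treat the alias and the SKU names as assumptions to verify for your environment:

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.CognitiveServices/accounts/deployments"
        },
        {
          "not": {
            "field": "Microsoft.CognitiveServices/accounts/deployments/sku.name",
            "in": [ "GlobalStandard", "Standard" ]
          }
        }
      ]
    },
    "then": { "effect": "deny" }
  }
}
```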
## Related content
- [Quotas & limits](../quotas-limits.md)
- [Data privacy and security for Models-as-a-Service models](../../../ai-studio/how-to/concept-data-privacy.md)
* An AI project connected to your Azure AI Services resource. You can follow the steps at [Configure Azure AI model inference service in my project](../../how-to/configure-project-connection.md) in Azure AI Foundry.
* The feature **Deploy models to Azure AI model inference service** turned on.
:::image type="content" source="../../media/quickstart-ai-project/ai-project-inference-endpoint.gif" alt-text="An animation showing how to turn on the Deploy models to Azure AI model inference service feature in Azure AI Foundry portal." lightbox="../../media/quickstart-ai-project/ai-project-inference-endpoint.gif":::
## Add a connection
5. The details page shows information about the specific deployment. If you want to test the model, you can use the option **Open in playground**.
6. The Azure AI Foundry playground is displayed, where you can interact with the given model.
5. Select **Apply**.
> [!IMPORTANT]
> Make sure that the user making changes has the **Storage Blob Data Contributor** role assigned to them.
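If you prefer to assign that role from the command line, an Azure CLI sketch follows; every bracketed value is a placeholder, and the scope format is worth confirming against the `az role assignment create` reference:

```azurecli
# Sketch: grant Storage Blob Data Contributor on a storage account.
# All <...> values are placeholders for your own identifiers.
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee <user-object-id-or-sign-in-name> \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```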
---
title: Use Azure AI Translator to translate behind firewalls
titleSuffix: Azure AI services
description: Azure AI Translator can translate behind firewalls using either domain-name or IP filtering.
#services: cognitive-services
author: laujan
manager: nitinme
ms.service: azure-ai-translator
ms.topic: conceptual
ms.date: 01/27/2025
ms.author: lajanuar
---
# Use Azure AI Translator behind firewalls
Azure AI Translator can translate behind firewalls using either [Domain-name](/azure/firewall/dns-settings#dns-proxy-configuration) or [IP filtering](#configure-firewall). Domain-name filtering is the preferred method.
If you still require IP filtering, you can get the [IP addresses details using service tag](/azure/virtual-network/service-tags-overview#discover-service-tags-by-using-downloadable-json-files). Translator is under the **CognitiveServicesManagement** service tag.
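Because the text names the **CognitiveServicesManagement** service tag, you can also list its current address prefixes from the Azure CLI instead of downloading the JSON file. A sketch (the region value is a placeholder; verify the query syntax against the `az network list-service-tags` reference):

```azurecli
# Sketch: print the address prefixes for the CognitiveServicesManagement tag.
# <your-region> is a placeholder, for example eastus.
az network list-service-tags --location <your-region> \
  --query "values[?name=='CognitiveServicesManagement'].properties.addressPrefixes | [0]" \
  --output json
```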
---
title: What's new in Azure AI Translator?
titleSuffix: Azure AI services
description: Learn about the latest changes to the Azure AI Translator Service API.
author: laujan
manager: nitinme
ms.service: azure-ai-translator
ms.topic: overview
ms.date: 01/27/2025
ms.author: lajanuar
---
<!-- markdownlint-disable MD024 -->
Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
Azure AI Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
author: mattgotteiner
ms.author: magottei
ms.service: azure-ai-search
ms.topic: how-to
ms.date: 01/27/2025
---
# Create a shared private link for a SQL managed instance from Azure AI Search
## 1 - Retrieve connection information
In this section, get the DNS zone from the host name and a connection string.
1. In Azure portal, find the SQL managed instance object.
1. On the **Overview** tab, locate the Host property. Copy the *DNS zone* portion of the FQDN for the next step. The DNS zone is part of the domain name of the SQL Managed Instance. For example, if the FQDN of the SQL Managed Instance is `my-sql-managed-instance.a1b22c333d44.database.windows.net`, the DNS zone is `a1b22c333d44`.
1. On the **Connection strings** tab, copy the ADO.NET connection string for a later step. It's needed for the data source connection when testing the private connection.
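If you're scripting the first step, the DNS zone is just the second dot-delimited label of the host name. A minimal shell sketch, using the made-up example FQDN from above:

```shell
# Extract the DNS zone from a SQL managed instance FQDN.
# The DNS zone is the second dot-delimited label of the host name.
fqdn="my-sql-managed-instance.a1b22c333d44.database.windows.net"
dns_zone=$(printf '%s' "$fqdn" | cut -d '.' -f 2)
echo "$dns_zone"   # prints a1b22c333d44
```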
Provide a meaningful name for the shared private link. The shared private link appears alongside other private endpoints. A name like "shared-private-link-for-search" can remind you how it's used.
Paste in the DNS zone name in "dnsZonePrefix" that you retrieved in an earlier step.
Edit the "privateLinkResourceId" value, substituting valid values for the placeholders. Provide a valid subscription ID, resource group name, and managed instance name.
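Putting those edits together, *create-pe.json* might look like the following sketch. The `groupId` value and the overall shape follow the Search Management REST API for shared private links, but verify them against the current API reference; all bracketed values are placeholders:

```json
{
  "name": "shared-private-link-for-search",
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/managedInstances/<managed-instance-name>",
    "dnsZonePrefix": "<dns-zone>",
    "groupId": "managedInstance",
    "requestMessage": "Please approve this shared private link."
  }
}
```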
1. Save the file locally as *create-pe.json* (or use another name, remembering to update the Azure CLI syntax in the next step).
1. Call the `az rest` command to use the [Management REST API](/rest/api/searchmanagement) of Azure AI Search.
Because shared private link support for SQL managed instances is still in preview, you need a preview version of the management REST API. Use `2021-04-01-preview` or a later preview API version for this step. We recommend using the latest preview API version.
    ```azurecli
    az rest --method put --uri https://management.azure.com/subscriptions/{{search-service-subscription-ID}}/resourceGroups/{{search service-resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version=2024-06-01-preview --body @create-pe.json
    ```
You can now configure an indexer and its data source to use an outbound private connection to your managed instance.
This article assumes a [REST client](search-get-started-rest.md) and uses the REST APIs.
1. [Create the data source definition](search-how-to-index-sql-database.md) as you would normally for Azure SQL. By default, a managed instance listens on port 3342, but on a virtual network it listens on 1433.
Provide the connection string that you copied earlier with an Initial Catalog specified.
    ```json
    {
        "description" : "A database for testing Azure AI Search indexes.",
        "type" : "azuresql",
        "credentials" : {
            "connectionString" : "Server=tcp:contoso.a1b22c333d44.database.windows.net,1433;Persist Security Info=false; User ID=<your user name>; Password=<your password>;MultipleActiveResultsSets=False; Encrypt=True;Connection Timeout=30;Initial Catalog=<your database name>"
        },
        "container" : {
            "name" : "Name of table or view to index"
        }
    }
    ```
If you ran the indexer in the previous step and successfully indexed content from your managed instance, then the test was successful. However, if the indexer fails or there's no content in the index, you can modify your objects and repeat testing by choosing any client that can invoke an outbound request from an indexer.
An easy choice is [running an indexer](search-howto-run-reset-indexers.md) in Azure portal, but you can also try a [REST client](search-get-started-rest.md) and REST APIs for more precision. Assuming that your search service isn't also configured for a private connection, the REST client connection to Azure AI Search can be over the public internet.
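As a sketch of the REST approach, running an indexer on demand is a single POST to the Search REST API. The service, indexer, and key values below are placeholders; confirm the current stable `api-version` in the Search REST API reference:

```azurecli
# Sketch: run an indexer on demand over the public internet.
# All <...> values are placeholders for your own service and key.
curl -X POST \
  "https://<your-search-service>.search.windows.net/indexers/<your-indexer>/run?api-version=2024-07-01" \
  -H "api-key: <your-admin-api-key>" \
  -H "Content-Length: 0"
```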