articles/ai-foundry/model-inference/faq.yml (+27 -9)
@@ -24,18 +24,19 @@ sections:
       Both Azure OpenAI Service and Azure AI model inference are part of the Azure AI services family and build on top of the same security and enterprise promise of Azure.
 
       While Azure AI model inference focuses on inference, Azure OpenAI Service can be used with more advanced APIs like batch, fine-tuning, assistants, and files.
-  - question: |
-      What's the difference between OpenAI and Azure OpenAI?
-    answer: |
-      Azure AI Models and Azure OpenAI Service give customers access to advanced language models from OpenAI with the security and enterprise promise of Azure. Azure OpenAI codevelops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
-
-      Customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. It offers private networking, regional availability, and responsible AI content filtering.
-
-      Learn more about the [Azure OpenAI service](../../ai-services/openai/overview.md).
   - question: |
       What's the difference between Azure AI services and Azure AI Foundry?
     answer: |
       Azure AI services are a suite of AI services that provide prebuilt APIs for common AI scenarios. Azure AI services are part of the Azure AI Foundry platform and can be used in the Azure AI Foundry portal to enhance your models with prebuilt AI capabilities.
+  - question: |
+      What's the difference between Serverless API Endpoints and Azure AI model inference?
+    answer: |
+      Both features allow you to deploy Models-as-a-Service models in Azure AI Foundry. However, there are some differences between them:
+      - *Resource involved*: Serverless API Endpoints are deployed within an AI project resource, while Azure AI model inference is part of the Azure AI services resource.
+      - *Deployment options*: Serverless API Endpoints allow regional deployments, while Azure AI model inference allows deployments under a global capacity.
+      - *Models*: Azure AI model inference also supports deploying Azure OpenAI models.
+      - *Endpoint*: Serverless API Endpoints create one endpoint and credential per deployment, while Azure AI model inference creates one endpoint and credential per resource.
+      - *Model router*: Azure AI model inference lets you switch between models without changing your code by using a model router.
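The *Endpoint* and *Model router* points above can be sketched in code. This is a hypothetical illustration, not taken from the PR: the endpoint URL, API route, and model names are placeholders. The idea is that one resource-level endpoint and credential serve every deployed model, so only the `model` property of the request body changes.

```python
# Sketch: with Azure AI model inference, a single Azure AI services
# endpoint and credential serve all deployed models. The URL and model
# names below are hypothetical placeholders.

ENDPOINT = "https://<resource-name>.services.ai.azure.com/models/chat/completions"


def build_request(model_name: str, user_message: str) -> dict:
    """Build a chat-completions request body for the shared endpoint.

    Switching models changes only the "model" property, never the
    endpoint or the credential.
    """
    return {
        "model": model_name,
        "messages": [{"role": "user", "content": user_message}],
    }


# Same endpoint, same credential, two different models:
req_a = build_request("Mistral-Large", "Summarize this contract.")
req_b = build_request("Meta-Llama-3.1-405B-Instruct", "Summarize this contract.")
assert req_a["messages"] == req_b["messages"]  # only "model" differs
```

With Serverless API Endpoints, by contrast, each deployment would have its own URL and key, so switching models means changing the endpoint configuration rather than a request field.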
 - name: Models
   questions:
   - question: |
@@ -44,6 +45,10 @@ sections:
       Azure AI model inference in AI services supports all the models in the Azure AI catalog that have pay-as-you-go billing. For more information, see [the Models article](concepts/models.md).
 
       The Azure AI model catalog contains a wider list of models; however, those models require compute quota from your subscription. They also need a project or AI hub to host the deployment. For more information, see [deployment options in Azure AI Foundry](../../ai-studio/concepts/deployments-overview.md).
+  - question: |
+      My company hasn't approved specific models for use. How can I prevent users from deploying them?
+    answer: |
+      You can restrict the models available for deployment in Azure AI services by using Azure Policy. Models are still listed in the catalog, but any attempt to deploy them is blocked. Read [Control AI model deployment with custom policies](how-to/configure-deployment-policies.md).
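A custom policy of the kind this answer describes could look roughly like the following. This is a hedged sketch only: the `Microsoft.CognitiveServices/accounts/deployments` resource type and the `model.name` alias are assumptions used to illustrate the shape of a deny rule, and the linked article should be consulted for the exact aliases and recommended definition.

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.CognitiveServices/accounts/deployments"
        },
        {
          "not": {
            "field": "Microsoft.CognitiveServices/accounts/deployments/model.name",
            "in": "[parameters('allowedModels')]"
          }
        }
      ]
    },
    "then": { "effect": "deny" }
  },
  "parameters": {
    "allowedModels": {
      "type": "Array",
      "metadata": { "displayName": "Approved model names" }
    }
  }
}
```

Assigned at the subscription or resource-group scope, a rule shaped like this would reject any model deployment whose name isn't in the approved list, while leaving the catalog listing itself visible.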
 - name: SDKs and programming languages
   questions:
   - question: |
@@ -94,10 +99,23 @@ sections:
       You can set up a spending limit in the [Azure portal](https://portal.azure.com) under **Azure Cost Management + Billing**. This limit prevents you from spending more than the amount you set. Once the spending limit is reached, the subscription is disabled and you can't use the endpoint until the next billing cycle.
 - name: Data and Privacy
   questions:
+  - question: |
+      How are third-party models made available?
+    answer: |
+      Third-party models available for deployment in Azure AI services with pay-as-you-go billing (for example, Meta AI models or Mistral models) are offered by the model provider but hosted in Microsoft-managed Azure infrastructure and accessed via API through the Azure AI model inference endpoint. Model providers define the license terms and set the price for use of their models, while the Azure AI services service manages the hosting infrastructure, makes the inference APIs available, and acts as the data processor for prompts submitted to, and content output by, deployed models. Read about [Data, privacy, and security for third-party models](../../ai-studio/how-to/concept-data-privacy.md).
+  - question: |
+      How is data processed by the Global-Standard deployment type?
+    answer: |
+      For model deployments under Azure AI services resources, prompts and outputs are processed using Azure's global infrastructure, which dynamically routes traffic to the data center with the best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources. Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure location. Learn more about [data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
   - question: |
       Do you use my company data to train any of the models?
     answer: |
-      Azure AI model inference doesn't use customer data to retrain models, and customer data is never shared with model providers.
+      Azure AI model inference doesn't use customer data to retrain models, and customer data is never shared with model providers.
+  - question: |
+      Is data shared with model providers?
+    answer: |
+      Microsoft acts as the data processor for prompts and outputs sent to, and generated by, a model deployment under Azure AI services resources. Microsoft doesn't share these prompts and outputs with the model provider, and doesn't use them to train or improve Microsoft models, the model provider's models, or any third party's models.
+
+      As explained during the deployment process for Models-as-a-Service models, Microsoft might share customer contact information and transaction details (including the usage volume associated with the offering) with the model publisher so that the publisher can contact customers regarding the model. Learn more about the information available to model publishers in [Access insights for the Microsoft commercial marketplace in Partner Center](/partner-center/analytics).
articles/ai-services/document-intelligence/concept/incremental-classifier.md (+5 -5)
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-document-intelligence
 ms.topic: conceptual
-ms.date: 11/19/2024
+ms.date: 02/27/2025
 ms.author: vikurpad
 ms.custom:
 monikerRange: '>=doc-intel-4.0.0'
@@ -50,13 +50,13 @@ Incremental training is useful when you want to improve the quality of a custom
 
 ### Create an incremental classifier build request
 
-The incremental classifier build request is similar to the [`classify document` build request](/rest/api/aiservices/document-classifiers?view=rest-aiservices-v4.0%20(2024-02-29-preview)&preserve-view=true) but includes the new `baseClassifierId` property. The `baseClassifierId` is set to the existing classifier that you want to extend. You also need to provide the `docTypes` for the different document types in the sample set. By providing a `docType` that exists in the baseClassifier, the samples provided in the request are added to the samples provided when the base classifier was trained. New `docType` values added in the incremental training are only added to the new classifier. The process to specify the samples remains unchanged. For more information, *see* [training a classifier model](../train/custom-classifier.md#training-a-model).
+The incremental classifier build request is similar to the [`classify document` build request](/rest/api/aiservices/document-classifiers?view=rest-aiservices-v4.0%20(2024-11-30)&preserve-view=true) but includes the new `baseClassifierId` property. The `baseClassifierId` is set to the existing classifier that you want to extend. You also need to provide the `docTypes` for the different document types in the sample set. By providing a `docType` that exists in the baseClassifier, the samples provided in the request are added to the samples provided when the base classifier was trained. New `docType` values added in the incremental training are only added to the new classifier. The process to specify the samples remains unchanged. For more information, *see* [training a classifier model](../train/custom-classifier.md#training-a-model).
 
 ### Sample POST request
 
 ***Sample `POST` request to build an incremental document classifier***
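The diff ends before the sample body itself, but a request of the shape the preceding paragraph describes might look roughly like the following. Only `classifierId`, `baseClassifierId`, and `docTypes` are named in the text; the remaining field names and the placeholder URL are illustrative assumptions, and the linked REST reference holds the authoritative schema.

```json
{
  "classifierId": "myAdaptedClassifier",
  "baseClassifierId": "myBaseClassifier",
  "description": "Base classifier extended with additional samples",
  "docTypes": {
    "invoice": {
      "azureBlobSource": {
        "containerUrl": "https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>"
      }
    }
  }
}
```

Because `invoice` here reuses a `docType` from the base classifier, its samples would be merged with the base classifier's training samples, whereas a `docType` name not present in the base classifier would exist only in the new classifier.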