articles/ai-foundry/concepts/content-filtering.md (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ author: PatrickFarley
The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
- With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **standard deployments** have content filtering enabled by default. To learn more about the default content filter enabled for standard deployments, see [Content safety for Azure Direct Models](model-catalog-content-safety.md).
+ With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **standard deployments** have content filtering enabled by default. To learn more about the default content filter enabled for standard deployments, see [Content safety for Models Sold Directly by Azure](model-catalog-content-safety.md).
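
When a prompt trips the filter, Azure OpenAI rejects the request with an error; when a completion trips it, generation stops and the choice is flagged. A minimal sketch of handling both cases with the OpenAI Python SDK, assuming placeholder environment variables and a hypothetical deployment name:

```python
import os

from openai import AzureOpenAI, BadRequestError

# Placeholder endpoint, key, and API version for an existing Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

try:
    response = client.chat.completions.create(
        model="my-deployment",  # hypothetical deployment name
        messages=[{"role": "user", "content": "Tell me about content filtering."}],
    )
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The completion was cut off because the output filter triggered.
        print("Response truncated by the content filter.")
    else:
        print(choice.message.content)
except BadRequestError as err:
    # Prompts that trigger the input filter are rejected before generation.
    print(f"Request filtered: {err}")
```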

articles/ai-foundry/concepts/fine-tuning-overview.md (1 addition, 1 deletion)

@@ -92,7 +92,7 @@ For more information on fine-tuning using a managed compute (preview), see [Fine
For details about Azure OpenAI in Azure AI Foundry Models that are available for fine-tuning, see the [Azure OpenAI in Foundry Models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models) or the [Azure OpenAI models table](#fine-tuning-azure-openai-models) later in this guide.
- For the Azure OpenAI Service models that you can fine tune, supported regions for fine-tuning include North Central US, Sweden Central, and more.
+ For the Azure OpenAI models that you can fine tune, supported regions for fine-tuning include North Central US, Sweden Central, and more.
+ * [Models sold directly by Azure](#models-sold-directly-by-azure)
+ * [Models from Partners and Community](#models-from-partners-and-community)
Understanding the distinction between these categories helps you choose the right models based on your specific requirements and strategic goals.
- ## Azure Direct Models
+ ## Models Sold Directly by Azure
- Azure Direct Models are models that are hosted and sold by Microsoft under Microsoft Product Terms. These models have undergone rigorous evaluation and are deeply integrated into Azure's AI ecosystem. They offer enhanced integration, optimized performance, and direct Microsoft support, including enterprise-grade Service Level Agreements (SLAs).
+ These are models that are hosted and sold by Microsoft under Microsoft Product Terms. These models have undergone rigorous evaluation and are deeply integrated into Azure’s AI ecosystem. The models come from a variety of top providers and they offer enhanced integration, optimized performance, and direct Microsoft support, including enterprise-grade Service Level Agreements (SLAs).
- Characteristics of Azure Direct Models:
+ Characteristics of models sold directly by Azure:
- Official first-party support from Microsoft
- High level of integration with Azure services and infrastructure
- Extensive performance benchmarking and validation
- Adherence to Microsoft's Responsible AI standards
- Enterprise-grade scalability, reliability, and security
- Azure Direct Models also have the benefit of flexible Provisioned Throughput, meaning you can use your quota and reservations across any of these models.
+ These models also have the benefit of fungible Provisioned Throughput, meaning you can flexibly use your quota and reservations across any of these models.
- ## Azure Ecosystem Models
+ ## Models from Partners and Community
- Models constitute the vast majority of the Azure AI Foundry Models. These models are provided by trusted third-party organizations, partners, research labs, and community contributors. These models offer specialized and diverse AI capabilities, covering a wide array of scenarios, industries, and innovations.
+ These models constitute the vast majority of the Azure AI Foundry Models. Provided by trusted third-party organizations, partners, research labs, and community contributors, these models offer specialized and diverse AI capabilities, covering a wide array of scenarios, industries, and innovations.
- Characteristics of Azure Ecosystem Models:
+ Characteristics of Models from Partners and Community:
* Developed and supported by external partners and community contributors
* Diverse range of specialized models catering to niche or broad use cases
* Typically validated by providers themselves, with integration guidelines provided by Azure
@@ -62,28 +62,17 @@ Characteristics of Azure Ecosystem Models:
Models are deployable as Managed Compute or Standard (pay-go) deployment options. The model provider selects how the models are deployable.
- ## Choosing between Azure Direct and Azure Ecosystem Models
-
+ ## Choosing between direct models and partner & community models
When selecting models from Azure AI Foundry Models, consider the following:
- * **Use Case and Requirements**: Azure Direct Models are ideal for scenarios requiring deep Azure integration, guaranteed support, and enterprise SLAs. Azure Ecosystem Models excel in specialized use cases and innovation-led scenarios.
- * **Support Expectations**: Azure Direct Models come with robust Microsoft-provided support and maintenance. Azure Ecosystem Models are supported by their providers, with varying levels of SLA and support structures.
- * **Innovation and Specialization**: Azure Ecosystem Models offer rapid access to specialized innovations and niche capabilities often developed by leading research labs and emerging AI providers.
-
- ## Accessing Azure Ecosystem Models
+ * **Use Case and Requirements**: Models sold directly by Azure are ideal for scenarios requiring deep Azure integration, guaranteed support, and enterprise SLAs. Models from Partners and Community excel in specialized use cases and innovation-led scenarios.
+ * **Support Expectations**: Models sold directly by Azure come with robust Microsoft-provided support and maintenance. Models from Partners and Community are supported by their providers, with varying levels of SLA and support structures.
+ * **Innovation and Specialization**: Models from Partners and Community offer rapid access to specialized innovations and niche capabilities often developed by leading research labs and emerging AI providers.
- Azure Ecosystem Models are accessible through Azure AI Foundry, providing:
- * Comprehensive details about the model's capabilities and integration requirements.
- * Community ratings, usage data, and qualitative feedback to guide your decisions.
- * Clear integration guidelines to help incorporate these models seamlessly into your Azure workflows.
-
- For more detailed guidance and exploration of available models, visit the [Azure AI Foundry documentation](/azure/ai-foundry/).
-
- Azure AI Foundry remains committed to providing a robust ecosystem, enabling customers to easily access the best AI innovations from Microsoft and our trusted partners.
## Model collections
- The model catalog organizes models into different collections:
+ The model catalog organizes models into different collections, including:
* **Azure OpenAI models exclusively available on Azure**: Flagship Azure OpenAI models available through an integration with Azure OpenAI in Foundry Models. Microsoft supports these models and their use according to the product terms and [SLA for Azure OpenAI](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
@@ -108,7 +97,7 @@ On the **model catalog filters**, you'll find:
* **Batch**: best suited for cost-optimized batch jobs, and not latency. No playground support is provided for the batch deployment.
* **Managed compute**: this option allows you to deploy a model on an Azure virtual machine. You will be billed for hosting and inferencing.
* **Inference tasks**: you can filter models based on the inference task type.
- * **Finetune tasks**: you can filter models based on the finetune task type.
+ * **Fine-tune tasks**: you can filter models based on the fine-tune task type.
* **Licenses**: you can filter models based on the license type.
On the **model card**, you'll find:
@@ -248,7 +237,7 @@ To set the public network access flag for the Azure AI Foundry hub:
* If you have an Azure AI Foundry hub with a private endpoint created before July 11, 2024, standard deployments added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new private endpoint for the hub and create a new standard deployment in the project so that the new deployments can follow the hub's networking configuration.
- * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing sstandard deployments won't follow the hub's networking configuration. For standard deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
+ * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing standard deployments won't follow the hub's networking configuration. For standard deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
* Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for standard deployments in private hubs, because private hubs have the public network access flag disabled.
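
The public network access flag itself can be inspected and toggled programmatically as well as in the portal. A minimal sketch with the azure-ai-ml package, assuming the hub is addressed as a workspace-kind resource and that the subscription, resource group, and hub names are placeholders:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Hypothetical identifiers; substitute your own.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="my-rg",
    workspace_name="my-hub",  # AI Foundry hubs are workspace-kind resources
)

hub = ml_client.workspaces.get("my-hub")
hub.public_network_access = "Disabled"  # or "Enabled"
updated = ml_client.workspaces.begin_update(hub).result()
print(updated.public_network_access)
```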
@@ -34,7 +34,7 @@ Content filtering occurs synchronously as the service processes prompts to gener
- When you first deploy a language model
- Later, by selecting the content filtering toggle on the deployment details page
- Suppose you decide to use an API other than the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
+ Suppose you decide to use an API other than the [Foundry Models API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
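
As a sketch of that separate implementation, the Azure AI Content Safety Python SDK can screen text and report a severity per harm category; the endpoint and key variables below are placeholders for a Content Safety resource:

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Screen model output (or user input) before passing it along.
result = client.analyze_text(AnalyzeTextOptions(text="<text to screen>"))
for analysis in result.categories_analysis:
    # Severity 0 is benign; higher values indicate increasing risk.
    print(analysis.category, analysis.severity)
```

Which severities to block is an application decision; the linked quickstart covers the full request and response shapes.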

articles/ai-foundry/concepts/model-lifecycle-retirement.md (1 addition, 1 deletion)

@@ -19,7 +19,7 @@ reviewer: fkriti
Azure AI Foundry Models in the model catalog are continually refreshed with newer and more capable models. As part of this process, model providers might deprecate and retire their older models, and you might need to update your applications to use a newer model. This document communicates information about the model lifecycle and deprecation timelines and explains how you're informed of model lifecycle stages.
> [!IMPORTANT]
- > This article describes deprecation and retirement only for Azure Direct models and Azure Ecosystem models models in Foundry Models. For information about deprecation and retirement for Azure OpenAI in Foundry Models, see the [Azure OpenAI models lifecycle](../../ai-services/openai/concepts/model-retirements.md?context=/azure/ai-foundry/context/context) documentation.
+ > This article describes deprecation and retirement only for Models Sold Directly by Azure and Models from Partners and Community in Foundry Models. For information about deprecation and retirement for Azure OpenAI in Foundry Models, see the [Azure OpenAI models lifecycle](../../ai-services/openai/concepts/model-retirements.md?context=/azure/ai-foundry/context/context) documentation.

articles/ai-foundry/how-to/concept-data-privacy.md (1 addition, 1 deletion)

@@ -35,7 +35,7 @@ Deploying models to managed compute deploys model weights to dedicated virtual m
You manage the infrastructure for these managed compute resources. Azure data, privacy, and security commitments apply. To learn more about Azure compliance offerings applicable to Azure AI Foundry, see the [Azure Compliance Offerings page](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
- Although containers for **Azure Direct Models** are scanned for vulnerabilities that could exfiltrate data, not all models available through the model catalog are scanned. To reduce the risk of data exfiltration, you can [help protect your deployment by using virtual networks](configure-managed-network.md). You can also use [Azure Policy](../../ai-services/policy-reference.md) to regulate the models that your users can deploy.
+ Although containers for **Models Sold Directly by Azure** are scanned for vulnerabilities that could exfiltrate data, not all models available through the model catalog are scanned. To reduce the risk of data exfiltration, you can [help protect your deployment by using virtual networks](configure-managed-network.md). You can also use [Azure Policy](../../ai-services/policy-reference.md) to regulate the models that your users can deploy.
:::image type="content" source="../media/explore/subscription-service-cycle.png" alt-text="Diagram that shows the platform service life cycle." lightbox="../media/explore/subscription-service-cycle.png":::

articles/ai-foundry/how-to/configure-managed-network.md (2 additions, 2 deletions)

@@ -29,7 +29,7 @@ You need to configure following network isolation configurations.
- Choose network isolation mode. You have two options: allow internet outbound mode or allow only approved outbound mode.
- If you use Visual Studio Code integration with allow only approved outbound mode, create FQDN outbound rules described in the [use Visual Studio Code](#scenario-use-visual-studio-code) section.
- If you use HuggingFace models in Models with allow only approved outbound mode, create FQDN outbound rules described in the [use HuggingFace models](#scenario-use-huggingface-models) section.
- - If you use one of the open-source models with allow only approved outbound mode, create FQDN outbound rules described in the [Azure Direct Models](#scenario-azure-direct-models) section.
+ - If you use one of the open-source models with allow only approved outbound mode, create FQDN outbound rules described in the [Models Sold Directly by Azure](#scenario-models-sold-directly-by-azure) section.
## Network isolation architecture and isolation modes
@@ -812,7 +812,7 @@ If you plan to use __HuggingFace models__ with the hub, add outbound _FQDN_ rule
* cnd.auth0.com
* cdn-lfs.huggingface.co
- ### Scenario: Azure Direct Models
+ ### Scenario: Models Sold Directly by Azure
These models involve dynamic installation of dependencies at runtime, and require outbound _FQDN_ rules to allow traffic to the following hosts:
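
Each required host becomes an FQDN outbound rule on the hub's managed network. A minimal sketch with the azure-ai-ml package, assuming placeholder subscription, hub, and rule names and a hypothetical destination host:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import FqdnDestination
from azure.identity import DefaultAzureCredential

# Hypothetical identifiers; substitute your own.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="my-rg",
    workspace_name="my-hub",
)

hub = ml_client.workspaces.get("my-hub")
hub.managed_network.outbound_rules = hub.managed_network.outbound_rules or []
# One FQDN rule per host the deployment needs to reach at runtime.
hub.managed_network.outbound_rules.append(
    FqdnDestination(name="allow-example-host", destination="example.com")
)
ml_client.workspaces.begin_update(hub).result()
```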
- [Certain models in the model catalog](deploy-models-serverless-availability.md) can be deployed as a standard deployments. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+ [Certain models in the model catalog](deploy-models-serverless-availability.md) can be deployed as a standard deployment. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
This article uses a Meta Llama model deployment for illustration. However, you can use the same steps to deploy any of the [models in the model catalog that are available for standard deployment](deploy-models-serverless-availability.md).
@@ -522,7 +522,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
You can select the deployment, and note the endpoint's _Target URI_ and _Key_. Use them to call the deployment and generate predictions.
> [!NOTE]
- > When using the [Azure portal](https://portal.azure.com), standard deployment aren't displayed by default on the resource group. Use the **Show hidden types** option to display them on the resource group.
+ > When using the [Azure portal](https://portal.azure.com), standard deployments aren't displayed by default on the resource group. Use the **Show hidden types** option to display them on the resource group.
# [Azure CLI](#tab/cli)
@@ -555,7 +555,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
## Use the standard deployment
- Models deployed in Azure Machine Learning and Azure AI Foundry in standard deployments support the [Foundry Models API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
+ Models deployed in Azure Machine Learning and Azure AI Foundry in standard deployments support the [Azure AI Foundry Models API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
Read more about the [capabilities of this API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../../ai-foundry/model-inference/reference/reference-model-inference-api.md#getting-started).
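
As a minimal sketch of that uniform pattern with the azure-ai-inference Python package, using the endpoint's Target URI and Key noted earlier (the environment variable names below are placeholders):

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Target URI and Key come from the deployment's details page.
client = ChatCompletionsClient(
    endpoint=os.environ["DEPLOYMENT_TARGET_URI"],
    credential=AzureKeyCredential(os.environ["DEPLOYMENT_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what a standard deployment is."),
    ],
)
print(response.choices[0].message.content)
```

Because the API surface is shared, pointing the client at a different standard deployment generally means changing only the URI and key, not the calling code.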