Commit d7f6c2c

Foundry Models rebrand
1 parent 4155005 commit d7f6c2c

4 files changed: 11 additions (+11), 11 deletions (-11)

articles/ai-foundry/concepts/model-catalog-content-safety.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ Content filtering occurs synchronously as the service processes prompts to gener
 - When you first deploy a language model
 - Later, by selecting the content filtering toggle on the deployment details page

-Suppose you decide to use an API other than the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
+Suppose you decide to use an API other than the [Foundry Models API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.

 [!INCLUDE [content-safety-harm-categories](../includes/content-safety-harm-categories.md)]
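The paragraph above says that when you bypass the recommended inference API, you must screen content yourself with Azure AI Content Safety. As a rough sketch of that pattern (the `text:analyze` route, `api-version` value, and `categoriesAnalysis` response field below are assumptions about the Content Safety REST API, not details taken from this commit), you could check a prompt before forwarding it to a standard deployment:

```python
import json
import os
import urllib.request


def is_flagged(categories_analysis, threshold=2):
    """Return True if any harm category meets or exceeds the severity threshold."""
    return any(c.get("severity", 0) >= threshold for c in categories_analysis)


def analyze_text(endpoint, key, text):
    """Call the Content Safety text-analysis operation (path/api-version are assumptions)."""
    url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"
    req = urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Network call only runs when a real resource is configured via env vars.
if __name__ == "__main__" and os.environ.get("CONTENT_SAFETY_ENDPOINT"):
    result = analyze_text(
        os.environ["CONTENT_SAFETY_ENDPOINT"],
        os.environ["CONTENT_SAFETY_KEY"],
        "Example user prompt",
    )
    print(is_flagged(result.get("categoriesAnalysis", [])))
```

The severity threshold is a policy choice; a stricter application would reject at lower severities or check per-category thresholds.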

articles/ai-foundry/how-to/deploy-models-serverless.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -555,7 +555,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
555555
556556
## Use the standard deployment
557557
558-
Models deployed in Azure Machine Learning and Azure AI Foundry in standard deployments support the [Foundry Models API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
558+
Models deployed in Azure Machine Learning and Azure AI Foundry in standard deployments support the [Azure AI Foundry Models API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
559559
560560
Read more about the [capabilities of this API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../../ai-foundry/model-inference/reference/reference-model-inference-api.md#getting-started).
561561
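The "common set of capabilities" described in this file means clients send one request shape to any deployed model. As a minimal illustration (the deployment name is hypothetical and only the general chat-completions payload shape is implied by the doc, so treat this as a sketch rather than the API's exact wire format), a helper that builds the uniform request body:

```python
import json


def build_chat_request(model, messages):
    """Build a uniform chat-completions payload; only the `model` value changes per deployment."""
    return {
        "model": model,
        "messages": [{"role": role, "content": content} for role, content in messages],
    }


payload = build_chat_request(
    "Mistral-Large-2411",  # hypothetical deployment name
    [("system", "You are a helpful assistant."), ("user", "Say hello.")],
)
print(json.dumps(payload, indent=2))
```

Because the shape is model-agnostic, swapping models is a one-string change rather than a client rewrite, which is the uniformity the paragraph above describes.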

articles/ai-foundry/model-inference/how-to/monitor-models.md

Lines changed: 8 additions & 8 deletions
@@ -1,37 +1,37 @@
 ---
-title: Monitor model deployments in Azure AI model inference
-description: Learn how to use Azure Monitor tools like Log Analytics to capture and analyze metrics and data logs for your Azure AI model inference.
+title: Monitor model deployments in Azure AI Foundry Models
+description: Learn how to use Azure Monitor tools like Log Analytics to capture and analyze metrics and data logs for Foundry Models.
 author: santiagxf
 ms.author: fasantia
 ms.service: azure-ai-model-inference
 ms.topic: how-to
 ms.date: 4/30/2025
 ---

-# Monitor model deployments in Azure AI model inference
+# Monitor model deployments in Azure AI Foundry Models

 [!INCLUDE [Feature preview](../includes/feature-preview.md)]

-When you have critical applications and business processes that rely on Azure resources, you need to monitor and get alerts for your system. The Azure Monitor service collects and aggregates metrics and logs from every component of your system, including Azure AI model inference model deployments. You can use this information to view availability, performance, and resilience, and get notifications of issues.
+When you have critical applications and business processes that rely on Azure resources, you need to monitor and get alerts for your system. The Azure Monitor service collects and aggregates metrics and logs from every component of your system, including Foundry Models deployments. You can use this information to view availability, performance, and resilience, and get notifications of issues.

-This document explains how you can use metrics and logs to monitor model deployments in Azure AI model inference.
+This document explains how you can use metrics and logs to monitor model deployments in Foundry Models.

 ## Prerequisites

-To use monitoring capabilities for model deployments in Azure AI model inference, you need the following:
+To use monitoring capabilities for model deployments in Foundry Models, you need the following:

 * An Azure AI services resource. For more information, see [Create an Azure AI Services resource](quickstart-create-resources.md).

 > [!TIP]
-> If you are using Serverless API Endpoints and you want to take advantage of monitoring capabilities explained in this document, [migrate your Serverless API Endpoints to Azure AI model inference](quickstart-ai-project.md).
+> If you are using Serverless API Endpoints and you want to take advantage of monitoring capabilities explained in this document, [migrate your Serverless API Endpoints to Foundry Models](quickstart-ai-project.md).

 * At least one model deployment.

 * Access to diagnostic information for the resource.

 ## Metrics

-Azure Monitor collects metrics from Azure AI model inference automatically. **No configuration is required**. These metrics are:
+Azure Monitor collects metrics from Foundry Models automatically. **No configuration is required**. These metrics are:

 * Stored in the Azure Monitor time-series metrics database.
 * Lightweight and capable of supporting near real-time alerting.
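Once the automatically collected time-series metrics are retrieved (for example, via Azure Monitor's query tools), a typical consumer-side task is scanning samples per deployment for values that should trigger an alert. A small illustrative helper (the `(deployment, value)` tuple shape and the sample data are simplifications invented here, not the Azure Monitor return format):

```python
def deployments_over_threshold(datapoints, threshold):
    """Given (deployment, value) samples, return deployments whose average exceeds threshold."""
    totals = {}
    for name, value in datapoints:
        totals.setdefault(name, []).append(value)
    return sorted(
        name for name, values in totals.items()
        if sum(values) / len(values) > threshold
    )


# Hypothetical latency samples in milliseconds for two deployments.
samples = [
    ("gpt-4o", 120.0), ("gpt-4o", 180.0),             # average 150 ms
    ("deepseek-r1", 900.0), ("deepseek-r1", 1100.0),  # average 1000 ms
]
print(deployments_over_threshold(samples, 500.0))  # → ['deepseek-r1']
```

In practice you would feed this kind of check from the metrics database and wire the result into an Azure Monitor alert rule rather than computing it by hand.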

articles/ai-foundry/model-inference/includes/use-chat-reasoning/csharp.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ ChatCompletionsClient client = new ChatCompletionsClient(
 ```

 > [!TIP]
-> Verify that you have deployed the model to Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as standard deployments. However, those endpoints don't take the parameter `model` as explained in this tutorial. You can verify that by going to [Azure AI Foundry portal]() > Models + endpoints, and verify that the model is listed under the section **Azure AI Services**.
+> Verify that you have deployed the model to Azure AI Foundry resource with the Foundry Models API. `Deepseek-R1` is also available as standard deployments. However, those endpoints don't take the parameter `model` as explained in this tutorial. You can verify that by going to [Azure AI Foundry portal]() > Models + endpoints, and verify that the model is listed under the section **Azure AI Services**.

 If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
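The tip in this file hinges on one concrete difference: a Foundry Models endpoint serves many models, so each request must name one via the `model` parameter, while a standard (serverless) deployment endpoint is bound to a single model and doesn't take that parameter. A Python sketch of the two payload shapes (illustrative only; the field names follow the chat-completions convention rather than any SDK's exact wire format):

```python
def chat_payload(messages, model=None):
    """Build a chat-completions body; include `model` only for Foundry Models endpoints."""
    body = {"messages": messages}
    if model is not None:
        body["model"] = model  # required when one endpoint serves many models
    return body


messages = [{"role": "user", "content": "Why is the sky blue?"}]

foundry_request = chat_payload(messages, model="Deepseek-R1")  # Azure AI Foundry resource
serverless_request = chat_payload(messages)                    # standard deployment endpoint

print("model" in foundry_request, "model" in serverless_request)  # → True False
```

Sending the `model` field to a standard deployment endpoint is therefore the usual symptom of pointing a Foundry Models client at the wrong kind of endpoint, which is what the portal check in the tip helps you rule out.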
