
Commit bd75d25

fix term
1 parent 79e0b88 commit bd75d25

28 files changed, +39 -39 lines changed

articles/ai-foundry/concepts/content-filtering.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ author: PatrickFarley
 
 The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
 
-With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Guidelines & controls for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
+With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Guardrails & controls for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
 
 ## Language support
 

articles/ai-foundry/concepts/model-catalog-content-safety.md

Lines changed: 5 additions & 5 deletions
@@ -1,7 +1,7 @@
 ---
-title: Guidelines & controls for models curated by Azure AI in the model catalog
+title: Guardrails & controls for models curated by Azure AI in the model catalog
 titleSuffix: Azure AI Foundry
-description: Learn about Guidelines & controls for models deployed using serverless APIs, using Azure AI Foundry.
+description: Learn about Guardrails & controls for models deployed using serverless APIs, using Azure AI Foundry.
 manager: scottpolly
 ms.service: azure-ai-foundry
 ms.topic: conceptual
@@ -13,18 +13,18 @@ reviewer: ositanachi
 ms.custom:
 ---
 
-# Guidelines & controls for models curated by Azure AI in the model catalog
+# Guardrails & controls for models curated by Azure AI in the model catalog
 
 [!INCLUDE [feature-preview](../includes/feature-preview.md)]
 
-In this article, learn about Guidelines & controls capabilities for models from the model catalog deployed using serverless APIs.
+In this article, learn about Guardrails & controls capabilities for models from the model catalog deployed using serverless APIs.
 
 
 ## Content filter defaults
 
 Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters to detect harmful content across four categories including hate and fairness, self-harm, sexual, and violence for models deployed via serverless APIs. To learn more about content filtering (preview), see [Understand harm categories](#understand-harm-categories).
 
-The default content filtering configuration for text models is set to filter at the medium severity threshold, filtering any detected content at this level or higher. For image models, the default content filtering configuration is set at the low configuration threshold, filtering at this level or higher. For models deployed using the [Azure AI model inference service](../../ai-foundry/model-inference/how-to/configure-content-filters.md), you can create configurable filters by selecting the **Content filters** tab within the **Safety + security** page of the Azure AI Foundry portal.
+The default content filtering configuration for text models is set to filter at the medium severity threshold, filtering any detected content at this level or higher. For image models, the default content filtering configuration is set at the low configuration threshold, filtering at this level or higher. For models deployed using the [Azure AI model inference service](../../ai-foundry/model-inference/how-to/configure-content-filters.md), you can create configurable filters by selecting the **Content filters** tab within the **Guardrails & controls** page of the Azure AI Foundry portal.
 
 > [!TIP]
 > Content filtering (preview) isn't available for certain model types that are deployed via serverless APIs. These model types include embedding models and time series models.
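The default threshold behavior described in this hunk (medium for text models, low for image models, filtering at that level or higher) can be sketched as a simple comparison. This is an illustrative sketch only; the severity names mirror Azure AI Content Safety's four-level scale, but the numeric values here are chosen purely to order the levels:

```python
# Illustrative sketch of threshold-based filtering, as described above.
# Numeric values are only for ordering the severity levels in this example.
SEVERITY = {"safe": 0, "low": 2, "medium": 4, "high": 6}

def is_filtered(detected: str, threshold: str) -> bool:
    """Content is blocked when its detected severity meets or exceeds
    the configured threshold."""
    return SEVERITY[detected] >= SEVERITY[threshold]

# Text models default to the medium threshold:
print(is_filtered("medium", "medium"))  # True: at or above the threshold
print(is_filtered("low", "medium"))     # False: below the threshold
# Image models default to the low threshold:
print(is_filtered("low", "low"))        # True
```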

articles/ai-foundry/concepts/models-featured.md

Lines changed: 1 addition & 1 deletion
@@ -364,5 +364,5 @@ For examples of how to use Stability AI models, see the following examples:
 - [Deploy models as serverless APIs](../how-to/deploy-models-serverless.md)
 - [Model catalog and collections in Azure AI Foundry portal](../how-to/model-catalog-overview.md)
 - [Region availability for models in serverless API endpoints](../how-to/deploy-models-serverless-availability.md)
-- [Guidelines & controls for models curated by Azure AI in the model catalog](model-catalog-content-safety.md)
+- [Guardrails & controls for models curated by Azure AI in the model catalog](model-catalog-content-safety.md)

articles/ai-foundry/how-to/deploy-models-gretel-navigator.md

Lines changed: 2 additions & 2 deletions
@@ -233,7 +233,7 @@ result = client.complete(
 ```
 
 
-### Apply Guidelines and controls
+### Apply Guardrails and controls
 
 The Azure AI model inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
@@ -477,7 +477,7 @@ The following example request shows other parameters that you can specify in the
 }
 ```
 
-### Apply Guidelines & controls
+### Apply Guardrails & controls
 
 The Azure AI model inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
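When a deployment with Azure AI Content Safety turned on blocks a completion, chat completion responses commonly signal this through the choice's finish reason. The check below is a minimal, hypothetical sketch; the `content_filter` finish-reason value and the dictionary response shape are assumptions for illustration, not details taken from this diff:

```python
# Hypothetical sketch: a filtered completion surfaces as a
# "content_filter" finish reason on a choice (assumed response shape).
def was_filtered(choice: dict) -> bool:
    """Return True if this choice appears to have been stopped by the
    content filtering system rather than completing normally."""
    return choice.get("finish_reason") == "content_filter"

print(was_filtered({"finish_reason": "content_filter"}))  # True
print(was_filtered({"finish_reason": "stop"}))            # False
```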

articles/ai-foundry/how-to/develop/evaluate-sdk.md

Lines changed: 1 addition & 1 deletion
@@ -508,7 +508,7 @@ Output:
 
 ```
 
-The result of the Guidelines & controls evaluators for a query and response pair is a dictionary containing:
+The result of the Guardrails & controls evaluators for a query and response pair is a dictionary containing:
 
 - `{metric_name}` provides a severity label for that content risk ranging from Very low, Low, Medium, and High. To learn more about the descriptions of each content risk and severity scale, see [Evaluation and monitoring metrics for generative AI](../../concepts/evaluation-metrics-built-in.md).
 - `{metric_name}_score` has a range between 0 and 7 severity level that maps to a severity label given in `{metric_name}`.
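The result shape described above can be illustrated with a hypothetical `violence` metric. The score-to-label boundaries below are an assumption consistent with the four labels the text names, not documented values:

```python
# Hypothetical example of an evaluator result for a single metric.
# The exact 0-7 score-to-label boundaries are assumed for illustration.
def severity_label(score: int) -> str:
    """Map a 0-7 severity score onto the four labels named in the docs."""
    if score <= 1:
        return "Very low"
    if score <= 3:
        return "Low"
    if score <= 5:
        return "Medium"
    return "High"

# A result dictionary shaped like the description above, using a
# hypothetical "violence" metric name:
result = {
    "violence": severity_label(2),
    "violence_score": 2,
}
print(result["violence"])  # Low
```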

articles/ai-foundry/how-to/flow-process-image.md

Lines changed: 2 additions & 2 deletions
@@ -113,8 +113,8 @@ Assume you want to build a chatbot that can answer any questions about the image
    In this example, `{{question}}` refers to the chat input, which is a list of texts and images.
 1. In *Outputs*, change the value of "answer" to the name of your vision tool's output, for example, `${gpt_vision.output}`.
    :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-output-definition.png" alt-text="Screenshot of chat output type configuration." lightbox = "../media/prompt-flow/how-to-process-image/chat-output-definition.png":::
-1. (Optional) You can add any custom logic to the flow to process the GPT-4V output. For example, you can add Guidelines & controls tool to detect if the answer contains any inappropriate content, and return a final answer to the user.
-   :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png" alt-text="Screenshot of processing gpt-4v output with Guidelines & controls tool." lightbox = "../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png":::
+1. (Optional) You can add any custom logic to the flow to process the GPT-4V output. For example, you can add Guardrails & controls tool to detect if the answer contains any inappropriate content, and return a final answer to the user.
+   :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png" alt-text="Screenshot of processing gpt-4v output with Guardrails & controls tool." lightbox = "../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png":::
 1. Now you can **test the chatbot**. Open the chat window, and input any questions with images. The chatbot will answer the questions based on the image and text inputs. The chat input value is automatically backfilled from the input in the chat window. You can find the texts with images in the chat box which is translated into a list of texts and images.
    :::image type="content" source="../media/prompt-flow/how-to-process-image/chatbot-test.png" alt-text="Screenshot of chatbot interaction with images." lightbox = "../media/prompt-flow/how-to-process-image/chatbot-test.png":::
 

articles/ai-foundry/how-to/model-catalog-overview.md

Lines changed: 3 additions & 3 deletions
@@ -68,7 +68,7 @@ Features | Managed compute | Serverless API (pay-per-token)
 --|--|--
 Deployment experience and billing | Model weights are deployed to dedicated virtual machines with managed compute. A managed compute, which can have one or more deployments, makes available a REST API for inference. You're billed for the virtual machine core hours that the deployments use. | Access to models is through a deployment that provisions an API to access the model. The API provides access to the model that Microsoft hosts and manages, for inference. You're billed for inputs and outputs to the APIs, typically in tokens. Pricing information is provided before you deploy.
 API authentication | Keys and Microsoft Entra authentication. | Keys only.
-Guidelines & controls | Use Azure AI Content Safety service APIs. | Azure AI Content Safety filters are available integrated with inference APIs. Azure AI Content Safety filters are billed separately.
+Guardrails & controls | Use Azure AI Content Safety service APIs. | Azure AI Content Safety filters are available integrated with inference APIs. Azure AI Content Safety filters are billed separately.
 Network isolation | [Configure managed networks for Azure AI Foundry hubs](configure-managed-network.md). | Managed compute follow your hub's public network access (PNA) flag setting. For more information, see the [Network isolation for models deployed via Serverless APIs](#network-isolation-for-models-deployed-via-serverless-apis) section later in this article.
 
 ### Available models for supported deployment options
@@ -119,7 +119,7 @@ Learn more about deploying models:
 
 The *prompt flow* feature in Azure Machine Learning offers a great experience for prototyping. You can use models deployed with managed compute in prompt flow with the [Open Model LLM tool](/azure/machine-learning/prompt-flow/tools-reference/open-model-llm-tool). You can also use the REST API exposed by managed compute in popular LLM tools like LangChain with the [Azure Machine Learning extension](https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint/).
 
-### Guidelines & controls for models deployed as managed compute
+### Guardrails & controls for models deployed as managed compute
 
 The [Azure AI Content Safety](../../ai-services/content-safety/overview.md) service is available for use with managed compute to screen for various categories of harmful content, such as sexual content, violence, hate, and self-harm. You can also use the service to screen for advanced threats such as jailbreak risk detection and protected material text detection.
 
@@ -162,7 +162,7 @@ In Azure AI Foundry portal, you can use vector indexes and retrieval-augmented g
 
 Pay-per-token billing is available only to users whose Azure subscription belongs to a billing account in a country/region where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a project resource in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints | Azure AI Foundry](deploy-models-serverless-availability.md) for detailed information.
 
-### Guidelines & controls for models deployed via serverless APIs
+### Guardrails & controls for models deployed via serverless APIs
 
 [!INCLUDE [content-safety-serverless-models](../includes/content-safety-serverless-models.md)]
 

articles/ai-foundry/how-to/prompt-flow-tools/content-safety-tool.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ ms.collection: ce-skilling-ai-copilot, ce-skilling-fresh-tier1
 ms.update-cycle: 180-days
 ---
 
-# Guidelines & controls tool for flows in Azure AI Foundry portal
+# Guardrails & controls tool for flows in Azure AI Foundry portal
 
 [!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
articles/ai-foundry/includes/content-safety-serverless-apis-note.md

Lines changed: 1 addition & 1 deletion
@@ -13,4 +13,4 @@ ms.custom: include file
 ---
 
 > [!NOTE]
-> Azure AI Content Safety is currently available for models deployed as standard deployment, but not to models deployed via managed compute. To learn more about Azure AI Content Safety for models deployed as standard deployment, see [Guidelines & controls for models curated by Azure AI in the model catalog](../concepts/model-catalog-content-safety.md).
+> Azure AI Content Safety is currently available for models deployed as standard deployment, but not to models deployed via managed compute. To learn more about Azure AI Content Safety for models deployed as standard deployment, see [Guardrails & controls for models curated by Azure AI in the model catalog](../concepts/model-catalog-content-safety.md).

articles/ai-foundry/includes/content-safety-serverless-models.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ ms.custom: include file
 # Also used in Azure Machine Learning documentation
 ---
 
-For language models deployed via standard deployment, Azure AI implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering (preview), see [Guidelines & controls for models curated by Azure AI in the model catalog](../concepts/model-catalog-content-safety.md).
+For language models deployed via standard deployment, Azure AI implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering (preview), see [Guardrails & controls for models curated by Azure AI in the model catalog](../concepts/model-catalog-content-safety.md).
 
 > [!TIP]
 > Content filtering (preview) is not available for certain model types that are deployed via serverless APIs. These model types include embedding models and time series models.
