
Commit 648ac2b

Merge pull request #280044 from ositanachi/patch-5
Update concept-model-catalog.md
2 parents: d826592 + b873ea7

File tree

1 file changed: +3 −1 lines changed


articles/machine-learning/concept-model-catalog.md

Lines changed: 3 additions & 1 deletion
@@ -153,7 +153,9 @@ Phi-3-medium-4k-instruct, Phi-3-medium-128k-instruct | [Microsoft Managed Count
 
 [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
 
-Azure Machine Learning implements a default configuration of [Azure AI Content Safety](../ai-services/content-safety/overview.md) text moderation filters for harmful content (hate, self-harm, sexual, and violence) for language models deployed with MaaS. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../ai-services/content-safety/concepts/harm-categories.md). Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you may be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints when you first deploy a language model or in the deployment details page by selecting the content filtering toggle. You may be at higher risk of exposing users to harmful content if you turn off content filters.
+For language models deployed to MaaS, Azure Machine Learning implements a default configuration of [Azure AI Content Safety](../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../ai-services/content-safety/concepts/harm-categories.md).
+
+Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you might be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints either at the time when you first deploy a language model or in the deployment details page by selecting the content filtering toggle. If you use a model in MaaS via an API other than the [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md), content filtering isn't enabled unless you implement it separately by using [Azure AI Content Safety](../ai-services/content-safety/quickstart-text.md). If you use a model in MaaS without content filtering, you run a higher risk of exposing users to harmful content.
 
 ## Learn more
 