Commit 2b4643c

Update articles/ai-studio/how-to/model-catalog-overview.md
1 parent 724820d · commit 2b4643c

File tree

1 file changed (+1, -0 lines)


articles/ai-studio/how-to/model-catalog-overview.md

Lines changed: 1 addition & 0 deletions
@@ -135,6 +135,7 @@ Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Co
 Azure AI Studio implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters for harmful content (hate, self-harm, sexual, and violence) in language models deployed with MaaS. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../../ai-services/content-safety/concepts/harm-categories.md). Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you may be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering for individual serverless endpoints when you first deploy a language model or in the deployment details page by clicking the content filtering toggle. You may be at higher risk of exposing users to harmful content if you turn off content filters.
 
 
+
 ## Next steps
 
 - [Explore Azure AI foundation models in Azure AI Studio](models-foundation-azure-ai.md)
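For context, the content filters described in the changed paragraph map to the four default Azure AI Content Safety harm categories (hate, self-harm, sexual, violence). The following is a minimal sketch of screening text against those categories with the standalone `azure-ai-contentsafety` Python SDK; the endpoint and key are placeholders, and this standalone check is separate from the built-in filtering that Azure AI Studio applies to MaaS deployments.

```python
# Minimal sketch: analyze text against the default harm categories
# (hate, self-harm, sexual, violence) using the standalone Azure AI
# Content Safety SDK. Endpoint and key are placeholder values.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

result = client.analyze_text(AnalyzeTextOptions(text="Example prompt to screen."))

# Each entry reports a harm category and a severity score; higher severity
# means the text is more likely to be blocked by a filter at that threshold.
for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```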

0 commit comments
