
Commit d65ca0e

update preview note for deepseek model

1 parent c63d54c

File tree: 1 file changed (+5, -4 lines)


articles/ai-studio/how-to/deploy-models-deepseek.md
Lines changed: 5 additions & 4 deletions
```diff
@@ -21,6 +21,7 @@ zone_pivot_groups: azure-ai-model-catalog-samples-chat
 In this article, you learn about DeepSeek-R1 and how to use it.
 DeepSeek-R1 uses a step-by-step training process to excel at reasoning tasks such as language, scientific reasoning, and coding. It features 671B total parameters, 37B active parameters, and a 128k context length.
 
+[!INCLUDE [models-preview](../includes/models-preview.md)]
 
 
 ::: zone pivot="programming-language-python"
```
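For orientation (not part of this commit), a minimal sketch of calling a DeepSeek-R1 deployment through the Azure AI model inference API with the `azure-ai-inference` Python package; the endpoint and key environment variable names are placeholders:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder environment variables; point them at your DeepSeek-R1 deployment.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# Send a single chat turn and print the model's reply.
response = client.complete(
    messages=[UserMessage(content="How many languages are in the world?")],
)

print(response.choices[0].message.content)
```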
```diff
@@ -240,7 +241,7 @@ print_stream(result)
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
```
```diff
@@ -507,7 +508,7 @@ for await (const event of sses) {
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
```
```diff
@@ -800,7 +801,7 @@ StreamMessageAsync(client).GetAwaiter().GetResult();
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
```
```diff
@@ -1086,7 +1087,7 @@ The last message in the stream has `finish_reason` set, indicating the reason fo
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
```
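The last hunk's context line mentions `finish_reason` in streamed responses. A sketch (again not part of the commit) of reading it with the same Python client, assuming streaming is enabled via `stream=True`:

```python
# Stream the completion; the final update carries finish_reason
# ("stop", "length", or "content_filter", among others).
result = client.complete(
    messages=[UserMessage(content="How many languages are in the world?")],
    stream=True,
)

for update in result:
    if update.choices:
        print(update.choices[0].delta.content or "", end="")
        if update.choices[0].finish_reason:
            print(f"\n[finish_reason: {update.choices[0].finish_reason}]")
```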