Commit 5d29d17

Merge pull request #734 from msakande/content-filtering-preview

add preview indicator for content filtering

2 parents: 4ac88bf + 361ad57

21 files changed (+56 -56 lines)

articles/ai-studio/how-to/deploy-models-cohere-command.md

Lines changed: 4 additions & 4 deletions
@@ -462,7 +462,7 @@ response = client.complete(
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -943,7 +943,7 @@ var result = await client.path("/chat/completions").post({
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -1455,7 +1455,7 @@ response = client.Complete(requestOptions);
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -2084,7 +2084,7 @@ View the response from the model:
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
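The handling examples that this repeated paragraph introduces are elided from the diff view. For orientation only, here is a minimal sketch of the pattern using the `azure-ai-inference` Python package; the environment variable names, messages, and printed text are illustrative assumptions and are not part of this commit.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

# Hypothetical environment variable names: point these at your
# serverless deployment's endpoint URL and key.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

try:
    response = client.complete(
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="An illustrative prompt."),
        ],
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    # A prompt rejected by Azure AI content safety comes back as
    # HTTP 400 with the error code "content_filter".
    if ex.status_code == 400 and ex.error and ex.error.code == "content_filter":
        print("Prompt blocked by Azure AI content safety:", ex.error.message)
    else:
        raise
```

Checking the returned error code, rather than parsing the message text, keeps a handler like this stable across model deployments.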

articles/ai-studio/how-to/deploy-models-jais.md

Lines changed: 4 additions & 4 deletions
@@ -244,7 +244,7 @@ response = client.complete(
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -512,7 +512,7 @@ var response = await client.path("/chat/completions").post({
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -802,7 +802,7 @@ Console.WriteLine($"Response: {response.Value.Choices[0].Message.Content}");
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -1125,7 +1125,7 @@ extra-parameters: pass-through
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.

articles/ai-studio/how-to/deploy-models-jamba.md

Lines changed: 2 additions & 2 deletions
@@ -217,7 +217,7 @@ The `document` object has the following fields:
 - `id` (optional; str) - unique identifier. will be linked to in citations. up to 128 characters.
 - `content` (required; str) - the content of the document
 - `metadata` (optional; array of **Metadata)**
-- `key` (required; str) - type of metadata, like author’, ‘date’, ‘url, etc. Should be things the model understands.
+- `key` (required; str) - type of metadata, like 'author', 'date', 'url', etc. Should be things the model understands.
 - `value` (required; str) - value of the metadata
 
 #### Request example
@@ -410,7 +410,7 @@ Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
 
 ## Content filtering
 
-Models deployed as a serverless API are protected by Azure AI content safety. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](/azure/ai-services/content-safety/overview).
+Models deployed as a serverless API are protected by Azure AI content safety. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](/azure/ai-services/content-safety/overview).
 
 ## Related content
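The article's own `#### Request example` is elided from this diff, so as a reading aid for the `document` fields listed in the first hunk above, a hypothetical payload might look like the following sketch; every value is invented for illustration.

```python
# Hypothetical `documents` payload assembled from the field list above;
# all values are illustrative, not taken from the article.
documents = [
    {
        "id": "doc-001",  # optional; cited by id; up to 128 characters
        "content": "Q2 revenue grew 12% year over year.",  # required
        "metadata": [  # optional array of key/value metadata entries
            {"key": "author", "value": "Finance team"},
            {"key": "date", "value": "2024-07-01"},
        ],
    }
]
```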

articles/ai-studio/how-to/deploy-models-llama.md

Lines changed: 4 additions & 4 deletions
@@ -323,7 +323,7 @@ The following extra parameters can be passed to Meta Llama models:
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -665,7 +665,7 @@ The following extra parameters can be passed to Meta Llama models:
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -1025,7 +1025,7 @@ The following extra parameters can be passed to Meta Llama models:
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
@@ -1410,7 +1410,7 @@ The following extra parameters can be passed to Meta Llama chat models:
 
 ### Apply content safety
 
-The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
