## Streaming
Azure OpenAI Service includes a content filtering system that works alongside core models. The following sections describe the Azure OpenAI streaming experience and options in the context of content filters.
### Default
The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered and the content filtering system runs on the buffered content. Depending on the content filtering configuration, content is either returned to the user if it doesn't violate the content filtering policy (Microsoft's default or a custom user configuration), or it's immediately blocked and a content filtering error is returned, without returning the harmful completion content. This process is repeated until the end of the stream. Content is fully vetted according to the content filtering policy before it's returned to the user. In this case, content isn't returned token by token, but in “content chunks” of the respective buffer size.
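For illustration, here's a minimal sketch of consuming a default (buffered) stream. It assumes the pre-1.0 `openai` Python SDK; the endpoint, key, deployment name, and prompt are placeholders, not values from this article:

```python
import openai

# Placeholder Azure OpenAI connection details (assumptions, not from this article).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-06-01-preview"
openai.api_key = "YOUR-API-KEY"

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # your deployment name
    messages=[{"role": "user", "content": "List three colors."}],
    stream=True,
)

for chunk in response:
    if not chunk["choices"]:
        continue  # e.g., a prompt annotation message, which has an empty choices list
    choice = chunk["choices"][0]
    # In the default scenario, each delta is a vetted content chunk, not a single token.
    print(choice.get("delta", {}).get("content", ""), end="")
    if choice.get("finish_reason") == "content_filter":
        print("\n[Stream stopped by the content filtering policy]")
```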
### Asynchronous modified filter
Customers who have been approved for modified content filters can choose the Asynchronous Modified Filter as an additional option, providing a new streaming experience. In this case, content filters run asynchronously and completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, which allows for zero filtering-added latency in this context.
> [!NOTE]
> Customers must be aware that while the feature improves latency, it involves a trade-off in terms of the safety and real-time vetting of smaller sections of model output. Because content filters run asynchronously, content moderation messages and the content filtering signal in case of a policy violation are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend that you consume annotations in your app and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
**Content filtering signal**: The content filtering error signal is delayed. In the case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within ~1,000-character windows of the policy-violating content.
Approval for Modified Content Filtering is required for access to Streaming – Asynchronous Modified Filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it via Azure OpenAI Studio, follow the instructions [here](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Modified Filter** in the Streaming section.
| | Streaming - Default | Streaming - Asynchronous Modified Filter |
|---|---|---|
| Access | Enabled by default; no action needed | Customers approved for Modified Content Filtering can configure it directly via Azure OpenAI Studio (as part of a content filtering configuration, applied at the deployment level) |
| Modality and availability | Text; all GPT models | Text; all GPT models except gpt-4-vision |
| Streaming experience | Content is buffered and returned in chunks | Zero latency (no buffering; filters run asynchronously) |
| Content filtering signal | Immediate filtering signal | Delayed filtering signal (in up to ~1,000-character increments) |
| Content filtering configurations | Supports default and any customer-defined filter setting (including optional models) | Supports default and any customer-defined filter setting (including optional models) |
### Annotations and sample response stream
#### Prompt annotation message
Prompt annotations are the same as in the default streaming experience.
```json
data: {
    "id": "",
    "object": "",
    "created": 0,
    "model": "",
    "prompt_filter_results": [
        {
            "prompt_index": 0,
            "content_filter_results": { ... }
        }
    ],
    "choices": [],
    "usage": null
}
```
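As an illustration only, here's a minimal sketch of reading such a message after parsing. The per-category shape (`filtered` and `severity` fields under categories such as `hate` or `violence`) is an assumption here, since the article elides it with `{ ... }`:

```python
# `message` is assumed to be the parsed prompt annotation payload shown above.
for result in message.get("prompt_filter_results", []):
    index = result["prompt_index"]
    # Assumed per-category shape: {"hate": {"filtered": bool, "severity": str}, ...}
    for category, outcome in result["content_filter_results"].items():
        if outcome.get("filtered"):
            print(f"Prompt {index} was flagged for {category} "
                  f"(severity: {outcome.get('severity')})")
```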
#### Completion token message
Completion messages are forwarded immediately. No moderation is performed first, and no annotations are provided initially.
```json
data: {
    "id": "chatcmpl-7rAJvsS1QQCDuZYDDdQuMJVMV3x3N",
    "object": "chat.completion.chunk",
    "created": 1692905411,
    "model": "gpt-35-turbo",
    "choices": [
        {
            "index": 0,
            "finish_reason": null,
            "delta": {
                "content": "Color"
            }
        }
    ],
    "usage": null
}
```
#### Annotation message
The text field will always be an empty string, indicating no new tokens. Annotations are only relevant to already-sent tokens. There may be multiple annotation messages referring to the same tokens.
`start_offset` and `end_offset` are low-granularity offsets in the text (with 0 at the beginning of the prompt) to which the annotation is relevant.
`check_offset` represents how much text has been fully moderated. It's an exclusive lower bound on the `end_offset` values of future annotations. It's nondecreasing.
```json
data: {
    "id": "",
    "object": "",
    "created": 0,
    "model": "",
    "choices": [
        {
            "index": 0,
            "finish_reason": null,
            "content_filter_results": { ... },
            "content_filter_raw": [ ... ],
            "content_filter_offsets": {
                "check_offset": 44,
                "start_offset": 44,
                "end_offset": 198
            }
        }
    ],
    "usage": null
}
```
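To make the annotation-handling recommendation above concrete, here's a minimal sketch of a consumer that displays token messages immediately and retroactively redacts spans flagged by annotation messages. It assumes the pre-1.0 `openai` Python SDK, the per-category result shape noted earlier, and a known `prompt_length` (offsets are measured from the beginning of the prompt):

```python
prompt_length = 0  # Assumption: length of the prompt text, since offsets start there.
shown = ""         # Completion text already displayed to the user.

for chunk in response:  # `response` created with stream=True, as in the earlier sketch.
    for choice in chunk.get("choices", []):
        # Token message: display new content immediately (no buffering).
        content = choice.get("delta", {}).get("content")
        if content:
            shown += content
        # Annotation message: applies only to tokens that were already sent.
        offsets = choice.get("content_filter_offsets")
        if not offsets:
            continue
        results = choice.get("content_filter_results", {})
        if any(cat.get("filtered") for cat in results.values()):
            # Map prompt-relative offsets into the completion text and redact.
            start = max(offsets["start_offset"] - prompt_length, 0)
            end = min(offsets["end_offset"] - prompt_length, len(shown))
            shown = shown[:start] + "█" * (end - start) + shown[end:]
```

An app that prefers fully vetted output could instead display only text up to `check_offset`, trading latency for safety.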
740
+
741
+
742
+
### Sample response stream
Below is a real chat completion response using Asynchronous Modified Filter. Note how the prompt annotations are not changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens; instead they're associated with certain content filter offsets.
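The full sample isn't reproduced here; the abbreviated stream below is an illustrative reconstruction assembled from the message formats above (IDs, tokens, and offsets are reused from those examples, and the terminating `data: [DONE]` event is the standard server-sent-events sentinel):

```json
data: {"id":"","object":"","created":0,"model":"","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{ ... }}],"choices":[],"usage":null}

data: {"id":"chatcmpl-7rAJvsS1QQCDuZYDDdQuMJVMV3x3N","object":"chat.completion.chunk","created":1692905411,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":"Color"}}],"usage":null}

data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_reason":null,"content_filter_results":{ ... },"content_filter_offsets":{"check_offset":44,"start_offset":44,"end_offset":198}}],"usage":null}

data: {"id":"chatcmpl-7rAJvsS1QQCDuZYDDdQuMJVMV3x3N","object":"chat.completion.chunk","created":1692905411,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":"stop","delta":{}}],"usage":null}

data: [DONE]
```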
As part of your application design, consider the following best practices to deliver a positive experience with your application while minimizing potential harms: