# Configure Transmit Security with Azure Active Directory B2C for Fraud Prevention
In this tutorial, learn to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Transmit Security's Detection and Response Services (DRS)](https://transmitsecurity.com/platform/detection-and-response). Transmit Security allows you to detect risk in customer interactions on digital channels, and to enable informed identity and trust decisions across the consumer experience.
File: articles/ai-services/content-safety/language-support.md
# Language support for Azure AI Content Safety
> [!IMPORTANT]
> Azure AI Content Safety features not listed in this article, such as Prompt Shields, Protected material detection, Groundedness detection, and Custom categories (rapid), only support English.
| Feature | Description |
| ------- | ----------- |
| [Prompt Shields](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a large language model. [Quickstart](./quickstart-jailbreak.md) |
| [Groundedness detection](/rest/api/cognitiveservices/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) |
| [Protected material text detection](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md) |
| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories-rapid.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md) |
| [Analyze text](/rest/api/cognitiveservices/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self-harm with multiple severity levels. |
| [Analyze image](/rest/api/cognitiveservices/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self-harm with multiple severity levels. |
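As an illustration of calling one of these operations, the following sketch assembles an Analyze text request without sending it. This is a hedged example, not the article's own code: the `text:analyze` path and the `2023-10-01` api-version are assumptions to verify against the REST reference, and the endpoint and key values are placeholders.

```python
import json

API_VERSION = "2023-10-01"  # assumed api-version; confirm against the Analyze text REST reference


def build_analyze_text_request(endpoint: str, key: str, text: str) -> dict:
    """Assemble the URL, headers, and JSON body for an Analyze text call."""
    if len(text) > 10_000:
        # the service's default maximum for text submissions is 10K characters
        raise ValueError("Analyze text accepts at most 10K characters per call")
    return {
        "url": f"{endpoint}/contentsafety/text:analyze?api-version={API_VERSION}",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # resource key; placeholder here
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }


# placeholder resource endpoint and key
req = build_analyze_text_request(
    "https://<resource>.cognitiveservices.azure.com", "<key>", "Sample text"
)
```

You would then POST `req["body"]` to `req["url"]` with any HTTP client; the response carries per-category severity levels.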
## Content Safety Studio
Currently, Azure AI Content Safety has an **F0 and S0** pricing tier.
### Input requirements
See the following list for the input requirements for each feature.

- **Analyze text API**:
  - Default maximum length: 10K characters (split longer texts as needed).
- **Analyze image API**:
  - Maximum image file size: 4 MB.
  - Dimensions between 50 x 50 and 2,048 x 2,048 pixels.
  - Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
- **Prompt Shields (preview)**:
  - Maximum prompt length: 10K characters.
  - Up to five documents with a total of 10K characters.
- **Groundedness detection (preview)**:
  - Maximum length for grounding sources: 55,000 characters (per API call).
  - Maximum text and query length: 7,500 characters.
- **Protected material detection (preview)**:
  - Default maximum length: 1K characters.
  - Minimum length: 111 characters (for scanning LLM completions, not user prompts).
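Splitting longer text to meet the Analyze text 10K-character limit can be sketched as below. This is an illustrative helper, not part of the service SDK; it breaks on whitespace where possible so words stay intact, as one example of splitting "by punctuation or spacing".

```python
def chunk_text(text: str, max_len: int = 10_000) -> list[str]:
    """Split text into chunks of at most max_len characters,
    preferring to break at whitespace so words stay intact."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_len, len(text))
        if end < len(text):
            # back up to the last space inside the window, if there is one
            cut = text.rfind(" ", start, end)
            if cut > start:
                end = cut
        chunk = text[start:end].strip()
        if chunk:
            chunks.append(chunk)
        start = end
    return chunks
```

Each returned chunk can then be sent as its own related submission to the Analyze text API.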
### Language support
To use the Content Safety APIs, you must create your Azure AI Content Safety resource in the supported regions. Currently, the Content Safety features are available in the following Azure regions:
File: articles/ai-services/openai/how-to/use-your-data-securely.md
Make sure your sign-in credential has the `Cognitive Services OpenAI Contributor` role.
### Ingestion API
See the [ingestion API reference article](/rest/api/azureopenai/ingestion-jobs?context=/azure/ai-services/openai/context/context) for details on the request and response objects used by the ingestion API.
More notes:

* `JOB_NAME` in the API path is used as the index name in Azure AI Search.
* Use the `Authorization` header rather than `api-key`.
* Explicitly set the `storageEndpoint` header.
* Use the `ResourceId=` format for the `storageConnectionString` header, so that Azure OpenAI and Azure AI Search use managed identity to authenticate to the storage account. This is required to bypass network restrictions.
* **Do not** set the `searchServiceAdminKey` header. The system-assigned identity of the Azure OpenAI resource is used to authenticate to Azure AI Search.
* **Do not** set `embeddingEndpoint` or `embeddingKey`. Instead, use the `embeddingDeploymentName` header to enable text vectorization.