articles/ai-services/openai/how-to/risks-safety-monitor.md
The **Potentially abusive user detection** pane leverages user-level abuse reporting to show information about users whose behavior has resulted in blocked content. The goal is to help you get a view of the sources of harmful content so you can take responsive actions to ensure the model is being used in a responsible way.

To use Potentially abusive user detection, you need:

- A content filter configuration applied to your deployment.
- User ID information sent in your Chat Completions requests (see the _user_ parameter of the [Completions API](/azure/ai-services/openai/reference#completions), for example).

### Set up your Azure Data Explorer database

To protect the data privacy of user information and to manage permissions on the data, you can bring your own storage to keep detailed potentially abusive user detection insights (including the user GUID and statistics on harmful requests by category) stored in a compliant way and under your full control. Follow these steps to enable it:

1. In Azure OpenAI Studio, navigate to the model deployment that you'd like to set up user abuse analysis for, and select **Add a data store**.
1. Fill in the required information and select **Save**. We recommend you create a new database to store the analysis results.
1. After you connect the data store, take the following steps to grant permission to write analysis results to the connected database:
    1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.
    1. Set the status to **On** for the system-assigned identity, and copy the ID that's generated.
    1. Go to your Azure Data Explorer resource in the Azure portal, choose **Databases**, and then choose the specific database you created to store user analysis results.
    1. Select **Permissions**, and add an **admin** role to the database.
    1. Paste the Azure OpenAI identity you copied in the earlier step, and select the matching result. Your Azure OpenAI resource's identity is now authorized to read and write to the connected database.
1. Grant access to the connected Azure Data Explorer database to the users who need to view the analysis results:
    1. Go to the Azure Data Explorer resource you've connected, choose **Access control**, and add a **reader** role on the Azure Data Explorer cluster for the users who need to access the results.
    1. Choose **Databases** and choose the specific database that's connected to store user-level abuse analysis results. Choose **Permissions** and add the **reader** role of the database for the users who need to access the results.
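The _user_ parameter from the prerequisites above is what ties requests back to individual end users. As a minimal Python sketch, the request body can carry a stable pseudonymous GUID for each user; the helper name and account ID below are hypothetical, and only the `user` and `messages` fields come from the API reference:

```python
import json
import uuid

def build_chat_request(messages: list[dict], end_user_id: str) -> dict:
    """Return a Chat Completions request body that carries the end-user ID."""
    return {
        "messages": messages,
        # The "user" field should be a stable, non-identifying GUID for the
        # end user, not a raw email address or name.
        "user": end_user_id,
    }

# Derive a stable pseudonymous GUID from an internal account ID; uuid5 is
# deterministic for the same namespace + name, so the same account always
# maps to the same GUID across requests.
account_id = "account-12345"  # hypothetical internal identifier
end_user_guid = str(uuid.uuid5(uuid.NAMESPACE_URL, account_id))

body = build_chat_request(
    [{"role": "user", "content": "Hello"}], end_user_guid
)
print(json.dumps(body))
```

Because the GUID is derived deterministically rather than generated per request, the abuse analysis can aggregate all of a user's requests under one identity without your storing personal data in the `user` field.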
### Report description
The potentially abusive user detection relies on the user information that customers send with their Azure OpenAI API calls, together with the request content. The following insights are shown:
- **Total potentially abusive user count**: This view shows the number of detected potentially abusive users over time. These are users for whom a pattern of abuse was detected and who might introduce high risk.
- **Potentially abusive users list**: This view is a detailed list of detected potentially abusive users. It gives the following information for each user:
    - **UserGUID**: This is sent by the customer through the "user" field in the Azure OpenAI APIs.
    - **Abuse score**: This is a figure generated by the model analyzing each user's requests and behavior. The score is normalized to a range of 0-1. A higher score indicates a higher abuse risk.
    - **Abuse score trend**: The change in the **Abuse score** during the selected time range.
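As a sketch of how report rows shaped like the fields above might be triaged, the following filters and ranks users for manual review. The sample rows, field names, and review threshold are invented for illustration; they are not real product output:

```python
# Sample rows mirroring the report description: a user GUID, an abuse
# score normalized to [0, 1], and the score's change over the time range.
rows = [
    {"user_guid": "user-a", "abuse_score": 0.91, "score_trend": 0.20},
    {"user_guid": "user-b", "abuse_score": 0.35, "score_trend": -0.05},
    {"user_guid": "user-c", "abuse_score": 0.78, "score_trend": 0.10},
]

REVIEW_THRESHOLD = 0.7  # hypothetical cut-off for manual review

# Surface the riskiest users first: highest score, then steepest upward trend.
to_review = sorted(
    (r for r in rows if r["abuse_score"] >= REVIEW_THRESHOLD),
    key=lambda r: (r["abuse_score"], r["score_trend"]),
    reverse=True,
)
print([r["user_guid"] for r in to_review])  # ['user-a', 'user-c']
```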
### Recommended actions
Combine this data with enriched signals to validate whether the detected users are truly abusive. If they are, take responsive action, such as throttling or suspending the user, to ensure the responsible use of your application.
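A minimal sketch of such responsive action, assuming your application tracks requests per user GUID on its own side; the `UserGate` class, its limits, and its behavior are illustrative and not an Azure OpenAI feature:

```python
class UserGate:
    """Suspend validated abusers outright, or throttle their request rate."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.suspended: set[str] = set()
        self.history: dict[str, list[float]] = {}

    def suspend(self, user_guid: str) -> None:
        self.suspended.add(user_guid)

    def allow(self, user_guid: str, now: float) -> bool:
        """Return True if this user's request may proceed at time `now`."""
        if user_guid in self.suspended:
            return False
        # Keep only the timestamps still inside the sliding window.
        recent = [t for t in self.history.get(user_guid, []) if now - t < self.window]
        if len(recent) >= self.max_requests:
            return False  # over the rate limit: throttle
        recent.append(now)
        self.history[user_guid] = recent
        return True

gate = UserGate(max_requests=2, window_seconds=60)
gate.suspend("user-a")
print(gate.allow("user-a", 0.0))  # suspended user is rejected
print(gate.allow("user-b", 0.0))
print(gate.allow("user-b", 1.0))
print(gate.allow("user-b", 2.0))  # third call within the window is throttled
```

In practice the suspended set and request history would live in shared storage so the gate applies across application instances, and suspension decisions would follow a human review of the report data rather than the raw abuse score alone.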