Commit d6bd121

enable new feature
1 parent 5b43b57 commit d6bd121

File tree

1 file changed: +9 -7 lines changed

articles/ai-services/openai/how-to/risks-safety-monitor.md

Lines changed: 9 additions & 7 deletions
@@ -45,7 +45,7 @@ Adjust your content filter configuration to further align with business needs an
 
 The **Potentially abusive user detection** pane leverages user-level abuse reporting to show information about users whose behavior has resulted in blocked content. The goal is to help you get a view of the sources of harmful content so you can take responsive actions to ensure the model is being used in a responsible way.
 
-<!--
+
 To use Potentially abusive user detection, you need:
 - A content filter configuration applied to your deployment.
 - You must be sending user ID information in your Chat Completion requests (see the _user_ parameter of the [Completions API](/azure/ai-services/openai/reference#completions), for example).
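The `user` requirement uncommented above can be sketched as follows. This is a minimal example assuming the `openai` Python SDK v1.x against an Azure endpoint; the `pseudonymous_user_id` helper, endpoint, and deployment names are hypothetical illustrations, not part of the documented API:

```python
import hashlib
import uuid

def pseudonymous_user_id(internal_user_id: str) -> str:
    """Derive a stable, non-reversible GUID from an internal user ID.

    A hypothetical helper: hashing avoids sending raw identifiers,
    while the same internal ID always maps to the same GUID, so abuse
    statistics aggregate correctly per user across requests.
    """
    digest = hashlib.sha256(internal_user_id.encode("utf-8")).digest()
    return str(uuid.UUID(bytes=digest[:16]))

user_guid = pseudonymous_user_id("customer-12345")

# Assuming the openai Python SDK v1.x with an Azure endpoint:
# from openai import AzureOpenAI
# client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                      api_key="<key>", api_version="<api-version>")
# client.chat.completions.create(
#     model="<deployment-name>",
#     messages=[{"role": "user", "content": "Hello"}],
#     user=user_guid,  # enables per-user abuse detection
# )
print(user_guid)
```

The GUID is deterministic per internal user, which is what the per-user reporting needs; any stable pseudonymous identifier works equally well.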
@@ -55,22 +55,25 @@ To use Potentially abusive user detection, you need:
 
 ### Set up your Azure Data Explorer database
 
-In order to protect the data privacy of user information and manage the permission of the data, we support the option for our customers to bring their own storage to store potentially abusive user detection insights in a compliant way and with full control. Follow these steps to enable it:
+In order to protect the data privacy of user information and manage the permission of the data, we support the option for our customers to bring their own storage to get the detailed potentially abusive user detection insights (including the user GUID and statistics on harmful requests by category) stored in a compliant way and with full control. Follow these steps to enable it:
 1. In Azure OpenAI Studio, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**.
-1. Fill in the required information and select **add**. We recommend you create a new database to store the analysis results.
-1. After you connect the data store, take the following steps to grant permission:
+1. Fill in the required information and select **Save**. We recommend you create a new database to store the analysis results.
+1. After you connect the data store, take the following steps to grant permission to write analysis results to the connected database:
     1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.
     1. Turn the status to **On** for system assigned identity, and copy the ID that's generated.
     1. Go to your Azure Data Explorer resource in the Azure portal, choose **databases**, and then choose the specific database you created to store user analysis results.
     1. Select **permissions**, and add an **admin** role to the database.
    1. Paste the Azure OpenAI identity ID generated in the earlier step, and select the matching search result. Now your Azure OpenAI resource's identity is authorized to read/write to the storage account.
--->
+1. Grant access to the connected Azure Data Explorer database to the users who need to view the analysis results:
+    1. Go to the Azure Data Explorer resource you've connected, choose **access control**, and add the **reader** role on the Azure Data Explorer cluster for the users who need to access the results.
+    1. Choose **databases** and choose the specific database that's connected to store user-level abuse analysis results. Choose **permissions** and add the **reader** role on the database for the users who need to access the results.
+
 
 ### Report description
 
 The potentially abusive user detection relies on the user information that customers send with their Azure OpenAI API calls, together with the request content. The following insights are shown:
 - **Total potentially abusive user count**: This view shows the number of detected potentially abusive users over time. These are users for whom a pattern of abuse was detected and who might introduce high risk.
-<!-- - **Potentially abusive users list**: This view is a detailed list of detected potentially abusive users. It gives the following information for each user:
+- **Potentially abusive users list**: This view is a detailed list of detected potentially abusive users. It gives the following information for each user:
     - **UserGUID**: This is sent by the customer through the "user" field in the Azure OpenAI APIs.
     - **Abuse score**: This is a figure generated by the model analyzing each user's requests and behavior. The score is normalized to 0-1. A higher score indicates a higher abuse risk.
     - **Abuse score trend**: The change in **Abuse score** during the selected time range.
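Once the analysis results land in the connected Azure Data Explorer database, they can be read back like any other Kusto table. The sketch below is a hedged illustration: the table and column names (`PotentiallyAbusiveUsers`, `UserGUID`, `AbuseScore`, `Timestamp`) are assumptions, not the documented schema, so check what the service actually writes to your database; the commented-out client calls assume the `azure-kusto-data` package:

```python
# Hypothetical KQL query over the stored insights. The UserGUID and
# abuse score fields mirror the report description; table/column names
# are illustrative assumptions, not the service's documented schema.
query = """
PotentiallyAbusiveUsers
| where Timestamp > ago(7d)
| summarize MaxAbuseScore = max(AbuseScore) by UserGUID
| where MaxAbuseScore > 0.8
| order by MaxAbuseScore desc
"""

# Assuming the azure-kusto-data package and Azure CLI login:
# from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
# kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
#     "https://<cluster>.<region>.kusto.windows.net")
# client = KustoClient(kcsb)
# response = client.execute("<your-database>", query)
# for row in response.primary_results[0]:
#     print(row["UserGUID"], row["MaxAbuseScore"])
print(query.strip().splitlines()[0])  # → PotentiallyAbusiveUsers
```

Note that this requires the **reader** roles granted in the setup steps; the write-side **admin** role belongs only to the Azure OpenAI resource's managed identity.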
@@ -83,7 +86,6 @@ The potentially abusive user detection relies on the user information that custo
 ### Recommended actions
 
 Combine this data with enriched signals to validate whether the detected users are truly abusive or not. If they are, then take responsive action such as throttling or suspending the user to ensure the responsible use of your application.
--->
 
 ## Next steps
 
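The "recommended actions" guidance above can be sketched as a simple decision rule: combine the abuse score from the report with your own enriched signals before throttling or suspending anyone. The thresholds and the `confirmed_reports` signal are illustrative assumptions, not values from the service:

```python
# Illustrative thresholds -- tune these against your own traffic.
SUSPEND_THRESHOLD = 0.9
THROTTLE_THRESHOLD = 0.7

def recommended_action(abuse_score: float, confirmed_reports: int) -> str:
    """Map a detected user's abuse score (normalized 0-1) plus an
    enriched signal (here: confirmed abuse reports from a hypothetical
    internal moderation queue) to an action for your application."""
    if abuse_score >= SUSPEND_THRESHOLD and confirmed_reports > 0:
        return "suspend"
    if abuse_score >= THROTTLE_THRESHOLD:
        return "throttle"
    return "allow"

print(recommended_action(0.95, confirmed_reports=2))  # → suspend
print(recommended_action(0.75, confirmed_reports=0))  # → throttle
print(recommended_action(0.30, confirmed_reports=0))  # → allow
```

Requiring a corroborating signal before suspension reflects the doc's point that the score alone flags *potential* abuse; validation comes from your own data.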
0 commit comments
