articles/ai-foundry/how-to/develop/simulator-interaction-data.md
13 additions & 13 deletions
@@ -71,7 +71,7 @@ Prepare the text for generating the input to the simulator:
 - **Page retrieval**: Fetches the Wikipedia page for the identified title.
 - **Text extraction**: Extracts the first 5,000 characters of the page summary to use as input for the simulator.
 
-### Specify application Prompty
+### Specify the application Prompty file
 
 The following `application.prompty` file specifies how a chat application behaves:
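The two preparation steps in the hunk above (page retrieval, then truncation to 5,000 characters) can be sketched as follows. This is a hypothetical helper, not the article's code; in particular, the use of the third-party `wikipedia` package is an assumption about how retrieval might be done.

```python
# Hedged sketch of the simulator-input preparation steps. The `wikipedia`
# package call is an assumption, not the article's verified retrieval code.
def truncate_for_simulator(text: str, limit: int = 5000) -> str:
    """Text extraction: keep only the first `limit` characters."""
    return text[:limit]

def fetch_simulator_input(title: str) -> str:
    """Page retrieval: fetch the Wikipedia page, then truncate its summary."""
    import wikipedia  # third-party package; requires network access
    page = wikipedia.page(title)
    return truncate_for_simulator(page.summary)
```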
@@ -111,7 +111,7 @@ given the conversation history:
 {{ conversation_history }}
 ```
 
-### Specify target callback that you want to simulate against
+### Specify the target callback to simulate against
 
 You can bring any application endpoint to simulate against by specifying a target callback function. The following example shows an application that's an LLM with a Prompty file (`application.prompty`):
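A Prompty file of the shape this diff references typically pairs YAML front matter (model configuration and declared inputs) with a templated system prompt. The sketch below is hedged and hypothetical: the article's actual `application.prompty` contents sit in diff lines this excerpt elides, and every front-matter value here is an assumption; only the `conversation_history` template lines are grounded in the hunk above.

```yaml
---
name: ApplicationPrompty
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: ${env:AZURE_DEPLOYMENT}
inputs:
  conversation_history:
    type: object
---
system:
You are a helpful assistant. Answer the user's question,
given the conversation history:

{{ conversation_history }}
```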
@@ -147,17 +147,17 @@ async def callback(
     }
 ```
 
-The callback function above processes each message generated by the simulator.
+The preceding callback function processes each message that the simulator generates.
 
 ### Functionality
 
-- Retrieves the latest user message.
-- Loads a prompt flow from `application.prompty`.
-- Generates a response by using the prompt flow.
-- Formats the response to adhere to the OpenAI chat protocol.
-- Appends the assistant's response to the messages list.
+- Retrieves the latest user message
+- Loads a prompt flow from `application.prompty`
+- Generates a response by using the prompt flow
+- Formats the response to adhere to the OpenAI chat protocol
+- Appends the assistant's response to the messages list
 
-With the simulator initialized, you can now run it to generate synthetic conversations based on the provided text.
+With the simulator initialized, you can now run it to generate synthetic conversations based on the provided text:
 
 ```python
 model_config = {
@@ -322,7 +322,7 @@ azure_ai_project = {
 > [!NOTE]
 > Adversarial simulation uses the Azure AI safety evaluation service and is currently available only in the following regions: East US 2, France Central, UK South, Sweden Central.
 
-### Specify target callback to simulate against for adversarial simulator
+### Specify the target callback to simulate against for the adversarial simulator
 
 You can bring any application endpoint to the adversarial simulator. The `AdversarialSimulator` class supports sending service-hosted queries and receiving responses with a callback function, as defined in the following code block. The `AdversarialSimulator` class adheres to the [OpenAI messages protocol](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).
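A callback following the OpenAI chat protocol, as the sections above describe, can be sketched in pure Python. This is a hypothetical illustration of the five documented steps, not the article's code: where the real callback loads `application.prompty` and generates a reply with a prompt flow, this sketch substitutes a placeholder echo.

```python
# Hedged sketch of a simulator target callback. The echo reply is a stand-in
# assumption; the article's real callback generates the reply from a Prompty file.
import asyncio
from typing import Any, Dict, List, Optional

async def callback(
    messages: Dict[str, List[Dict[str, Any]]],
    stream: bool = False,
    session_state: Optional[Any] = None,
    context: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    conversation = messages["messages"]
    # Retrieve the latest user message.
    latest = conversation[-1]["content"]
    # Placeholder for the prompt-flow call that would generate a real reply.
    reply = f"echo: {latest}"
    # Format the response to adhere to the OpenAI chat protocol, then append
    # the assistant's response to the messages list.
    conversation.append({"role": "assistant", "content": reply, "context": ""})
    return {
        "messages": conversation,
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }

# Minimal smoke run with a single simulated user turn.
result = asyncio.run(
    callback({"messages": [{"role": "user", "content": "What is Prompty?"}]})
)
```

The return shape (a dict carrying `messages`, `stream`, `session_state`, and `context`) mirrors the protocol the simulator expects back from the target.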
@@ -335,7 +335,7 @@ async def callback(
     query = messages["messages"][0]["content"]
     context = None
 
-    # Add file contents for summarization or re-write.
@@ -431,7 +431,7 @@ The outputs consist of two lists:
 
 Run two evaluation runs with `ContentSafetyEvaluator` and measure the differences between the two datasets' defect rates.
 
-*Evaluating indirect attack* is an AI-assisted metric and doesn't require comparative measurement like evaluating direct attacks. You can generate an indirect attack jailbreak-injected dataset with the following, and then evaluate with `IndirectAttackEvaluator`.
+*Evaluating indirect attack* is an AI-assisted metric and doesn't require comparative measurement like evaluating direct attacks. You can generate an indirect attack jailbreak-injected dataset with the following code, and then evaluate with `IndirectAttackEvaluator`.
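The comparative measurement for direct attack described above is simple bookkeeping: run the same evaluator over the baseline and the jailbreak-injected dataset, then compare defect rates. The following is a hypothetical illustration of that comparison only; the row shape and values are invented, and it does not use the evaluator's actual output schema.

```python
# Hedged sketch of comparing defect rates between a baseline run and a
# jailbreak-injected run. All data below is invented for illustration.
from typing import Dict, List

def defect_rate(rows: List[Dict[str, bool]]) -> float:
    """Fraction of evaluated rows flagged as defects."""
    return sum(1 for row in rows if row["defect"]) / len(rows)

# Hypothetical per-row evaluator verdicts for the two datasets.
baseline = [{"defect": False}, {"defect": False}, {"defect": True}, {"defect": False}]
jailbreak = [{"defect": True}, {"defect": False}, {"defect": True}, {"defect": True}]

# A positive delta suggests the jailbreak injections increased harmful output.
delta = defect_rate(jailbreak) - defect_rate(baseline)
```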