Commit b5ca411

Update simulator-interaction-data.md
1 parent d93aaf8 commit b5ca411

File tree

1 file changed: +13, -13 lines

articles/ai-foundry/how-to/develop/simulator-interaction-data.md

Lines changed: 13 additions & 13 deletions
@@ -71,7 +71,7 @@ Prepare the text for generating the input to the simulator:
 - **Page retrieval**: Fetches the Wikipedia page for the identified title.
 - **Text extraction**: Extracts the first 5,000 characters of the page summary to use as input for the simulator.
 
-### Specify application Prompty
+### Specify the application Prompty file
 
 The following `application.prompty` file specifies how a chat application behaves:
 

@@ -111,7 +111,7 @@ given the conversation history:
 {{ conversation_history }}
 ```
 
-### Specify target callback that you want to simulate against
+### Specify the target callback to simulate against
 
 You can bring any application endpoint to simulate against by specifying a target callback function. The following example shows an application that's an LLM with a Prompty file (`application.prompty`):
 

@@ -147,17 +147,17 @@ async def callback(
     }
 ```
 
-The callback function above processes each message generated by the simulator.
+The preceding callback function processes each message that the simulator generates.
 
 ### Functionality
 
-- Retrieves the latest user message.
-- Loads a prompt flow from `application.prompty`.
-- Generates a response by using the prompt flow.
-- Formats the response to adhere to the OpenAI chat protocol.
-- Appends the assistant's response to the messages list.
+- Retrieves the latest user message
+- Loads a prompt flow from `application.prompty`
+- Generates a response by using the prompt flow
+- Formats the response to adhere to the OpenAI chat protocol
+- Appends the assistant's response to the messages list
 
-With the simulator initialized, you can now run it to generate synthetic conversations based on the provided text.
+With the simulator initialized, you can now run it to generate synthetic conversations based on the provided text:
 
 ```python
 model_config = {
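The five bulleted steps in the hunk above can be sketched as a minimal callback. This is only a sketch of the message-handling shape, not the article's full example: the canned reply stands in for the Prompty-based prompt flow, and the four-parameter signature and return dictionary follow the callback pattern shown earlier in the diff.

```python
from typing import Any, Optional

async def callback(
    messages: dict,
    stream: bool = False,
    session_state: Any = None,
    context: Optional[dict] = None,
) -> dict:
    # Retrieve the latest user message from the simulated conversation.
    conversation = messages["messages"]
    latest_message = conversation[-1]["content"]

    # A real application would load application.prompty here and generate
    # the reply with it; this canned reply stands in for that call.
    reply = f"(assistant reply to: {latest_message})"

    # Format the reply per the OpenAI chat protocol and append it to the
    # messages list.
    conversation.append({"role": "assistant", "content": reply})
    return {
        "messages": conversation,
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }
```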
@@ -322,7 +322,7 @@ azure_ai_project = {
 > [!NOTE]
 > Adversarial simulation uses the Azure AI safety evaluation service and is currently available only in the following regions: East US 2, France Central, UK South, Sweden Central.
 
-### Specify target callback to simulate against for adversarial simulator
+### Specify the target callback to simulate against for the adversarial simulator
 
 You can bring any application endpoint to the adversarial simulator. The `AdversarialSimulator` class supports sending service-hosted queries and receiving responses with a callback function, as defined in the following code block. The `AdversarialSimulator` class adheres to the [OpenAI messages protocol](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).
 

@@ -335,7 +335,7 @@ async def callback(
     query = messages["messages"][0]["content"]
     context = None
 
-    # Add file contents for summarization or re-write.
+    # Add file contents for summarization or rewrite.
     if 'file_content' in messages["template_parameters"]:
         query += messages["template_parameters"]['file_content']
 
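Isolated from the surrounding callback, the query-building step in the hunk above looks like the following. The function name and the sample message data are illustrative only; the real code runs inline inside the adversarial callback.

```python
def build_query(messages: dict) -> str:
    # Start from the first user message in the simulated conversation.
    query = messages["messages"][0]["content"]
    # Add file contents for summarization or rewrite tasks, when the
    # template supplies them.
    if "file_content" in messages["template_parameters"]:
        query += messages["template_parameters"]["file_content"]
    return query
```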

@@ -431,7 +431,7 @@ The outputs consist of two lists:
 
 Run two evaluation runs with `ContentSafetyEvaluator` and measure the differences between the two datasets' defect rates.
 
-*Evaluating indirect attack* is an AI-assisted metric and doesn't require comparative measurement like evaluating direct attacks. You can generate an indirect attack jailbreak-injected dataset with the following, and then evaluate with `IndirectAttackEvaluator`.
+*Evaluating indirect attack* is an AI-assisted metric and doesn't require comparative measurement like evaluating direct attacks. You can generate an indirect attack jailbreak-injected dataset with the following code, and then evaluate with `IndirectAttackEvaluator`.
 
 ```python
 indirect_attack_simulator=IndirectAttackSimulator(azure_ai_project=azure_ai_project, credential=credential)
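The comparative measurement described in the hunk above amounts to computing a defect rate per dataset and taking the difference. A minimal sketch, assuming defects are reported as per-conversation boolean flags; the flag lists here are made up for illustration, not real `ContentSafetyEvaluator` output.

```python
def defect_rate(flags: list) -> float:
    # Fraction of evaluated conversations flagged as defects.
    return sum(flags) / len(flags) if flags else 0.0

# Illustrative flags only; real values come from the two evaluation runs.
baseline_flags = [False, False, True, False]   # regular adversarial dataset
jailbreak_flags = [True, False, True, True]    # jailbreak-injected dataset

# A positive delta suggests the jailbreak injection increased defects.
delta = defect_rate(jailbreak_flags) - defect_rate(baseline_flags)
```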
@@ -548,7 +548,7 @@ outputs = await simulator(
 
 To convert your messages format to JSON Lines (JSONL) format, use the helper function `to_json_lines()` on your output.
 
-#### Convert to question answer pairs
+#### Convert to question/answer pairs
 
 To convert a single turn chat format to `Question and Answering` pair format, use the helper function `to_eval_qr_json_lines()` on your output.
 
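To make the two conversions concrete, here is a rough stand-in for what the helpers produce, written against plain lists of message dictionaries. The function names here are hypothetical; in practice you call the SDK's own `to_json_lines()` and `to_eval_qr_json_lines()` on the simulator output object rather than reimplementing them.

```python
import json

def messages_to_json_lines(outputs: list) -> str:
    # JSON Lines: one JSON object serialized per line.
    return "".join(json.dumps(item) + "\n" for item in outputs)

def messages_to_qr_json_lines(outputs: list) -> str:
    # Flatten single-turn conversations into query/response pairs.
    lines = []
    for item in outputs:
        msgs = item["messages"]
        query = next(m["content"] for m in msgs if m["role"] == "user")
        response = next(m["content"] for m in msgs if m["role"] == "assistant")
        lines.append(json.dumps({"query": query, "response": response}))
    return "\n".join(lines) + "\n"
```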
