
Commit cff114a

Update simulator-interaction-data.md
1 parent 3e68b19 commit cff114a

File tree

1 file changed: +11 -11 lines changed


articles/ai-foundry/how-to/develop/simulator-interaction-data.md

Lines changed: 11 additions & 11 deletions
@@ -180,7 +180,7 @@ The `Simulator` class offers extensive customization options. With these options

 #### Query and response generation Prompty customization

-The `query_response_generating_prompty_override` allows you to customize how query-response pairs are generated from input text. This capability is useful when you want to control the format or content of the generated responses as input to your simulator.
+The `query_response_generating_prompty_override` parameter allows you to customize how query-response pairs are generated from input text. This capability is useful when you want to control the format or content of the generated responses as input to your simulator.

 ```python
 current_dir = os.path.dirname(__file__)
@@ -209,7 +209,7 @@ for output in outputs:

 #### Simulation Prompty customization

-The `Simulator` class uses a default Prompty that instructs the LLM on how to simulate a user interacting with your application. The `user_simulating_prompty_override` enables you to override the default behavior of the simulator. By adjusting these parameters, you can tune the simulator to produce responses that align with your specific requirements, enhancing the realism and variability of the simulations.
+The `Simulator` class uses a default Prompty that instructs the LLM on how to simulate a user interacting with your application. The `user_simulating_prompty_override` parameter enables you to override the default behavior of the simulator. By adjusting these parameters, you can tune the simulator to produce responses that align with your specific requirements, enhancing the realism and variability of the simulations.

 ```python
 user_simulator_prompty_kwargs = {
@@ -324,7 +324,7 @@ azure_ai_project = {

 ### Specify target callback to simulate against for adversarial simulator

-You can bring any application endpoint to the adversarial simulator. The `AdversarialSimulator` class supports sending service-hosted queries and receiving responses with a callback function, as defined in the following code block. The `AdversarialSimulator` adheres to the [OpenAI messages protocol](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).
+You can bring any application endpoint to the adversarial simulator. The `AdversarialSimulator` class supports sending service-hosted queries and receiving responses with a callback function, as defined in the following code block. The `AdversarialSimulator` class adheres to the [OpenAI messages protocol](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).

 ```python
 async def callback(
@@ -385,7 +385,7 @@ By default, we run simulations asynchronously. We enable optional parameters:

 ## Supported adversarial simulation scenarios

-The `AdversarialSimulator` supports a range of scenarios, hosted in the service, to simulate against your target application or function:
+The `AdversarialSimulator` class supports a range of scenarios, hosted in the service, to simulate against your target application or function:

 | Scenario | Scenario enumeration | Maximum number of simulations | Use this dataset for evaluating |
 |-------------------------------|------------------------------|---------|---------------------|
@@ -408,7 +408,7 @@ Evaluating vulnerability toward the following types of jailbreak attacks is supp
 - **Direct attack jailbreak**: This type of attack, also known as a user prompt injected attack (UPIA), injects prompts in the user role turn of conversations or queries to generative AI applications.
 - **Indirect attack jailbreak**: This type of attack, also known as a cross domain prompt injected attack (XPIA), injects prompts in the returned documents or context of the user's query to generative AI applications.

-*Evaluating direct attack* is a comparative measurement that uses the Azure AI Content Safety evaluators as a control. It isn't its own AI-assisted metric. Run `ContentSafetyEvaluator` on two different, red-teamed datasets generated by `AdversarialSimulator`:
+*Evaluating direct attack* is a comparative measurement that uses the Azure AI Content Safety evaluators as a control. It isn't its own AI-assisted metric. Run `ContentSafetyEvaluator` on two different, red-teamed datasets generated by the `AdversarialSimulator` class:

 - Baseline adversarial test dataset using one of the previous scenario enumerations for evaluating hateful and unfair content, sexual content, violent content, and self-harm-related content
 - Adversarial test dataset with direct attack jailbreak injections in the first turn:
@@ -431,7 +431,7 @@ The outputs consist of two lists:

 Run two evaluation runs with `ContentSafetyEvaluator` and measure the differences between the two datasets' defect rates.

-*Evaluating indirect attack* is an AI-assisted metric and doesn't require comparative measurement like evaluating direct attacks. You can generate an indirect attack jailbreak-injected dataset with the following, and then evaluate with the `IndirectAttackEvaluator`.
+*Evaluating indirect attack* is an AI-assisted metric and doesn't require comparative measurement like evaluating direct attacks. You can generate an indirect attack jailbreak-injected dataset with the following, and then evaluate with `IndirectAttackEvaluator`.

 ```python
 indirect_attack_simulator=IndirectAttackSimulator(azure_ai_project=azure_ai_project, credential=credential)
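
Editorial note (not part of this commit): the hunk above references running `ContentSafetyEvaluator` over a baseline dataset and a direct-attack jailbreak dataset and comparing defect rates. A minimal sketch of that comparison might look like the following, assuming the `evaluate()` helper from `azure.ai.evaluation`; the dataset file names and the `"metrics"` result shape are illustrative assumptions, not taken from this diff.

```python
# Illustrative sketch only: compare Content Safety defect rates between a
# baseline adversarial dataset and a direct-attack jailbreak dataset.
# File names and result handling are assumptions, not from this commit.
from azure.ai.evaluation import ContentSafetyEvaluator, evaluate

content_safety_evaluator = ContentSafetyEvaluator(
    azure_ai_project=azure_ai_project, credential=credential
)

baseline_results = evaluate(
    data="baseline_adversarial.jsonl",          # hypothetical baseline dataset
    evaluators={"content_safety": content_safety_evaluator},
)
jailbreak_results = evaluate(
    data="direct_attack_jailbreak.jsonl",       # hypothetical jailbreak dataset
    evaluators={"content_safety": content_safety_evaluator},
)

# Compare the aggregate metrics (including defect rates) from the two runs.
print(baseline_results["metrics"])
print(jailbreak_results["metrics"])
```
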
@@ -445,9 +445,9 @@ outputs = await indirect_attack_simulator(

 ### Output

-The `output` is a `JSON` array of messages and adheres to the OpenAI messages protocol. You can learn more [in this OpenAI resource](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).
+The output is a JSON array of messages and adheres to the OpenAI messages protocol. You can learn more [in this OpenAI resource](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).

-The `messages` in `output` is a list of role-based turns. For each turn, it contains the following elements:
+The `messages` output is a list of role-based turns. For each turn, it contains the following elements:

 - `content`: The content of an interaction.
 - `role`: Either the user (simulated agent) or assistant, and any required citations or context from either the simulated user or the chat application.
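
Editorial note (not part of this commit): a minimal sketch of walking the role-based turns described in the hunk above might look like the following; the `outputs` variable and output file name are illustrative assumptions.

```python
# Illustrative sketch only: iterate over the simulator output described above
# and persist it as JSON Lines. The assumed shape is a "messages" list of
# dicts with "role" and "content", per the OpenAI messages protocol.
import json

with open("adversarial_output.jsonl", "w") as f:   # hypothetical file name
    for output in outputs:
        for message in output["messages"]:
            # Each turn carries the speaker role and the interaction content.
            print(f"{message['role']}: {message['content']}")
        f.write(json.dumps({"messages": output["messages"]}) + "\n")
```
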
@@ -506,7 +506,7 @@ For single-turn simulations, use the helper function `to_eval_qr_json_lines()` t

 #### Multi-language adversarial simulation

-The `AdversarialSimulator` uses the [ISO standard](https://www.andiamo.co.uk/resources/iso-language-codes/) and supports the following languages:
+The `AdversarialSimulator` class uses the [ISO standard](https://www.andiamo.co.uk/resources/iso-language-codes/) and supports the following languages:

 | Language | ISO language code |
 |--------------------|-------------------|
@@ -534,7 +534,7 @@ outputs = await simulator(

 #### Set the randomization seed

-By default, the `AdversarialSimulator` randomizes interactions in every simulation. You can set a `randomization_seed` parameter to produce the same set of conversation starters every time for reproducibility.
+By default, the `AdversarialSimulator` class randomizes interactions in every simulation. You can set a `randomization_seed` parameter to produce the same set of conversation starters every time for reproducibility.

 ```python
 outputs = await simulator(
@@ -566,7 +566,7 @@ This function can stop a conversation if the conversation meets certain criteria

 The scenario simulator supports retry logic. The default maximum number of retries in case the last API call failed is 3. The default number of seconds to sleep between consequent retries in case the last API call failed is 3.

-Users can also define their own `api_call_retry_sleep_sec` and `api_call_retry_max_count` and pass it in while running the function call in `simulate()`.
+Users can also define their own `api_call_retry_sleep_sec` and `api_call_retry_max_count` values and pass the values in while running the function call in `simulate()`.

 ## Related content
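
Editorial note (not part of this commit): a minimal sketch of overriding the retry settings named in the hunk above might look like the following, assuming the simulator call accepts them as keyword arguments; the surrounding arguments are placeholders, not taken from this diff.

```python
# Illustrative sketch only: pass the retry settings named in the hunk above.
# The call shape (scenario, target) is an assumption, not from this commit.
outputs = await simulator(
    scenario=scenario,               # e.g., an adversarial scenario enumeration
    target=callback,                 # the application callback being simulated
    api_call_retry_sleep_sec=5,      # seconds to wait between retries
    api_call_retry_max_count=5,      # maximum number of retries per API call
)
```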
