Commit 42e52ac

Revert edits
1 parent cfef4e8 commit 42e52ac

File tree: 1 file changed (+6 −6 lines)


articles/cognitive-services/openai/concepts/red-teaming.md

Lines changed: 6 additions & 6 deletions
@@ -25,7 +25,7 @@ Microsoft has conducted red teaming exercises and implemented safety systems (in
 - Identify and mitigate shortcomings in the existing default filters or mitigation strategies.
 - Provide feedback on failures so we can make improvements.
 
-Here's how you can get started in your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise.
+Here is how you can get started in your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise.
 
 ## Getting started
 
@@ -41,13 +41,13 @@ Having red teamers with an adversarial mindset and security-testing experience i
 
 **Remember that handling potentially harmful content can be mentally taxing.**
 
-You'll need to take care of your red teamers, not only by limiting the amount of time they spend on an assignment, but also by letting them know they can opt out at any time. Also, avoid burnout by switching red teamers’ assignments to different focus areas.
+You will need to take care of your red teamers, not only by limiting the amount of time they spend on an assignment, but also by letting them know they can opt out at any time. Also, avoid burnout by switching red teamers’ assignments to different focus areas.
 
 ### Planning your red teaming
 
 #### Where to test
 
-Because a system is developed using an LLM base model, you may need to test at several different layers:
+Because a system is developed using a LLM base model, you may need to test at several different layers:
 
 - The LLM base model with its [safety system](./content-filter.md) in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually through an API endpoint.)
 - Your application system. (Testing is usually through a UI.)
@@ -57,12 +57,12 @@ Because a system is developed using an LLM base model, you may need to test at s
 
 Consider conducting iterative red teaming in at least two phases:
 
-1. Open-ended red teaming, where red teamers are encouraged to discover various harms. This can help you develop a taxonomy of harms to guide further testing. Note that developing a taxonomy of undesired LLM outputs for your application system is crucial to being able to measure the success of specific mitigation efforts.
+1. Open-ended red teaming, where red teamers are encouraged to discover a variety of harms. This can help you develop a taxonomy of harms to guide further testing. Note that developing a taxonomy of undesired LLM outputs for your application system is crucial to being able to measure the success of specific mitigation efforts.
 2. Guided red teaming, where red teamers are assigned to focus on specific harms listed in the taxonomy while staying alert for any new harms that may emerge. Red teamers can also be instructed to focus testing on specific features of a system for surfacing potential harms.
 
 Be sure to:
 
-- Provide your red teamers with clear instructions for what harms or system features they'll be testing.
+- Provide your red teamers with clear instructions for what harms or system features they will be testing.
 - Give your red teamers a place for recording their findings. For example, this could be a simple spreadsheet specifying the types of data that red teamers should provide, including basics such as:
   - The type of harm that was surfaced.
   - The input prompt that triggered the output.
@@ -72,7 +72,7 @@ Be sure to:
 
 ### Reporting red teaming findings
 
-You'll want to summarize and report red teaming top findings at regular intervals to key stakeholders, including teams involved in the measurement and mitigation of LLM failures so that the findings can inform critical decision making and prioritizations.
+You will want to summarize and report red teaming top findings at regular intervals to key stakeholders, including teams involved in the measurement and mitigation of LLM failures so that the findings can inform critical decision making and prioritizations.
 
 ## Next steps