
Commit 1edf2fc

update
1 parent 86db219 commit 1edf2fc

2 files changed: 3 additions & 3 deletions


articles/ai-services/openai/concepts/models.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: Azure OpenAI in Azure AI Foundry Models models
+title: Azure OpenAI in Azure AI Foundry Models
 titleSuffix: Azure OpenAI
 description: Learn about the different model capabilities that are available with Azure OpenAI.
 ms.service: azure-ai-openai
@@ -12,7 +12,7 @@ ms.author: mbullwin #chrhoder#
 recommendations: false
 ---

-# Azure OpenAI in Azure AI Foundry Models models
+# Azure OpenAI in Azure AI Foundry Models

 Azure OpenAI is powered by a diverse set of models with different capabilities and price points. Model availability varies by region and cloud. For Azure Government model availability, please refer to [Azure Government OpenAI Service](../azure-government.md).

articles/ai-services/openai/concepts/red-teaming.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ The term *red teaming* has historically described systematic adversarial attacks

 Red teaming is a best practice in the responsible development of systems and features using LLMs. While not a replacement for systematic measurement and mitigation work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations.

-While Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](./content-filter.md) and other [mitigation strategies](./prompt-engineering.md)) for its Azure OpenAI in Azure AI Foundry Models models (see this [Overview of responsible AI practices](/legal/cognitive-services/openai/overview)), the context of each LLM application will be unique and you also should conduct red teaming to:
+While Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](./content-filter.md) and other [mitigation strategies](./prompt-engineering.md)) for its Azure OpenAI in Azure AI Foundry Models (see this [Overview of responsible AI practices](/legal/cognitive-services/openai/overview)), the context of each LLM application will be unique and you also should conduct red teaming to:

 - Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application.
