Commit c012f55

Azure AI services rebrand for 2nd batch of files

1 parent 5680c3c

22 files changed: +35 −35 lines changed

articles/ai-foundry/how-to/online-evaluation.md

Lines changed: 6 additions & 6 deletions

@@ -32,15 +32,15 @@ After your application is instrumented to send trace data to Application Insight
 > [!NOTE]
 > Online evaluation supports the same metrics as Azure AI Evaluation. For more information on how evaluation works and which evaluation metrics are supported, see [Evaluate your Generative AI application with the Azure AI Evaluation SDK](./develop/evaluate-sdk.md).

-For example, lets say you have a deployed chat application that receives many customer questions on a daily basis. You want to continuously evaluate the quality of the responses from your application. You set up an online evaluation schedule with a daily recurrence. You configure the evaluators: **Groundedness**, **Coherence**, and **Fluency**. Every day, the service computes the evaluation scores for these metrics and writes the data back to Application Insights for each trace that was collected during the recurrence time window (in this example, the past 24 hours). Then, the data can be queried from each trace and made accessible in Azure AI Foundry and Azure Monitor Application Insights.
+For example, let's say you have a deployed chat application that receives many customer questions on a daily basis. You want to continuously evaluate the quality of the responses from your application. You set up an online evaluation schedule with a daily recurrence. You configure the evaluators: **Groundedness**, **Coherence**, and **Fluency**. Every day, the service computes the evaluation scores for these metrics and writes the data back to Application Insights for each trace that was collected during the recurrence time window (in this example, the past 24 hours). Then, the data can be queried from each trace and made accessible in Azure AI Foundry and Azure Monitor Application Insights.

 The evaluation results written back to each trace within Application Insights follow the following conventions. A unique span is added to each trace for each evaluation metric:

 | Property | Application Insights Table | Fields for a given operation_ID | Example value |
 |-------------------------------------------|----------------------------|-----------------------------------------------|--------------------------------------|
-| Evaluation metric | traces, AppTraces | `customDimensions[event.name]` | `gen_ai.evaluation.relevance` |
-| Evaluation metric score | traces, AppTraces | `customDimensions[gen_ai.evaluation.score]` | `3` |
-| Evaluation metric comment (if applicable) | traces, AppTraces | `message` | `{comment”: “I like the response}` |
+| Evaluation metric | traces, AppTraces | `customDimensions["event.name"]` | `gen_ai.evaluation.relevance` |
+| Evaluation metric score | traces, AppTraces | `customDimensions["gen_ai.evaluation.score"]` | `3` |
+| Evaluation metric comment (if applicable) | traces, AppTraces | `message` | `{"comment": "I like the response"}` |

 Now that you understand how online evaluation works and how it connects to Azure Monitor Application Insights, the next step is to set up the service.

@@ -217,11 +217,11 @@ app_insights_config = ApplicationInsightsConfiguration(
     query=KUSTO_QUERY
 )

-# Connect to your Azure OpenAI Service resource. You must use a GPT model deployment for this example.
+# Connect to your Azure OpenAI in Azure AI Foundry Models resource. You must use a GPT model deployment for this example.
 deployment_name = "gpt-4"
 api_version = "2024-08-01-preview"

-# This is your Azure OpenAI Service connection name, which can be found in your Azure AI Foundry project under the 'Models + Endpoints' tab.
+# This is your Azure OpenAI connection name, which can be found in your Azure AI Foundry project under the 'Models + Endpoints' tab.
 default_connection = project_client.connections._get_connection(
     "aoai_connection_name"
 )
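The trace-field conventions in this file's diff can be made concrete with a small, self-contained sketch. The row shape below is a simplified assumption for illustration only, not the exact AppTraces schema; real rows come back from a KQL query and carry many more fields than the three modeled here:

```python
import json

# Simplified stand-in for one evaluation span row from the AppTraces
# table (illustrative shape only, not the real Application Insights schema).
row = {
    "customDimensions": {
        "event.name": "gen_ai.evaluation.relevance",
        "gen_ai.evaluation.score": "3",
    },
    "message": json.dumps({"comment": "I like the response"}),
}

def read_evaluation(row):
    """Pull the metric name, score, and optional comment from one span row."""
    dims = row["customDimensions"]
    metric = dims["event.name"].removeprefix("gen_ai.evaluation.")
    score = int(dims["gen_ai.evaluation.score"])
    comment = json.loads(row.get("message", "{}")).get("comment")
    return metric, score, comment

print(read_evaluation(row))  # ('relevance', 3, 'I like the response')
```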

articles/ai-foundry/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ The prompt flow Azure OpenAI GPT-4 Turbo with Vision tool enables you to use you

 :::image type="content" source="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png" alt-text="Screenshot that shows the Azure OpenAI GPT-4 Turbo with Vision tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png":::

-1. Select the connection to your Azure OpenAI Service. For example, you can select the **Default_AzureOpenAI** connection. For more information, see [Prerequisites](#prerequisites).
+1. Select the connection to your Azure OpenAI in Azure AI Foundry Models. For example, you can select the **Default_AzureOpenAI** connection. For more information, see [Prerequisites](#prerequisites).
 1. Enter values for the Azure OpenAI GPT-4 Turbo with Vision tool input parameters described in the [Inputs table](#inputs). For example, you can use this example prompt:

     ```jinja

articles/ai-foundry/how-to/prompt-flow-tools/prompt-flow-tools-overview.md

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ The following table provides an index of tools in prompt flow.

 | Tool name | Description | Package name |
 |------|-----------|-------------|
-| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [LLM](./llm-tool.md) | Use large language models (LLM) with Azure OpenAI in Azure AI Foundry Models for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |

articles/ai-foundry/how-to/troubleshoot-deploy-and-monitor.md

Lines changed: 2 additions & 2 deletions

@@ -27,9 +27,9 @@ This article provides instructions on how to troubleshoot your deployments and m
 For the general deployment error code reference, see [Troubleshooting online endpoints deployment and scoring](/azure/machine-learning/how-to-troubleshoot-online-endpoints) in the Azure Machine Learning documentation. Much of the information there also apply to Azure AI Foundry deployments.

-### Error: Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources
+### Error: Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI in Azure AI Foundry Models resources

-The full error message states: "Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources. This subscription or region doesn't have access to this model."
+The full error message states: "Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI in Azure AI Foundry Models resources. This subscription or region doesn't have access to this model."

 This error means that you might not have access to the particular Azure OpenAI model. For example, your subscription might not have access to the latest GPT model yet or this model isn't offered in the region you want to deploy to. You can learn more about it on [Azure OpenAI in Azure AI Foundry Models](../../ai-services/openai/concepts/models.md?context=/azure/ai-foundry/context/context).

articles/ai-foundry/includes/create-content-filter.md

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ Follow these steps to create a content filter:

 :::image type="content" source="../media/content-safety/content-filter/create-content-filter-deployment.png" alt-text="Screenshot of the option to select a deployment when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-deployment.png":::

-Content filtering configurations are created at the hub level in the [Azure AI Foundry portal](https://ai.azure.com). Learn more about configurability in the [Azure OpenAI Service documentation](/azure/ai-services/openai/how-to/content-filters).
+Content filtering configurations are created at the hub level in the [Azure AI Foundry portal](https://ai.azure.com). Learn more about configurability in the [Azure OpenAI in Azure AI Foundry Models documentation](/azure/ai-services/openai/how-to/content-filters).

 1. On the **Review** page, review the settings and then select **Create filter**.

articles/ai-foundry/includes/create-env-file-tutorial.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ ms.date: 11/03/2024
 ms.custom: include, ignite-2024
 ---

-Your project connection string is required to call the Azure OpenAI service from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.
+Your project connection string is required to call Azure OpenAI in Azure AI Foundry Models from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.

 Create a `.env` file, and paste the following code:
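The `.env` contents themselves are truncated in this diff. As background, a `.env` file is plain `KEY=value` text that loaders such as `python-dotenv` read into environment variables; the minimal stand-in sketch below (with a hypothetical variable name and placeholder value, not the quickstart's real contents) shows the idea:

```python
import os

# Hypothetical example line; the real quickstart supplies your actual
# project connection string value here.
env_text = 'PROJECT_CONNECTION_STRING="<your-connection-string>"\n'

def load_env(text):
    """Parse KEY=value lines into os.environ, skipping blanks and comments."""
    loaded = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(loaded)
    return loaded

load_env(env_text)
print(os.environ["PROJECT_CONNECTION_STRING"])  # <your-connection-string>
```

This is only a rough stand-in for what a `.env` loader does; in the quickstart itself, `python-dotenv` handles the parsing.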

articles/ai-foundry/includes/create-env-file.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ ms.date: 11/03/2024
 ms.custom: include, ignite-2024
 ---

-Your project connection string is required to call the Azure OpenAI service from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.
+Your project connection string is required to call Azure OpenAI in Azure AI Foundry Models from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.

 Create a `.env` file, and paste the following code:

articles/ai-foundry/includes/install-cli.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ ms.date: 08/29/2024
 ms.custom: include, ignite-2024
 ---

-You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
+You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call Azure OpenAI in Azure AI Foundry Models.

 In most cases you can install the Azure CLI from your terminal using the following command:

articles/ai-foundry/includes/install-promptflow.md

Lines changed: 1 addition & 1 deletion

@@ -20,5 +20,5 @@ The prompt flow SDK takes a dependency on multiple packages, that you can choose
 * ```promptflow-core```: contains the core prompt flow runtime used for executing LLM code
 * ```promptflow-tracing```: lightweight library used for emitting OpenTelemetry traces in standards
 * ```promptflow-devkit```: contains the prompt flow test bed and trace viewer tools for local development environments
-* ```openai```: client libraries for using the Azure OpenAI service
+* ```openai```: client libraries for using the Azure OpenAI in Azure AI Foundry Models
 * ```python-dotenv```: used to set environment variables by reading them from ```.env``` files

articles/ai-foundry/model-inference/concepts/content-filter.md

Lines changed: 3 additions & 3 deletions

@@ -14,13 +14,13 @@ manager: nitinme
 # Content filtering for model inference in Azure AI services

 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint).
+> The content filtering system isn't applied to prompts and completions processed by audio models such as Whisper in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint).

 Azure AI Foundry Models includes a content filtering system that works alongside core models and it's powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety). This system works by running both the prompt and completion through an ensemble of classification models designed to detect and prevent the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.

 The text content filtering models for the hate, sexual, violence, and self-harm categories were trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.

-In addition to the content filtering system, Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, Azure OpenAI performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).

 The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.

@@ -306,4 +306,4 @@ The table below outlines the various ways content filtering can appear:

 - Learn about [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
 - Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
-- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
