articles/ai-foundry/how-to/online-evaluation.md (6 additions, 6 deletions)

@@ -32,15 +32,15 @@ After your application is instrumented to send trace data to Application Insights
 > [!NOTE]
 > Online evaluation supports the same metrics as Azure AI Evaluation. For more information on how evaluation works and which evaluation metrics are supported, see [Evaluate your Generative AI application with the Azure AI Evaluation SDK](./develop/evaluate-sdk.md).

-For example, let’s say you have a deployed chat application that receives many customer questions on a daily basis. You want to continuously evaluate the quality of the responses from your application. You set up an online evaluation schedule with a daily recurrence. You configure the evaluators: **Groundedness**, **Coherence**, and **Fluency**. Every day, the service computes the evaluation scores for these metrics and writes the data back to Application Insights for each trace that was collected during the recurrence time window (in this example, the past 24 hours). Then, the data can be queried from each trace and made accessible in Azure AI Foundry and Azure Monitor Application Insights.
+For example, let's say you have a deployed chat application that receives many customer questions on a daily basis. You want to continuously evaluate the quality of the responses from your application. You set up an online evaluation schedule with a daily recurrence. You configure the evaluators: **Groundedness**, **Coherence**, and **Fluency**. Every day, the service computes the evaluation scores for these metrics and writes the data back to Application Insights for each trace that was collected during the recurrence time window (in this example, the past 24 hours). Then, the data can be queried from each trace and made accessible in Azure AI Foundry and Azure Monitor Application Insights.

 The evaluation results written back to each trace within Application Insights follow these conventions. A unique span is added to each trace for each evaluation metric:

 | Property | Application Insights Table | Fields for a given operation_ID | Example value |
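Because the writeback convention keys one span per evaluation metric to its parent trace via `operation_Id`, retrieving the scores reduces to a parameterized log query. The following Python sketch builds such a KQL string; the `dependencies` table and the span-named-after-metric assumption are illustrative guesses, not confirmed by this article — verify the table and field names against the conventions table and your own Application Insights schema.

```python
# Hedged sketch: build a KQL query for evaluation spans written back during a
# recurrence window. Assumptions (verify against your telemetry): evaluation
# spans land in the `dependencies` table, are named after the metric, and join
# to the originating trace via operation_Id.
def evaluation_query(metrics: list[str], hours: int = 24) -> str:
    metric_filter = ", ".join(f'"{m}"' for m in metrics)
    return (
        "dependencies\n"
        f"| where timestamp > ago({hours}h)\n"
        f"| where name in ({metric_filter})\n"
        "| project operation_Id, name, customDimensions"
    )

# Matches the daily-recurrence example above: three evaluators, past 24 hours.
query = evaluation_query(["Groundedness", "Coherence", "Fluency"])
print(query)
```

You could paste the resulting string into the Logs blade in Azure Monitor, or hand it to `LogsQueryClient.query_workspace` from the `azure-monitor-query` package if you prefer to pull results programmatically.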
articles/ai-foundry/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md (1 addition, 1 deletion)

@@ -33,7 +33,7 @@ The prompt flow Azure OpenAI GPT-4 Turbo with Vision tool enables you to use your
 :::image type="content" source="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png" alt-text="Screenshot that shows the Azure OpenAI GPT-4 Turbo with Vision tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png":::

-1. Select the connection to your Azure OpenAI Service. For example, you can select the **Default_AzureOpenAI** connection. For more information, see [Prerequisites](#prerequisites).
+1. Select the connection to your Azure OpenAI in Azure AI Foundry Models. For example, you can select the **Default_AzureOpenAI** connection. For more information, see [Prerequisites](#prerequisites).

 1. Enter values for the Azure OpenAI GPT-4 Turbo with Vision tool input parameters described in the [Inputs table](#inputs). For example, you can use this example prompt:
articles/ai-foundry/how-to/prompt-flow-tools/prompt-flow-tools-overview.md (1 addition, 1 deletion)

@@ -23,7 +23,7 @@ The following table provides an index of tools in prompt flow.
 | Tool name | Description | Package name |
 |------|-----------|-------------|
-|[LLM](./llm-tool.md)| Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. |[promptflow-tools](https://pypi.org/project/promptflow-tools/)|
+|[LLM](./llm-tool.md)| Use large language models (LLM) with Azure OpenAI in Azure AI Foundry Models for tasks such as text completion or chat. |[promptflow-tools](https://pypi.org/project/promptflow-tools/)|
 |[Prompt](./prompt-tool.md)| Craft a prompt by using Jinja as the templating language. |[promptflow-tools](https://pypi.org/project/promptflow-tools/)|
 |[Python](./python-tool.md)| Run Python code. |[promptflow-tools](https://pypi.org/project/promptflow-tools/)|
 |[Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md)| Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. |[promptflow-tools](https://pypi.org/project/promptflow-tools/)|
articles/ai-foundry/how-to/troubleshoot-deploy-and-monitor.md (2 additions, 2 deletions)

@@ -27,9 +27,9 @@ This article provides instructions on how to troubleshoot your deployments and monitoring
 For the general deployment error code reference, see [Troubleshooting online endpoints deployment and scoring](/azure/machine-learning/how-to-troubleshoot-online-endpoints) in the Azure Machine Learning documentation. Much of the information there also applies to Azure AI Foundry deployments.

-### Error: Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources
+### Error: Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI in Azure AI Foundry Models resources

-The full error message states: "Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources. This subscription or region doesn't have access to this model."
+The full error message states: "Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI in Azure AI Foundry Models resources. This subscription or region doesn't have access to this model."

 This error means that you might not have access to the particular Azure OpenAI model. For example, your subscription might not have access to the latest GPT model yet, or this model isn't offered in the region you want to deploy to. You can learn more about it on [Azure OpenAI in Azure AI Foundry Models](../../ai-services/openai/concepts/models.md?context=/azure/ai-foundry/context/context).
articles/ai-foundry/includes/create-content-filter.md (1 addition, 1 deletion)

@@ -46,7 +46,7 @@ Follow these steps to create a content filter:
 :::image type="content" source="../media/content-safety/content-filter/create-content-filter-deployment.png" alt-text="Screenshot of the option to select a deployment when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-deployment.png":::

-Content filtering configurations are created at the hub level in the [Azure AI Foundry portal](https://ai.azure.com). Learn more about configurability in the [Azure OpenAI Service documentation](/azure/ai-services/openai/how-to/content-filters).
+Content filtering configurations are created at the hub level in the [Azure AI Foundry portal](https://ai.azure.com). Learn more about configurability in the [Azure OpenAI in Azure AI Foundry Models documentation](/azure/ai-services/openai/how-to/content-filters).

 1. On the **Review** page, review the settings and then select **Create filter**.
articles/ai-foundry/includes/create-env-file-tutorial.md (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ ms.date: 11/03/2024
 ms.custom: include, ignite-2024
 ---

-Your project connection string is required to call the Azure OpenAI service from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.
+Your project connection string is required to call Azure OpenAI in Azure AI Foundry Models from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.

 Create a `.env` file, and paste the following code:
articles/ai-foundry/includes/create-env-file.md (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ ms.date: 11/03/2024
 ms.custom: include, ignite-2024
 ---

-Your project connection string is required to call the Azure OpenAI service from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.
+Your project connection string is required to call Azure OpenAI in Azure AI Foundry Models from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.

 Create a `.env` file, and paste the following code:
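Both `.env` includes cut off before the code they introduce, so as context, here is a hedged sketch of the pattern they describe: a `KEY=VALUE` file read into environment variables at startup. The variable name `PROJECT_CONNECTION_STRING` and the hand-rolled loader are illustrative assumptions, not the quickstart's actual snippet; real samples typically use a package such as `python-dotenv` instead of parsing the file themselves.

```python
import os
from pathlib import Path

def load_env(path: str) -> dict[str, str]:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments skipped."""
    values = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values

# Write a sample .env file. The variable name is illustrative; use whatever
# name your quickstart's sample code actually reads.
Path(".env").write_text('PROJECT_CONNECTION_STRING="<your-connection-string>"\n')

# Load it without clobbering values already set in the real environment.
for key, value in load_env(".env").items():
    os.environ.setdefault(key, value)

print(os.environ["PROJECT_CONNECTION_STRING"])
```

Keeping the connection string in `.env` (and out of source control, via `.gitignore`) is the point of this step: the code reads configuration from the environment rather than hard-coding a secret.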
articles/ai-foundry/includes/install-cli.md (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ ms.date: 08/29/2024
 ms.custom: include, ignite-2024
 ---

-You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
+You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call Azure OpenAI in Azure AI Foundry Models.

 In most cases you can install the Azure CLI from your terminal using the following command:
articles/ai-foundry/model-inference/concepts/content-filter.md (3 additions, 3 deletions)

@@ -14,13 +14,13 @@ manager: nitinme
 # Content filtering for model inference in Azure AI services

 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint).
+> The content filtering system isn't applied to prompts and completions processed by audio models such as Whisper in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint).

 Azure AI Foundry Models includes a content filtering system that works alongside core models and it's powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety). This system works by running both the prompt and completion through an ensemble of classification models designed to detect and prevent the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.

 The text content filtering models for the hate, sexual, violence, and self-harm categories were trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.

-In addition to the content filtering system, Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, Azure OpenAI performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).

 The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.

@@ -306,4 +306,4 @@ The table below outlines the various ways content filtering can appear:
 - Learn about [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
 - Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
-- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).