articles/ai-services/language-service/personally-identifiable-information/overview.md (2 additions, 2 deletions)
@@ -18,11 +18,11 @@ PII detection is one of the features offered by [Azure AI Language](../overview.
## What's new
- The Text PII and Conversational PII detection preview API (version `2024-11-15-preview`) now supports the option to mask detected sensitive entities with a label beyond just redaction characters. Customers have the option to specify if personally identifiable information content such as names and phone numbers, i.e. `“John Doe received a call from 424-878-9192”`, are masked with a redaction character, i.e. `“******** received a call from ************”`, or masked with an entity label, i.e. `“[PERSON_1] received a call from [PHONENUMBER_1]”`. More on how to specify the redaction policy style for your outputs can be found in our [how-to guides](how-to-call.md).
+ The Text PII and Conversational PII detection preview API (version `2024-11-15-preview`) now supports the option to mask detected sensitive entities with a label beyond just redaction characters. Customers have the option to specify if personally identifiable information content such as names and phone numbers, i.e. `"John Doe received a call from 424-878-9192"`, are masked with a redaction character, i.e. `"******** received a call from ************"`, or masked with an entity label, i.e. `"[PERSON_1] received a call from [PHONENUMBER_1]"`. More on how to specify the redaction policy style for your outputs can be found in our [how-to guides](how-to-call.md).
The Conversational PII detection models (both version `2024-11-01-preview` and `GA`) have been updated to provide enhanced AI quality and accuracy. The numeric identifier entity type now also includes Drivers License and Medicare Beneficiary Identifier.
- As of June 2024, we now provide General Availability support for the Conversational PII service (English-language only). Customers can now redact transcripts, chats, and other text written in a conversational style (i.e. text with “um”s, “ah”s, multiple speakers, and the spelling out of words for more clarity) with better confidence in AI quality, Azure SLA support and production environment support, and enterprise-grade security in mind.
+ As of June 2024, we now provide General Availability support for the Conversational PII service (English-language only). Customers can now redact transcripts, chats, and other text written in a conversational style (i.e. text with "um"s, "ah"s, multiple speakers, and the spelling out of words for more clarity) with better confidence in AI quality, Azure SLA support and production environment support, and enterprise-grade security in mind.
> [!TIP]
> Try out PII detection [in Azure AI Foundry portal](https://ai.azure.com/explore/language), where you can [utilize a currently existing Language Studio resource or create a new Azure AI Foundry resource](../../../ai-studio/ai-services/connect-ai-services.md)
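To make the redaction-policy option above concrete, here is a minimal Python sketch of a Text PII request against the preview API. The endpoint and key placeholders and the `redactionPolicy`/`policyKind` field names are assumptions based on the preview behavior described in this diff; confirm the exact request contract in the how-to guides before relying on it.

```python
# Sketch only: Text PII detection with an entity-label redaction policy.
# The endpoint/key placeholders and the redactionPolicy / policyKind field names are
# assumptions based on the 2024-11-15-preview behavior described above.
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
key = "<your-key>"

body = {
    "kind": "PiiEntityRecognition",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "John Doe received a call from 424-878-9192"}
        ]
    },
    "parameters": {
        # "entityMask"    -> "[PERSON_1] received a call from [PHONENUMBER_1]"
        # "characterMask" -> "******** received a call from ************"
        "redactionPolicy": {"policyKind": "entityMask"}
    },
}

response = requests.post(
    f"{endpoint}/language/:analyze-text",
    params={"api-version": "2024-11-15-preview"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
print(response.json())
```

Switching `policyKind` between the label and character options should toggle between the `[PERSON_1]` style and the `********` style shown in the example sentence above.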
articles/ai-services/openai/how-to/evaluations.md (6 additions, 6 deletions)
@@ -77,13 +77,13 @@ When you upload and select you evaluation file a preview of the first three line
You can choose any existing previously uploaded datasets, or upload a new dataset.
- ### Generate responses (optional)
+ ### Create responses (optional)
The prompt you use in your evaluation should match the prompt you plan to use in production. These prompts provide the instructions for the model to follow. Similar to the playground experiences, you can create multiple inputs to include few-shot examples in your prompt. For more information, see [prompt engineering techniques](../concepts/advanced-prompt-engineering.md) for details on some advanced techniques in prompt design and prompt engineering.
You can reference your input data within the prompts by using the `{{input.column_name}}` format, where column_name corresponds to the names of the columns in your input file.
- Outputs generated during the evaluation will be referenced in subsequent steps using the `{{sample.output_text}}` format.
+ Outputs generated during the evaluation will be referenced in subsequent steps using the `{{sample.output_text}}` format.
> [!NOTE]
> You need to use double curly braces to make sure you reference to your data correctly.
@@ -92,9 +92,9 @@ Outputs generated during the evaluation will be referenced in subsequent steps u
As part of creating evaluations you'll pick which models to use when generating responses (optional) as well as which models to use when grading models with specific testing criteria.
- In Azure OpenAI you'll be assigning specific model deployments to use as part of your evaluations. You can compare multiple deployments by creating a separate evaluation configuration for each model. This enables you to define specific prompts for each evaluation, providing better control over the variations required by different models.
+ In Azure OpenAI you'll be assigning specific model deployments to use as part of your evaluations. You can compare multiple model deployments in single evaluation run.

- You can evaluate either a base or a fine-tuned model deployment. The deployments available in your list depend on those you created within your Azure OpenAI resource. If you can't find the desired deployment, you can create a new one from the Azure OpenAI Evaluation page.
+ You can evaluate either base or fine-tuned model deployments. The deployments available in your list depend on those you created within your Azure OpenAI resource. If you can't find the desired deployment, you can create a new one from the Azure OpenAI Evaluation page.
### Testing criteria
@@ -109,7 +109,7 @@ Testing criteria is used to assess the effectiveness of each output generated by
:::image type="content" source="../media/how-to/evaluations/new-evaluation.png" alt-text="Screenshot of the Azure OpenAI evaluation UX with new evaluation selected." lightbox="../media/how-to/evaluations/new-evaluation.png":::
- 3. Enter a name of your evaluation. By default a random name is automatically generated unless you edit and replace it. > select**Upload new dataset**.
+ 3. Enter a name of your evaluation. By default a random name is automatically generated unless you edit and replace it. Select**Upload new dataset**.
:::image type="content" source="../media/how-to/evaluations/upload.png" alt-text="Screenshot of the Azure OpenAI upload UX." lightbox="../media/how-to/evaluations/upload.png":::
@@ -132,7 +132,7 @@ Testing criteria is used to assess the effectiveness of each output generated by
:::image type="content" source="../media/how-to/evaluations/preview.png" alt-text="Screenshot that shows a preview of an uploaded evaluation file." lightbox="../media/how-to/evaluations/preview.png":::
- 5. Select the toggle for **Generate responses**. Select `{{item.input}}` from the dropdown. This will inject the input fields from our evaluation file into individual prompts for a new model run that we want to able to compare against our evaluation dataset. The model will take that input and generate its own unique outputs which in this case will be stored in a variable called `{{sample.output_text}}`. We'll then use that sample output text later as part of our testing criteria. Alternatively you could provide your own custom system message and individual message examples manually.
+ 5. Under **Responses** select the **Create** button. Select `{{item.input}}` from the **Create with a template** dropdown. This will inject the input fields from our evaluation file into individual prompts for a new model run that we want to able to compare against our evaluation dataset. The model will take that input and generate its own unique outputs which in this case will be stored in a variable called `{{sample.output_text}}`. We'll then use that sample output text later as part of our testing criteria. Alternatively you could provide your own custom system message and individual message examples manually.
6. Select which model you want to generate responses based on your evaluation. If you don't have a model you can create one. For the purpose of this example we're using a standard deployment of `gpt-4o-mini`.
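As a rough illustration of the data and template wiring this file describes, the sketch below writes a tiny evaluation dataset and shows how the `{{item.input}}` and `{{sample.output_text}}` references line up with it. The `input` column name and the file name are illustrative assumptions, not part of the documented workflow; use the column names that actually appear in your own evaluation file.

```python
# Sketch only: a minimal evaluation dataset plus the template strings used above.
# The "input" column name and the file name are illustrative assumptions.
import json

rows = [
    {"input": "Summarize: Azure OpenAI evaluations grade model outputs against testing criteria."},
    {"input": "Summarize: Structured outputs constrain responses to a JSON Schema."},
]

with open("eval_data.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# {{item.input}} injects the "input" column of each row into the generation prompt;
# {{sample.output_text}} later refers to the response the model generated for that row.
generation_prompt = "You are a concise assistant. Respond to: {{item.input}}"
grading_reference = "Evaluate this model response for relevance: {{sample.output_text}}"
print(generation_prompt, grading_reference, sep="\n")
```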
articles/ai-services/openai/how-to/structured-outputs.md (6 additions, 4 deletions)
@@ -6,18 +6,20 @@ services: cognitive-services
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
- ms.date: 12/18/2024
+ ms.date: 01/30/2025
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
---
# Structured outputs
- Structured outputs make a model follow a [JSON Schema](https://json-schema.org/overview/what-is-jsonschema) definition that you provide as part of your inference API call. This is in contrast to the older [JSON mode](./json-mode.md) feature, which guaranteed valid JSON would be generated, but was unable to ensure strict adherence to the supplied schema. Structured outputs is recommended for function calling, extracting structured data, and building complex multi-step workflows.
+ Structured outputs make a model follow a [JSON Schema](https://json-schema.org/overview/what-is-jsonschema) definition that you provide as part of your inference API call. This is in contrast to the older [JSON mode](./json-mode.md) feature, which guaranteed valid JSON would be generated, but was unable to ensure strict adherence to the supplied schema. Structured outputs are recommended for function calling, extracting structured data, and building complex multi-step workflows.
> [!NOTE]
- > * Currently structured outputs is not supported on [bring your own data](../concepts/use-your-data.md) scenario.
+ > Currently structured outputs are not supported with:
+ > - [Bring your own data](../concepts/use-your-data.md) scenarios.
+ > - `gpt-4o-audio-preview` version: `2024-12-17`.
## Supported models
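As a quick illustration of the behavior described in the intro above, here is a minimal Python sketch that asks an Azure OpenAI deployment to follow a Pydantic-defined schema via the OpenAI SDK's parse helper. The deployment name, API version, and environment variables are assumptions for illustration, not values taken from this article.

```python
# Sketch only: structured outputs via the OpenAI Python SDK's parse helper.
# Deployment name, API version, and environment variables are illustrative assumptions.
import os
from pydantic import BaseModel
from openai import AzureOpenAI

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumption: a version that supports structured outputs
)

completion = client.beta.chat.completions.parse(
    model="gpt-4o",  # assumption: the name of your supported model deployment
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # the SDK converts this model into a strict JSON Schema
)

print(completion.choices[0].message.parsed)
```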
@@ -280,7 +282,7 @@ Output:
Structured Outputs for function calling can be enabled with a single parameter, by supplying `strict: true`.
> [!NOTE]
- > Structured outputs is not supported with parallel function calls. When using structured outputs set `parallel_tool_calls` to `false`.
+ > Structured outputs are not supported with parallel function calls. When using structured outputs set `parallel_tool_calls` to `false`.
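To show how the note above plays out in practice, here is a hedged sketch of a tool definition that opts into structured outputs with `strict: true` and disables parallel tool calls. The function name, parameters, and deployment name are illustrative assumptions.

```python
# Sketch only: structured outputs for function calling, with a strict tool schema and
# parallel tool calls disabled as the note above requires. Names are illustrative.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumption: a version that supports structured outputs
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative function
            "description": "Get the current weather for a city.",
            "strict": True,  # opt this tool's schema into structured outputs
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,  # strict schemas must disallow extra keys
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: your deployment name
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    tools=tools,
    parallel_tool_calls=False,  # structured outputs aren't supported with parallel calls
)

print(response.choices[0].message.tool_calls)
```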