
Commit a8e5aa4

Merge pull request #2640 from MicrosoftDocs/main
01/30/2025 AM Publishing
2 parents: f8876b0 + 608f6b6

File tree

20 files changed (+421, -42 lines)


articles/ai-services/language-service/personally-identifiable-information/overview.md

Lines changed: 2 additions & 2 deletions
@@ -18,11 +18,11 @@ PII detection is one of the features offered by [Azure AI Language](../overview.
## What's new

- The Text PII and Conversational PII detection preview API (version `2024-11-15-preview`) now supports the option to mask detected sensitive entities with a label beyond just redaction characters. Customers have the option to specify if personally identifiable information content such as names and phone numbers, i.e. `John Doe received a call from 424-878-9192`, are masked with a redaction character, i.e. `******** received a call from ************`, or masked with an entity label, i.e. `[PERSON_1] received a call from [PHONENUMBER_1]`. More on how to specify the redaction policy style for your outputs can be found in our [how-to guides](how-to-call.md).
+ The Text PII and Conversational PII detection preview API (version `2024-11-15-preview`) now supports the option to mask detected sensitive entities with a label beyond just redaction characters. Customers have the option to specify if personally identifiable information content such as names and phone numbers, i.e. `"John Doe received a call from 424-878-9192"`, are masked with a redaction character, i.e. `"******** received a call from ************"`, or masked with an entity label, i.e. `"[PERSON_1] received a call from [PHONENUMBER_1]"`. More on how to specify the redaction policy style for your outputs can be found in our [how-to guides](how-to-call.md).

The Conversational PII detection models (both version `2024-11-01-preview` and `GA`) have been updated to provide enhanced AI quality and accuracy. The numeric identifier entity type now also includes Drivers License and Medicare Beneficiary Identifier.

- As of June 2024, we now provide General Availability support for the Conversational PII service (English-language only). Customers can now redact transcripts, chats, and other text written in a conversational style (i.e. text with “um”s, “ah”s, multiple speakers, and the spelling out of words for more clarity) with better confidence in AI quality, Azure SLA support and production environment support, and enterprise-grade security in mind.
+ As of June 2024, we now provide General Availability support for the Conversational PII service (English-language only). Customers can now redact transcripts, chats, and other text written in a conversational style (i.e. text with "um"s, "ah"s, multiple speakers, and the spelling out of words for more clarity) with better confidence in AI quality, Azure SLA support and production environment support, and enterprise-grade security in mind.

> [!TIP]
> Try out PII detection [in Azure AI Foundry portal](https://ai.azure.com/explore/language), where you can [utilize a currently existing Language Studio resource or create a new Azure AI Foundry resource](../../../ai-studio/ai-services/connect-ai-services.md)
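
The redaction policy described in the updated paragraph is set per request. A minimal sketch of a Text PII call asking for entity-label masking is below; the endpoint and key are placeholders, and the `redactionPolicy`/`policyKind` field names are assumptions based on the preview description rather than a verified contract (see the linked how-to guide for the exact shape):

```python
import requests

# Placeholder Language resource values.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
api_key = "<your-key>"

body = {
    "kind": "PiiEntityRecognition",
    "analysisInput": {
        "documents": [
            {
                "id": "1",
                "language": "en",
                "text": "John Doe received a call from 424-878-9192.",
            }
        ]
    },
    "parameters": {
        # Assumption: 'entityMask' masks entities with labels such as [PERSON_1];
        # a character-mask policy would produce the asterisk-style redaction instead.
        "redactionPolicy": {"policyKind": "entityMask"}
    },
}

response = requests.post(
    f"{endpoint}/language/:analyze-text",
    params={"api-version": "2024-11-15-preview"},
    headers={"Ocp-Apim-Subscription-Key": api_key},
    json=body,
)
print(response.json()["results"]["documents"][0]["redactedText"])
```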

articles/ai-services/openai/how-to/evaluations.md

Lines changed: 6 additions & 6 deletions
@@ -77,13 +77,13 @@ When you upload and select you evaluation file a preview of the first three line
You can choose any existing previously uploaded datasets, or upload a new dataset.

- ### Generate responses (optional)
+ ### Create responses (optional)

The prompt you use in your evaluation should match the prompt you plan to use in production. These prompts provide the instructions for the model to follow. Similar to the playground experiences, you can create multiple inputs to include few-shot examples in your prompt. For more information, see [prompt engineering techniques](../concepts/advanced-prompt-engineering.md) for details on some advanced techniques in prompt design and prompt engineering.

You can reference your input data within the prompts by using the `{{input.column_name}}` format, where column_name corresponds to the names of the columns in your input file.

- Outputs generated during the evaluation will be referenced in subsequent steps using the `{{sample.output_text}}` format.
+ Outputs generated during the evaluation will be referenced in subsequent steps using the `{{sample.output_text}}` format.

> [!NOTE]
> You need to use double curly braces to make sure you reference to your data correctly.
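
To make the column-reference syntax concrete: the evaluation data is a JSONL file whose keys become the referenceable columns. A minimal sketch follows, where the file name and column names (`input`, `output`) are illustrative assumptions; the article shows both `{{input.column_name}}` and `{{item.column_name}}` style references, and only the double-curly-brace syntax itself comes from the docs.

```python
import json

# Illustrative evaluation rows; the keys become columns the prompt can reference.
rows = [
    {"input": "What is the capital of France?", "output": "Paris"},
    {"input": "What is 2 + 2?", "output": "4"},
]

with open("qa_eval.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# A prompt references a column with double curly braces, for example:
user_prompt = "Answer concisely: {{item.input}}"
# After responses are generated, later steps can compare {{sample.output_text}}
# (the generated answer) against {{item.output}} (the expected answer column).
```
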
@@ -92,9 +92,9 @@ Outputs generated during the evaluation will be referenced in subsequent steps u
As part of creating evaluations you'll pick which models to use when generating responses (optional) as well as which models to use when grading models with specific testing criteria.

- In Azure OpenAI you'll be assigning specific model deployments to use as part of your evaluations. You can compare multiple deployments by creating a separate evaluation configuration for each model. This enables you to define specific prompts for each evaluation, providing better control over the variations required by different models.
+ In Azure OpenAI you'll be assigning specific model deployments to use as part of your evaluations. You can compare multiple model deployments in a single evaluation run.

- You can evaluate either a base or a fine-tuned model deployment. The deployments available in your list depend on those you created within your Azure OpenAI resource. If you can't find the desired deployment, you can create a new one from the Azure OpenAI Evaluation page.
+ You can evaluate either base or fine-tuned model deployments. The deployments available in your list depend on those you created within your Azure OpenAI resource. If you can't find the desired deployment, you can create a new one from the Azure OpenAI Evaluation page.

### Testing criteria

@@ -109,7 +109,7 @@ Testing criteria is used to assess the effectiveness of each output generated by
:::image type="content" source="../media/how-to/evaluations/new-evaluation.png" alt-text="Screenshot of the Azure OpenAI evaluation UX with new evaluation selected." lightbox="../media/how-to/evaluations/new-evaluation.png":::

- 3. Enter a name of your evaluation. By default a random name is automatically generated unless you edit and replace it. > select **Upload new dataset**.
+ 3. Enter a name of your evaluation. By default a random name is automatically generated unless you edit and replace it. Select **Upload new dataset**.

:::image type="content" source="../media/how-to/evaluations/upload.png" alt-text="Screenshot of the Azure OpenAI upload UX." lightbox="../media/how-to/evaluations/upload.png":::

@@ -132,7 +132,7 @@ Testing criteria is used to assess the effectiveness of each output generated by
:::image type="content" source="../media/how-to/evaluations/preview.png" alt-text="Screenshot that shows a preview of an uploaded evaluation file." lightbox="../media/how-to/evaluations/preview.png":::

- 5. Select the toggle for **Generate responses**. Select `{{item.input}}` from the dropdown. This will inject the input fields from our evaluation file into individual prompts for a new model run that we want to able to compare against our evaluation dataset. The model will take that input and generate its own unique outputs which in this case will be stored in a variable called `{{sample.output_text}}`. We'll then use that sample output text later as part of our testing criteria. Alternatively you could provide your own custom system message and individual message examples manually.
+ 5. Under **Responses** select the **Create** button. Select `{{item.input}}` from the **Create with a template** dropdown. This will inject the input fields from our evaluation file into individual prompts for a new model run that we want to be able to compare against our evaluation dataset. The model will take that input and generate its own unique outputs, which in this case will be stored in a variable called `{{sample.output_text}}`. We'll then use that sample output text later as part of our testing criteria. Alternatively you could provide your own custom system message and individual message examples manually.

6. Select which model you want to generate responses based on your evaluation. If you don't have a model you can create one. For the purpose of this example we're using a standard deployment of `gpt-4o-mini`.
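
The step-5 flow amounts to: substitute each row's `{{item.input}}` value into the prompt, call the selected deployment, and treat the completion as `{{sample.output_text}}` for the testing criteria. A rough, purely illustrative sketch of that loop (the portal performs this for you; resource and deployment names are placeholders):

```python
from openai import AzureOpenAI

# Placeholder resource values; in the portal this call is made on your behalf.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-10-21",
)

rows = [{"input": "What is the capital of France?", "output": "Paris"}]
prompt_template = "Answer concisely: {{item.input}}"

for item in rows:
    prompt = prompt_template.replace("{{item.input}}", item["input"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    sample_output_text = response.choices[0].message.content
    # Testing criteria then compare sample_output_text ({{sample.output_text}})
    # with item["output"] ({{item.output}}).
    print(sample_output_text)
```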

articles/ai-services/openai/how-to/structured-outputs.md

Lines changed: 6 additions & 4 deletions
@@ -6,18 +6,20 @@ services: cognitive-services
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
- ms.date: 12/18/2024
+ ms.date: 01/30/2025
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
---

# Structured outputs

- Structured outputs make a model follow a [JSON Schema](https://json-schema.org/overview/what-is-jsonschema) definition that you provide as part of your inference API call. This is in contrast to the older [JSON mode](./json-mode.md) feature, which guaranteed valid JSON would be generated, but was unable to ensure strict adherence to the supplied schema. Structured outputs is recommended for function calling, extracting structured data, and building complex multi-step workflows.
+ Structured outputs make a model follow a [JSON Schema](https://json-schema.org/overview/what-is-jsonschema) definition that you provide as part of your inference API call. This is in contrast to the older [JSON mode](./json-mode.md) feature, which guaranteed valid JSON would be generated, but was unable to ensure strict adherence to the supplied schema. Structured outputs are recommended for function calling, extracting structured data, and building complex multi-step workflows.

> [!NOTE]
- > * Currently structured outputs is not supported on [bring your own data](../concepts/use-your-data.md) scenario.
+ > Currently structured outputs are not supported with:
+ > - [Bring your own data](../concepts/use-your-data.md) scenarios.
+ > - `gpt-4o-audio-preview` version: `2024-12-17`.

## Supported models
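
The updated intro paragraph describes schema-constrained generation; a minimal sketch of such a call with the Python SDK, using a Pydantic model as the schema, follows. The deployment name, API version, and credentials are placeholders, and the sketch assumes a recent `openai` package where `client.beta.chat.completions.parse` accepts a Pydantic `response_format`:

```python
from pydantic import BaseModel
from openai import AzureOpenAI

# Placeholder resource values for illustration.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-10-21",
)

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="gpt-4o",  # your deployment name
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # the SDK converts this model into a strict JSON Schema
)

event = completion.choices[0].message.parsed
print(event.name, event.participants)
```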

@@ -280,7 +282,7 @@ Output:
Structured Outputs for function calling can be enabled with a single parameter, by supplying `strict: true`.

> [!NOTE]
- > Structured outputs is not supported with parallel function calls. When using structured outputs set `parallel_tool_calls` to `false`.
+ > Structured outputs are not supported with parallel function calls. When using structured outputs set `parallel_tool_calls` to `false`.

# [Python (Microsoft Entra ID)](#tab/python-secure)
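
For the parallel-function-call note above, a short sketch of a strict tool definition with parallel tool calls disabled. The tool name and schema are illustrative; strict tool schemas also need `additionalProperties: false` and every property listed under `required`:

```python
from openai import AzureOpenAI

# Placeholder resource values for illustration.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-10-21",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "strict": True,  # opt this tool's arguments into structured outputs
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    parallel_tool_calls=False,  # parallel calls aren't supported with structured outputs
)
print(response.choices[0].message.tool_calls[0].function.arguments)
```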

4 image files changed: 37.6 KB, 232 KB, 107 KB, 29.1 KB.

articles/ai-services/translator/toc.yml

Lines changed: 5 additions & 0 deletions
@@ -284,6 +284,11 @@ items:
  - name: What is Microsoft Translator Pro?
    displayName: app service, mobile app, ios, speech-to-speech
    href: translator-pro/overview.md
+ - name: Language support
+   displayName: iso,language code, locale
+   href: translator-pro/language-support.md
+ - name: FAQ
+   href: translator-pro/faq.yml
  - name: Responsible AI and compliance
    items:
    - name: Transparency note
