
Commit 6800b2e

Merge branch 'main' of https://github.com/microsoftdocs/azure-ai-docs-pr into llm-devops

2 parents e9b0c12 + afee2a2

36 files changed: +697 −405 lines

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 10 additions & 0 deletions

```diff
@@ -769,6 +769,16 @@
       "source_path_from_root": "/articles/ai-services/speech-service/video-translation-studio.md",
       "redirect_url": "/azure/ai-services/speech-service/video-translation-get-started",
       "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/qnamaker/how-to/migrate-to-openai.md",
+      "redirect_url": "/azure/ai-services/qnamaker/overview/overview",
+      "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md",
+      "redirect_url": "/azure/ai-services/language-service/question-answering/overview",
+      "redirect_document_id": true
     }
   ]
 }
```
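The redirection entries added above all follow the same fixed shape. As an illustrative aside (not part of the docs build tooling), a minimal structural check over these entries could be sketched in Python; the `check_redirect` helper and its rules are hypothetical, inferred only from the entries shown in this diff:

```python
import json

# Excerpt mirroring the two entries added in this commit.
redirections = json.loads("""
[
  {
    "source_path_from_root": "/articles/ai-services/qnamaker/how-to/migrate-to-openai.md",
    "redirect_url": "/azure/ai-services/qnamaker/overview/overview",
    "redirect_document_id": true
  },
  {
    "source_path_from_root": "/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md",
    "redirect_url": "/azure/ai-services/language-service/question-answering/overview",
    "redirect_document_id": true
  }
]
""")

def check_redirect(entry: dict) -> bool:
    """Minimal structural check: source is a repo-rooted .md path,
    target is a site-rooted URL, and the id flag is boolean."""
    return (
        entry["source_path_from_root"].startswith("/articles/")
        and entry["source_path_from_root"].endswith(".md")
        and entry["redirect_url"].startswith("/azure/")
        and isinstance(entry["redirect_document_id"], bool)
    )

print(all(check_redirect(e) for e in redirections))  # True
```

Both entries added here pass this check, which is consistent with their purpose: the deleted `azure-openai-integration.md` and `migrate-to-openai.md` pages are rerouted to their surviving overview articles.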

articles/ai-services/computer-vision/includes/quickstarts-sdk/identity-java-sdk.md

Lines changed: 1 addition & 13 deletions

```diff
@@ -51,19 +51,7 @@ Get started with facial recognition using the Face client library for Java. Foll
     <dependency>
       <groupId>com.azure</groupId>
       <artifactId>azure-ai-vision-face</artifactId>
-      <version>1.0.0-beta.1</version>
-    </dependency>
-    <!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
-    <dependency>
-      <groupId>org.apache.httpcomponents</groupId>
-      <artifactId>httpclient</artifactId>
-      <version>4.5.13</version>
-    </dependency>
-    <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
-    <dependency>
-      <groupId>com.google.code.gson</groupId>
-      <artifactId>gson</artifactId>
-      <version>2.11.0</version>
+      <version>1.0.0-beta.2</version>
     </dependency>
   </dependencies>
</project>
```

articles/ai-services/computer-vision/tutorials/liveness.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -445,7 +445,7 @@ The high-level steps involved in liveness with verification orchestration are il

     var sessionClient = new FaceSessionClient(endpoint, credential);

-    var createContent = new CreateLivenessSessionContent(LivenessOperationMode.Passive)
+    var createContent = new CreateLivenessWithVerifySessionContent(LivenessOperationMode.Passive)
     {
         DeviceCorrelationId = "723d6d03-ef33-40a8-9682-23a1feb7bccd"
     };
@@ -472,7 +472,7 @@ The high-level steps involved in liveness with verification orchestration are il
         .credential(new AzureKeyCredential(accountKey))
         .buildClient();

-    CreateLivenessSessionContent parameters = new CreateLivenessSessionContent(LivenessOperationMode.PASSIVE)
+    CreateLivenessWithVerifySessionContent parameters = new CreateLivenessWithVerifySessionContent(LivenessOperationMode.PASSIVE)
         .setDeviceCorrelationId("723d6d03-ef33-40a8-9682-23a1feb7bccd")
         .setSendResultsToClient(false);

@@ -500,7 +500,7 @@ The high-level steps involved in liveness with verification orchestration are il
         reference_image_content = fd.read()

     created_session = await face_session_client.create_liveness_with_verify_session(
-        CreateLivenessSessionContent(
+        CreateLivenessWithVerifySessionContent(
             liveness_operation_mode=LivenessOperationMode.PASSIVE,
             device_correlation_id="723d6d03-ef33-40a8-9682-23a1feb7bccd",
         ),
```
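Across all three SDK samples, the fix is the same rename: a liveness-with-verify session must be created from `CreateLivenessWithVerifySessionContent`, not the plain `CreateLivenessSessionContent`. As a rough sketch of what those two parameters amount to on the wire, the following standalone helper builds the session payload; the camelCase field names are an assumption inferred from the SDK parameter names above, not a confirmed REST contract:

```python
# Illustrative only: assumed wire-level payload for creating a
# liveness-with-verify session. Field names are guesses derived from the
# SDK parameters (liveness_operation_mode, device_correlation_id).
def build_session_payload(operation_mode: str, device_correlation_id: str) -> dict:
    return {
        "livenessOperationMode": operation_mode,
        "deviceCorrelationId": device_correlation_id,
    }

payload = build_session_payload("Passive", "723d6d03-ef33-40a8-9682-23a1feb7bccd")
print(payload["livenessOperationMode"])  # Passive
```

The device correlation ID is the same GUID in all three samples, so only the content type (and hence the session kind) changes with this commit.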

articles/ai-services/content-safety/concepts/custom-categories.md

Lines changed: 0 additions & 34 deletions

```diff
@@ -16,38 +16,6 @@ ms.author: pafarley

 Azure AI Content Safety lets you create and manage your own content moderation categories for enhanced moderation and filtering that matches your specific policies or use cases.

-## Custom categories Training Pipeline Overview
-![image](https://github.com/user-attachments/assets/2e097136-0e37-4b5e-ba59-cafcfd733d72)
-
-### Pipeline Components
-The training pipeline is designed to leverage a combination of universal data assets, user-provided inputs, and advanced GPT model fine-tuning techniques to produce high-quality models tailored to specific tasks.
-#### Data Assets
-Filtered Universal Data: This component gathers datasets from multiple domains to create a comprehensive and diverse dataset collection. The goal is to have a robust data foundation that provides a variety of contexts for model training.
-User Inputs
-Customer Task Metadata: Metadata provided by customers, which defines the specific requirements and context of the task they wish the model to perform.
-Customer Demonstrations: Sample demonstrations provided by customers that illustrate the expected output or behavior for the model. These demonstrations help optimize the model’s response based on real-world expectations.
-
-#### Optimized Customer Prompt
-Based on the customer metadata and demonstrations, an optimized prompt is generated. This prompt refines the inputs provided to the model, aligning it closely with customer needs and enhancing the model’s task performance.
-
-#### GPTX Synthetic Task-Specific Dataset
-Using the optimized prompt and filtered universal data, a synthetic, task-specific dataset is created. This dataset is tailored to the specific task requirements, enabling the model to understand and learn the desired behaviors and patterns.
-### Model Training and Fine-Tuning
-
-#### Model Options: The pipeline supports multiple language models (LM), including Zcode, SLM, or any other language model (LM) suitable for the task.
-Task-Specific Fine-Tuned Model: The selected language model is fine-tuned on the synthetic task-specific dataset to produce a model that is highly optimized for the specific task.
-User Outputs
-
-#### ONNX Model: The fine-tuned model is converted into an ONNX (Open Neural Network Exchange) model format, ensuring compatibility and efficiency for deployment.
-Deployment: The ONNX model is deployed, enabling users to make inference calls and access the model’s predictions. This deployment step ensures that the model is ready for production use in customer applications.
-Key Features of the Training Pipeline
-
-#### Task Specificity: The pipeline allows for the creation of models finely tuned to specific customer tasks, thanks to the integration of customer metadata and demonstrations.
-- Scalability and Flexibility: The pipeline supports multiple language models, providing flexibility in choosing the model architecture best suited to the task.
-- Efficiency in Deployment: The conversion to ONNX format ensures that the final model is lightweight and efficient, optimized for deployment environments.
-- Continuous Improvement: By using synthetic datasets generated from diverse universal data sources, the pipeline can continuously improve model quality and applicability across various domains.
-
-
 ## Types of customization

 There are multiple ways to define and use custom categories, which are detailed and compared in this section.
@@ -82,8 +50,6 @@ This implementation works on text content and image content.
 ## How it works

 ### [Custom categories (standard) API](#tab/standard)
-![image](https://github.com/user-attachments/assets/5c377ec4-379b-4b41-884c-13524ca126d0)
-

 The Azure AI Content Safety custom categories feature uses a multi-step process for creating, training, and using custom content classification models. Here's a look at the workflow:
```
articles/ai-services/content-safety/quickstart-custom-categories.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -96,7 +96,7 @@ curl -X PUT "<your_endpoint>/contentsafety/text/categories/survival-advice?api-v
 Replace <your_api_key> and <your_endpoint> with your own values, and also **append the version number you obtained from the last step.** Allow enough time for model training: the end-to-end execution of custom category training can take from around five hours to ten hours. Plan your moderation pipeline accordingly. After you receive the response, store the operation ID (referred to as `id`) in a temporary location. This ID will be necessary for retrieving the build status using the **Get status** API in the next section.

 ```bash
-curl -X POST "<your_endpoint>/contentsafety/text/categories/survival-advice:build?api-version=2024-09-15-preview**&version={version}**" \
+curl -X POST "<your_endpoint>/contentsafety/text/categories/survival-advice:build?api-version=2024-09-15-preview&version={version}" \
 -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
 -H "Content-Type: application/json"
 ```
````

articles/ai-services/language-service/custom-text-analytics-for-health/overview.md

Lines changed: 4 additions & 1 deletion

```diff
@@ -12,7 +12,10 @@ ms.author: jboback
 ms.custom: language-service-custom-ta4h
 ---

-# What is custom Text Analytics for health?
+# What is custom Text Analytics for health?
+
+> [!NOTE]
+> Custom text analytics for health (preview) will be retired on 10 January 2025. Please transition to another custom model training service, such as custom named entity recognition in Azure AI Language, by that date. Until 10 January 2025, you can continue to use custom text analytics for health (preview) in your existing projects without disruption, but you can't create new projects. On 10 January 2025, workloads running on custom text analytics for health (preview) will be deleted and associated project data will be lost.

 Custom Text Analytics for health is one of the custom features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
```
articles/ai-services/language-service/custom-text-analytics-for-health/quickstart.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -15,6 +15,9 @@ zone_pivot_groups: usage-custom-language-features

 # Quickstart: custom Text Analytics for health

+> [!NOTE]
+> Custom text analytics for health (preview) will be retired on 10 January 2025. Please transition to another custom model training service, such as custom named entity recognition in Azure AI Language, by that date. Until 10 January 2025, you can continue to use custom text analytics for health (preview) in your existing projects without disruption, but you can't create new projects. On 10 January 2025, workloads running on custom text analytics for health (preview) will be deleted and associated project data will be lost.
+
 Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.

 In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example we’ll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
```

articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md

Lines changed: 0 additions & 76 deletions
This file was deleted.

articles/ai-services/language-service/question-answering/how-to/migrate-qnamaker.md

Lines changed: 0 additions & 3 deletions

```diff
@@ -11,9 +11,6 @@ ms.custom: language-service-question-answering

 # Migrate from QnA Maker knowledge bases to custom question answering

-> [!NOTE]
-> You can also migrate to [Azure OpenAI](../../../qnamaker/How-To/migrate-to-openai.md).
-
 Custom question answering, a feature of Azure AI Language was introduced in May 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each custom question answering project is equivalent to a knowledge base in QnA Maker. You can easily migrate knowledge bases from a QnA Maker resource to custom question answering projects within a [language resource](https://aka.ms/create-language-resource). You can also choose to migrate knowledge bases from multiple QnA Maker resources to a specific language resource.

 To successfully migrate knowledge bases, **the account performing the migration needs contributor access to the selected QnA Maker and language resource**. When a knowledge base is migrated, the following details are copied to the new custom question answering project:
```

articles/ai-services/language-service/question-answering/overview.md

Lines changed: 0 additions & 3 deletions

```diff
@@ -13,9 +13,6 @@ ms.custom: language-service-question-answering

 # What is custom question answering?

-> [!NOTE]
-> [Azure OpenAI On Your Data](../../openai/concepts/use-your-data.md) utilizes large language models (LLMs) to produce similar results to Custom Question Answering. If you wish to connect an existing Custom Question Answering project to Azure OpenAI On Your Data, please check out our [guide]( how-to/azure-openai-integration.md).
-
 Custom question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data. It is used to find appropriate answers from customer input or from a project.

 Custom question answering is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications. This offering includes features like enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support.
```
