Commit 4971ee4

fixing merge conflict
2 parents c872e6e + 3eaa371 commit 4971ee4

File tree

15 files changed: +765 −14 lines

articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md

Lines changed: 1 addition & 1 deletion
@@ -166,4 +166,4 @@ Your Labels file should be in the `json` format below to be used when importing

## Next steps

* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
* See the [how-to article](../how-to/label-data.md) for more information about labeling your data.
- * <!--When you're done labeling your data, you can [train your model](../how-to/train-model.md).-->
+ <!--* When you're done labeling your data, you can [train your model](../how-to/train-model.md).-->
Lines changed: 106 additions & 0 deletions
@@ -0,0 +1,106 @@
---
title: Deploy a custom Text Analytics for health model
titleSuffix: Azure Cognitive Services
description: Learn about deploying a model for custom Text Analytics for health.
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: language-service
ms.topic: how-to
ms.date: 10/12/2022
ms.author: aahi
ms.custom: language-service-custom-ta4h
---

# Deploy a custom Text Analytics for health model

Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).

## Prerequisites

* A successfully [created project](create-project.md) with a configured Azure storage account.
* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.

For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).

## Deploy model

After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). We recommend creating a deployment named *production*, to which you assign the best model you've built so far and use in your system, and another deployment named *staging*, to which you can assign the model you're currently working on so you can test it. You can have a maximum of 10 deployments in your project.
# [Language Studio](#tab/language-studio)

[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]

# [REST APIs](#tab/rest-api)

### Submit deployment job

[!INCLUDE [deploy model](../includes/rest-api/deploy-model.md)]

### Get deployment job status

[!INCLUDE [get deployment status](../includes/rest-api/get-deployment-status.md)]

---
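The REST deployment flow above is asynchronous: you submit a job, then poll its status until it reaches a terminal state. A minimal sketch of that polling pattern follows; the helper is hypothetical, and in practice `get_status` would issue a GET against the `operation-location` URL returned by the submit call.

```python
import time

def poll_job(get_status, interval=1.0, timeout=60.0):
    """Poll a long-running job until it reaches a terminal state.

    get_status: callable returning a status string such as
    "notStarted", "running", "succeeded", or "failed".
    Returns the final status, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(interval)
    raise TimeoutError("deployment job did not finish in time")
```

The status names mirror those commonly returned by Azure long-running operations, but check the actual response of your API version before relying on them.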
## Swap deployments

After you've finished testing a model assigned to one deployment and want to assign that model to another deployment, you can swap the two deployments. Swapping takes the model assigned to the first deployment and assigns it to the second, and takes the model assigned to the second deployment and assigns it to the first. You can use this process to swap your *production* and *staging* deployments when you want to promote the model assigned to *staging* to *production*.
# [Language Studio](#tab/language-studio)

[!INCLUDE [Swap deployments](../includes/language-studio/swap-deployment.md)]

# [REST APIs](#tab/rest-api)

[!INCLUDE [Swap deployments](../includes/rest-api/swap-deployment.md)]

---
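Conceptually, a swap is just an atomic exchange of the models assigned to the two deployments. A small sketch of that bookkeeping (the dict-based representation is an illustration, not the service's data model):

```python
def swap_deployments(assignments, first, second):
    """Swap the models assigned to two deployments.

    assignments: dict mapping deployment name -> model name.
    Returns a new dict; the original mapping is left untouched.
    """
    swapped = dict(assignments)
    swapped[first], swapped[second] = assignments[second], assignments[first]
    return swapped
```

Because the exchange is done in one step, callers never observe a state where both deployments point at the same model.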
## Delete deployment

# [Language Studio](#tab/language-studio)

[!INCLUDE [Delete deployment](../includes/language-studio/delete-deployment.md)]

# [REST APIs](#tab/rest-api)

[!INCLUDE [Delete deployment](../includes/rest-api/delete-deployment.md)]

---
## Assign deployment resources

You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.

# [Language Studio](#tab/language-studio)

[!INCLUDE [Assign resource](../../conversational-language-understanding/includes/language-studio/assign-resources.md)]

# [REST APIs](#tab/rest-api)

[!INCLUDE [Assign resource](../../custom-text-classification/includes/rest-api/assign-resources.md)]

---
## Unassign deployment resources

When you unassign or remove a deployment resource from a project, you also delete all the deployments that have been deployed to that resource's region.

# [Language Studio](#tab/language-studio)

[!INCLUDE [Unassign resource](../../conversational-language-understanding/includes/language-studio/unassign-resources.md)]

# [REST APIs](#tab/rest-api)

[!INCLUDE [Unassign resource](../../custom-text-classification/includes/rest-api/unassign-resources.md)]

---

## Next steps

After you have a deployment, you can use it to [extract entities](call-api.md) from text.
Lines changed: 141 additions & 0 deletions
@@ -0,0 +1,141 @@
---
title: Back up and recover your custom Text Analytics for health models
titleSuffix: Azure Cognitive Services
description: Learn how to save and recover your custom Text Analytics for health models.
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: language-service
ms.topic: conceptual
ms.date: 04/25/2022
ms.author: aahi
ms.custom: language-service-custom-ta4h
---
# Back up and recover your custom Text Analytics for health models

When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in that Azure region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, you should design it to fail over into another region. This requires two Azure Language resources in different regions, and synchronizing custom models across them.

If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other, fail-over region where you replicated your project.

Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to make them available for use with the [prediction APIs](https://aka.ms/ct-runtime-swagger).

In this article, you'll learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
## Prerequisites

* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to a different storage account, located in the same region as that resource. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
## Get your resource keys and endpoint

Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.

[!INCLUDE [Get keys and endpoint Azure Portal](../includes/get-keys-endpoint-azure.md)]

> [!TIP]
> Keep a note of the keys and endpoints for both your primary and secondary resources, as well as the primary and secondary container names. Use these values to replace the following placeholders: `{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
> Also take note of your project name, model name, and deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}`, and `{DEPLOYMENT-NAME}`.
## Export your primary project assets

Start by exporting the project assets from the project in your primary resource.

### Submit export job

Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [Export project assets using the REST API](../includes/rest-api/export-project.md)]

### Get export job status

Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [Export project assets using the REST API](../includes/rest-api/get-export-status.md)]

Copy the response body, as you will use it as the body for the next import job.
## Import to a new project

Now go ahead and import the exported project assets into your new project in the secondary region so you can replicate it.

### Submit import job

Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.

[!INCLUDE [Import project using the REST API](../includes/rest-api/import-project.md)]

### Get import job status

Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [Import project using the REST API](../includes/rest-api/get-import-status.md)]
## Train your model

After importing your project, you have only copied its assets and metadata. You still need to train your model, which will incur usage on your account.

### Submit training job

Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [train model](../includes/rest-api/train-model.md)]

### Get training status

Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
## Deploy your model

This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).

> [!TIP]
> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system when redirecting your traffic.

### Submit deployment job

Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [deploy model](../includes/rest-api/deploy-model.md)]

### Get the deployment status

Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.

[!INCLUDE [get deploy status](../includes/rest-api/get-deployment-status.md)]
## Changes in calling the runtime

Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit-task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`; if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
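One way to sketch this retry-then-fail-over logic is shown below. The callables and the 5xx heuristic are assumptions for illustration; tune the retry count and error classification to your own system.

```python
def submit_with_failover(submit_primary, submit_secondary, max_retries=3):
    """Try the primary region first; fall back to secondary on consistent failure.

    submit_primary / submit_secondary: callables that submit the task and
    return an HTTP status code (int).
    Returns a (region, status_code) tuple for the request that was used.
    """
    for _ in range(max_retries):
        status = submit_primary()
        if status < 500:          # treat repeated 5xx as a possible regional outage
            return ("primary", status)
    # Primary failed consistently; retry through the secondary resource.
    return ("secondary", submit_secondary())
```

In a real client, each callable would POST the same request body to the corresponding endpoint with that resource's key; only the endpoint and key change between regions.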
In case you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
## Check if your projects are out of sync

Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project, so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance, since it already contains the latest updates. Setting the frequency of these sync checks is an important choice. We recommend doing this check daily to guarantee the freshness of data in your secondary model.

### Get project details

Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
Repeat this step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they are out of sync.

[!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]

Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model), and [deploying](#deploy-your-model).
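As an illustration, the comparison itself can be as simple as parsing the two `lastModifiedDateTime` values and checking which is newer (the helper name is hypothetical):

```python
from datetime import datetime

def projects_out_of_sync(primary_modified, secondary_modified):
    """Compare lastModifiedDateTime values from the two project-details responses.

    Timestamps are ISO 8601 strings, e.g. "2022-10-12T15:45:00Z".
    Returns True when the primary project has changes the secondary lacks.
    """
    def parse(ts):
        # fromisoformat doesn't accept a trailing "Z" before Python 3.11,
        # so normalize it to an explicit UTC offset.
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return parse(primary_modified) > parse(secondary_modified)
```

A daily scheduled job could call this and trigger the export/import/train/deploy sequence whenever it returns `True`.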
## Next steps

In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with the authoring APIs.

* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)

* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
---
title: How to train your custom Text Analytics for health model
titleSuffix: Azure Cognitive Services
description: Learn about how to train your model for custom Text Analytics for health.
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: language-service
ms.topic: how-to
ms.date: 05/06/2022
ms.author: aahi
ms.custom: language-service-custom-ta4h
---

# Train your custom Text Analytics for health model

Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.

To train a model, you start a training job; only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.

Training times can range from a few minutes for a small number of documents to several hours, depending on the dataset size and the complexity of your schema.
## Prerequisites

* A successfully [created project](create-project.md) with a configured Azure blob storage account.
* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
* [Labeled data](label-data.md).

See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
## Data splitting

Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one serves a different function.
The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
The **testing set** is a blind set that isn't introduced to the model during training, only during evaluation.
After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and [evaluation metrics](../concepts/evaluation-metrics.md) are calculated based on these predictions. Model training and evaluation apply only to newly defined entities with learned components; Text Analytics for health entities are excluded from model training and evaluation, because they are entities with prebuilt components. It's recommended to make sure that all your labeled entities are adequately represented in both the training and testing sets.
Custom Text Analytics for health supports two methods for data splitting:

* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.

> [!NOTE]
> If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to the training set will be split according to the percentages provided.

* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
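For intuition, the automatic option behaves roughly like the following sketch. This is a simplification for illustration only; the service's actual sampling is internal and may differ.

```python
import random

def split_documents(documents, train_fraction=0.8, seed=0):
    """Shuffle labeled documents and split them into training and testing sets.

    Returns a (train, test) pair of lists; with the default fraction,
    80% of documents land in train and the rest in test.
    """
    docs = list(documents)
    random.Random(seed).shuffle(docs)  # deterministic shuffle for a fixed seed
    cut = int(len(docs) * train_fraction)
    return docs[:cut], docs[cut:]
```

Shuffling before cutting is what keeps each labeled entity roughly proportionally represented in both sets, which is why the article recommends checking that representation after the split.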
## Train model

# [Language studio](#tab/Language-studio)

[!INCLUDE [Train model](../includes/language-studio/train-model.md)]

# [REST APIs](#tab/REST-APIs)

### Start training job

[!INCLUDE [train model](../includes/rest-api/train-model.md)]

### Get training job status

Training can take some time, depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.

[!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]

---
### Cancel training job

# [Language Studio](#tab/language-studio)

[!INCLUDE [Cancel training](../includes/language-studio/cancel-training.md)]

# [REST APIs](#tab/rest-api)

[!INCLUDE [Cancel training](../includes/rest-api/cancel-training.md)]

---
## Next steps

After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
---
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: language-service
ms.custom: event-tier1-build-2022
ms.topic: include
ms.date: 05/24/2022
ms.author: aahi
---

To cancel a training job from within [Language Studio](https://aka.ms/languageStudio), go to the **Training jobs** page. Select the training job you want to cancel and click on **Cancel** from the top menu.
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
---
titleSuffix: Azure Cognitive Services
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: language-service
ms.custom: event-tier1-build-2022
ms.topic: include
ms.date: 05/24/2022
ms.author: aahi
---

To delete a deployment from within [Language Studio](https://aka.ms/languageStudio), go to the **Deploying a model** page. Select the deployment you want to delete and click on **Delete deployment** from the top menu.
