
Commit 5ca0ddc

Merge pull request #115668 from diberry/diberry/0519-luis-machine-learned-2
[Cogsvcs] LUIS - machine-learned -> learning 2
2 parents 70d6fbc + 2163368 commit 5ca0ddc

10 files changed: +47 −47 lines

articles/cognitive-services/LUIS/app-schema-definition.md

Lines changed: 3 additions & 3 deletions
@@ -20,8 +20,8 @@ When you import and export the app, choose either `.json` or `.lu`.
 
 ## Version 7.x
 
-* Moving to version 7.x, the entities are represented as nested machine-learned entities.
-* Support for authoring nested machine-learned entities with `enableNestedChildren` property on the following authoring APIs:
+* Moving to version 7.x, the entities are represented as nested machine-learning entities.
+* Support for authoring nested machine-learning entities with `enableNestedChildren` property on the following authoring APIs:
   * [Add label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c08)
   * [Add batch label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c09)
   * [Review labels](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c0a)

@@ -59,7 +59,7 @@ When you import and export the app, choose either `.json` or `.lu`.
 
 ## Version 6.x
 
-* Moving to version 6.x, use the new [machine-learned entity](reference-entity-machine-learned-entity.md) to represent your entities.
+* Moving to version 6.x, use the new [machine-learning entity](reference-entity-machine-learned-entity.md) to represent your entities.
 
 ```json
 {

articles/cognitive-services/LUIS/howto-add-prebuilt-models.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ The easiest way to view the value of a prebuilt model is to query from the publi
 
 ## Entities containing a prebuilt entity token
 
-If you have a machine-learned entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learned entity, then add a _required_ feature of a prebuilt entity.
+If you have a machine-learning entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learning entity, then add a _required_ feature of a prebuilt entity.
 
 ## Next steps
 > [!div class="nextstepaction"]

articles/cognitive-services/LUIS/label-entity-example-utterance.md

Lines changed: 7 additions & 7 deletions
@@ -1,12 +1,12 @@
 ---
 title: Label entity example utterance
-description: Learn how to label a machine-learned entity with subcomponents in an example utterance in an intent detail page of the LUIS portal.
+description: Learn how to label a machine-learning entity with subcomponents in an example utterance in an intent detail page of the LUIS portal.
 ms.topic: conceptual
 ms.date: 05/17/2020
-#Customer intent: As a new user, I want to label a machine-learned entity in an example utterance.
+#Customer intent: As a new user, I want to label a machine-learning entity in an example utterance.
 ---
 
-# Label machine-learned entity in an example utterance
+# Label machine-learning entity in an example utterance
 
 Labeling an entity in an example utterance gives LUIS an example of what the entity is and where the entity can appear in the utterance.
 

@@ -33,7 +33,7 @@ Consider the example utterance, `hi, please I want a cheese pizza in 20 minutes`
 1. Select the left-most text, then select the right-most text of the entity, then from the in-place menu, pick the entity you want to label with.
 
 > [!div class="mx-imgBorder"]
-> ![Label complete machine-learned entity](media/label-utterances/label-steps-in-place-menu.png)
+> ![Label complete machine-learning entity](media/label-utterances/label-steps-in-place-menu.png)
 
 ## Label entity from Entity Palette

@@ -47,7 +47,7 @@ The entity palette offers an alternative to the previous labeling experience. It
 3. In the example utterance, _paint_ the entity with the cursor.
 
 > [!div class="mx-imgBorder"]
-> ![Entity palette for machine-learned entity](media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png)
+> ![Entity palette for machine-learning entity](media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png)
 
 ## Adding entity as a feature from the Entity Palette

@@ -70,7 +70,7 @@ Entity roles are labeled using the **Entity palette**.
 After labeling, review the example utterance and ensure the selected span of text has been underlined with the chosen entity. The solid line indicates the text has been labeled.
 
 > [!div class="mx-imgBorder"]
-> ![Labeled complete machine-learned entity](media/label-utterances/example-1-label-machine-learned-entity-complete-order-labeled.png)
+> ![Labeled complete machine-learning entity](media/label-utterances/example-1-label-machine-learned-entity-complete-order-labeled.png)
 
 ## Confirm predicted entity

@@ -109,7 +109,7 @@ Non-machine learned entities include prebuilt entities, regular expression entit
 Entity prediction errors indicate the predicted entity doesn't match the labeled entity. This is visualized with a caution indicator next to the utterance.
 
 > [!div class="mx-imgBorder"]
-> ![Entity palette for machine-learned entity](media/label-utterances/example-utterance-indicates-prediction-error.png)
+> ![Entity palette for machine-learning entity](media/label-utterances/example-utterance-indicates-prediction-error.png)
 
 ## Next steps

articles/cognitive-services/LUIS/luis-concept-app-iteration.md

Lines changed: 1 addition & 1 deletion
@@ -110,7 +110,7 @@ A version can be exported at the app or version level as well. The only differen
 
 The exported file **doesn't** contain:
 
-* Machine-learned information, because the app is retrained after it's imported
+* machine-learning information, because the app is retrained after it's imported
 * Contributor information
 
 In order to back up your LUIS app schema, export a version from the [LUIS portal](https://www.luis.ai/applications).

articles/cognitive-services/LUIS/luis-concept-batch-test.md

Lines changed: 10 additions & 10 deletions
@@ -19,7 +19,7 @@ Batch testing validates your active trained version to measure its prediction ac
 
 ## Group data for batch test
 
-It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained. 
+It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
 
 ## A data set of utterances
 

@@ -30,7 +30,7 @@ Submit a batch file of utterances, known as a *data set*, for batch testing. The
 |*No duplicate utterances|
 |1000 utterances or less|
 
-*Duplicates are considered exact string matches, not matches that are tokenized first. 
+*Duplicates are considered exact string matches, not matches that are tokenized first.
 
 ## Entities allowed in batch tests
 

@@ -41,7 +41,7 @@ All custom entities in the model appear in the batch test entities filter even i
 
 ## Batch file format
 
-The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learned entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
+The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
 
 ## Batch syntax template for intents with entities
 
@@ -52,7 +52,7 @@ Use the following template to start your batch file:
 {
     "text": "example utterance goes here",
     "intent": "intent name goes here",
-    "entities": 
+    "entities":
     [
         {
             "entity": "entity name 1 goes here",

@@ -69,7 +69,7 @@ Use the following template to start your batch file:
 ]
 ```
 
-The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties. 
+The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties.
 
 [!INCLUDE [Entity roles in batch testing - currently not supported](../../../includes/cognitive-services-luis-roles-not-supported-in-batch-testing.md)]
 
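The startPos/endPos convention in the hunk above is easy to get wrong, so a minimal sketch may help. This is illustrative Python, not part of any LUIS SDK; it assumes `endPos` is the zero-based index of the entity's last character (inclusive), and the utterance, intent, and entity names are made up:

```python
import json

def make_batch_entry(text, intent, entity_name, entity_text):
    """Build one batch-file utterance with zero-based startPos/endPos."""
    start = text.index(entity_text)       # zero-based start of the entity
    end = start + len(entity_text) - 1    # assumed inclusive end index
    # Per the docs, the span should not begin or end on a space.
    assert not text[start].isspace() and not text[end].isspace()
    return {
        "text": text,
        "intent": intent,
        "entities": [
            {"entity": entity_name, "startPos": start, "endPos": end}
        ],
    }

entry = make_batch_entry("order a cheese pizza", "OrderFood",
                         "Food", "cheese pizza")
print(json.dumps(entry))
```

Note that the positions are character offsets into the raw `text`, matching the "exact string match, not tokenized" behavior described for duplicates.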
@@ -92,7 +92,7 @@ If you do not want to test entities, include the `entities` property and set the
 
 ## Common errors importing a batch
 
-Common errors include: 
+Common errors include:
 
 > * More than 1,000 utterances
 > * An utterance JSON object that doesn't have an entities property. The property can be an empty array.

@@ -107,7 +107,7 @@ LUIS tracks the state of each data set's last test. This includes the size (numb
 
 ## Batch test results
 
-The batch test result is a scatter graph, known as an error matrix. This graph is a 4-way comparison of the utterances in the batch file and the current model's predicted intent and entities. 
+The batch test result is a scatter graph, known as an error matrix. This graph is a 4-way comparison of the utterances in the batch file and the current model's predicted intent and entities.
 
 Data points on the **False Positive** and **False Negative** sections indicate errors, which should be investigated. If all data points are on the **True Positive** and **True Negative** sections, then your app's accuracy is perfect on this data set.
 

@@ -119,13 +119,13 @@ This chart helps you find utterances that LUIS predicts incorrectly based on its
 
 ## Errors in the results
 
-Errors in the batch test indicate intents that are not predicted as noted in the batch file. Errors are indicated in the two red sections of the chart. 
+Errors in the batch test indicate intents that are not predicted as noted in the batch file. Errors are indicated in the two red sections of the chart.
 
-The false positive section indicates that an utterance matched an intent or entity when it shouldn't have. The false negative indicates an utterance did not match an intent or entity when it should have. 
+The false positive section indicates that an utterance matched an intent or entity when it shouldn't have. The false negative indicates an utterance did not match an intent or entity when it should have.
 
 ## Fixing batch errors
 
-If there are errors in the batch testing, you can either add more utterances to an intent, and/or label more utterances with the entity to help LUIS make the discrimination between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](luis-concept-feature.md) feature with domain-specific vocabulary to help LUIS learn faster. 
+If there are errors in the batch testing, you can either add more utterances to an intent, and/or label more utterances with the entity to help LUIS make the discrimination between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](luis-concept-feature.md) feature with domain-specific vocabulary to help LUIS learn faster.
 
 ## Next steps
 
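The four sections of the error matrix discussed in the hunks above follow the standard confusion-matrix definitions. A small illustrative sketch (not from the LUIS docs) of how a single batch data point would be classified for a given intent:

```python
def classify(expected_intent, predicted_intent, intent):
    """Place one batch utterance in the error matrix for `intent`."""
    expected = expected_intent == intent
    predicted = predicted_intent == intent
    if expected and predicted:
        return "True Positive"
    if not expected and not predicted:
        return "True Negative"
    # Matched when it shouldn't have -> false positive; the reverse -> false negative.
    return "False Positive" if predicted else "False Negative"

print(classify("OrderFood", "OrderFood", "OrderFood"))  # True Positive
print(classify("Greeting", "OrderFood", "OrderFood"))   # False Positive
```

Only the "False Positive" and "False Negative" outcomes land in the two red sections that need investigation.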

articles/cognitive-services/LUIS/luis-concept-best-practices.md

Lines changed: 3 additions & 3 deletions
@@ -95,9 +95,9 @@ Model decomposition has a typical process of:
 
 Once you have created the intent and added example utterances, the following example describes entity decomposition.
 
-Start by identifying complete data concepts you want to extract in an utterance. This is your machine-learned entity. Then decompose the phrase into its parts. This includes identifying subentities, and features.
+Start by identifying complete data concepts you want to extract in an utterance. This is your machine-learning entity. Then decompose the phrase into its parts. This includes identifying subentities, and features.
 
-For example if you want to extract an address, the top machine-learned entity could be called `Address`. While creating the address, identify some of its subentities such as street address, city, state, and postal code.
+For example if you want to extract an address, the top machine-learning entity could be called `Address`. While creating the address, identify some of its subentities such as street address, city, state, and postal code.
 
 Continue decomposing those elements by:
 * Adding a required feature of the postal code as a regular expression entity.
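The `Address` decomposition in the hunk above can be pictured as a nested structure. This is a hypothetical Python sketch, not the LUIS authoring schema; all field names (`children`, `requiredFeature`) and the postal-code pattern are illustrative:

```python
# Hypothetical model of the decomposed Address entity from the text above.
address_entity = {
    "name": "Address",                     # top machine-learning entity
    "children": [
        {"name": "StreetAddress"},
        {"name": "City"},
        {"name": "State"},
        {
            "name": "PostalCode",
            # Required feature backed by a regular expression entity,
            # as the bullet list above suggests (US-style pattern assumed).
            "requiredFeature": {"type": "regex", "pattern": r"\d{5}(-\d{4})?"},
        },
    ],
}

print([child["name"] for child in address_entity["children"]])
```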
@@ -152,7 +152,7 @@ After the app is published, only add utterances from active learning in the deve
 
 ## Don't use few or simple entities
 
-Entities are built for data extraction and prediction. It is important that each intent have machine-learned entities that describe the data in the intent. This helps LUIS predict the intent, even if your client application doesn't need to use the extracted entity.
+Entities are built for data extraction and prediction. It is important that each intent have machine-learning entities that describe the data in the intent. This helps LUIS predict the intent, even if your client application doesn't need to use the extracted entity.
 
 ## Don't use LUIS as a training platform
 

articles/cognitive-services/LUIS/luis-concept-data-extraction.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ ms.date: 05/01/2020
 # Extract data from utterance text with intents and entities
 LUIS gives you the ability to get information from a user's natural language utterances. The information is extracted in a way that it can be used by a program, application, or chat bot to take action. In the following sections, learn what data is returned from intents and entities with examples of JSON.
 
-The hardest data to extract is the machine-learned data because it isn't an exact text match. Data extraction of the machine-learned [entities](luis-concept-entity-types.md) needs to be part of the [authoring cycle](luis-concept-app-iteration.md) until you're confident you receive the data you expect.
+The hardest data to extract is the machine-learning data because it isn't an exact text match. Data extraction of the machine-learning [entities](luis-concept-entity-types.md) needs to be part of the [authoring cycle](luis-concept-app-iteration.md) until you're confident you receive the data you expect.
 
 ## Data location and key usage
 LUIS extracts data from the user's utterance at the published [endpoint](luis-glossary.md#endpoint). The **HTTPS request** (POST or GET) contains the utterance as well as some optional configurations such as staging or production environments.
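To make the endpoint request described in the hunk above concrete, here is a sketch of building a V3 prediction GET URL in Python. The region, app ID, and key are placeholders, and the URL shape should be checked against your app's endpoint in the Azure portal; this is not an official sample:

```python
from urllib.parse import urlencode

# Placeholder values: substitute your own region, app ID, and prediction key.
region = "westus"
app_id = "YOUR-APP-ID"
params = urlencode({
    "subscription-key": "YOUR-PREDICTION-KEY",
    "query": "hi, please I want a cheese pizza in 20 minutes",
})
url = (
    f"https://{region}.api.cognitive.microsoft.com/"
    f"luis/prediction/v3.0/apps/{app_id}/slots/production/predict?{params}"
)
print(url)
```

Sending this URL with any HTTP client returns a JSON body whose `prediction` object carries the top intent and extracted entities, which is the data the surrounding sections describe.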
@@ -240,7 +240,7 @@ Some apps need to be able to find new and emerging names such as products or com
 
 ## Pattern.any entity data
 
-[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied. 
+[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied.
 
 ## Sentiment analysis
 If Sentiment analysis is configured while [publishing](luis-how-to-publish-app.md#sentiment-analysis), the LUIS json response includes sentiment analysis. Learn more about sentiment analysis in the [Text Analytics](https://docs.microsoft.com/azure/cognitive-services/text-analytics/) documentation.
