articles/cognitive-services/LUIS/howto-add-prebuilt-models.md (+1 −1)
@@ -73,7 +73,7 @@ The easiest way to view the value of a prebuilt model is to query from the publi
## Entities containing a prebuilt entity token
-If you have a machine-learned entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learned entity, then add a _required_ feature of a prebuilt entity.
+If you have a machine-learning entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learning entity, then add a _required_ feature of a prebuilt entity.
@@ -47,7 +47,7 @@ The entity palette offers an alternative to the previous labeling experience. It
3. In the example utterance, _paint_ the entity with the cursor.
> [!div class="mx-imgBorder"]
-> [screenshot]
+> [screenshot]
## Adding entity as a feature from the Entity Palette
@@ -70,7 +70,7 @@ Entity roles are labeled using the **Entity palette**.
After labeling, review the example utterance and ensure the selected span of text has been underlined with the chosen entity. The solid line indicates the text has been labeled.
Entity prediction errors indicate the predicted entity doesn't match the labeled entity. This is visualized with a caution indicator next to the utterance.
> [!div class="mx-imgBorder"]
-> [screenshot]
+> [screenshot]
articles/cognitive-services/LUIS/luis-concept-batch-test.md (+10 −10)
@@ -19,7 +19,7 @@ Batch testing validates your active trained version to measure its prediction ac
## Group data for batch test
-It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
+It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
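The three-way split is easy to script; below is a minimal sketch (the function name, seed, and equal thirds are assumptions, not part of the LUIS guidance):

```python
import random

def split_utterances(utterances, seed=42):
    """Divide a data set into example, endpoint, and batch-test sets."""
    shuffled = utterances[:]
    random.Random(seed).shuffle(shuffled)
    third = len(shuffled) // 3
    examples = shuffled[:third]           # add to intents as example utterances
    endpoint = shuffled[third:2 * third]  # replay against the published endpoint
    batch = shuffled[2 * third:]          # reserve for batch testing after training
    return examples, endpoint, batch
```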
## A data set of utterances
@@ -30,7 +30,7 @@ Submit a batch file of utterances, known as a *data set*, for batch testing. The
|*No duplicate utterances|
|1000 utterances or less|
-*Duplicates are considered exact string matches, not matches that are tokenized first.
+*Duplicates are considered exact string matches, not matches that are tokenized first.
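Because duplicates are exact string matches, a byte-for-byte pre-check is enough; a small illustrative sketch:

```python
# "book a flight" and "Book a flight" are NOT duplicates under exact
# string matching; only identical strings are flagged.
def find_exact_duplicates(utterances):
    seen, dupes = set(), []
    for u in utterances:
        if u in seen:
            dupes.append(u)
        else:
            seen.add(u)
    return dupes

print(find_exact_duplicates(["book a flight", "Book a flight", "book a flight"]))
# ['book a flight']
```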
## Entities allowed in batch tests
@@ -41,7 +41,7 @@ All custom entities in the model appear in the batch test entities filter even i
## Batch file format
-The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learned entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
+The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
## Batch syntax template for intents with entities
@@ -52,7 +52,7 @@ Use the following template to start your batch file:
    {
        "text": "example utterance goes here",
        "intent": "intent name goes here",
-       "entities":
+       "entities":
        [
            {
                "entity": "entity name 1 goes here",
@@ -69,7 +69,7 @@ Use the following template to start your batch file:
]
```
-The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties.
+The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties.
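To make the zero-based offsets concrete, here is a hedged sketch that builds one batch entry; the helper name is hypothetical, and it treats **endPos** as the index of the entity's last character, which is how the template's values are usually read:

```python
import json

def make_entry(text, intent, entity_name, entity_text):
    start = text.find(entity_text)
    if start == -1:
        raise ValueError(f"{entity_text!r} not found in utterance")
    return {
        "text": text,
        "intent": intent,
        "entities": [{
            "entity": entity_name,
            "startPos": start,                       # zero-based, not on a space
            "endPos": start + len(entity_text) - 1,  # index of the last character
        }],
    }

print(json.dumps([make_entry("book a flight to seattle", "BookFlight",
                             "Location", "seattle")], indent=2))
```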
[!INCLUDE [Entity roles in batch testing - currently not supported](../../../includes/cognitive-services-luis-roles-not-supported-in-batch-testing.md)]
@@ -92,7 +92,7 @@ If you do not want to test entities, include the `entities` property and set the
## Common errors importing a batch
-Common errors include:
+Common errors include:
> * More than 1,000 utterances
> * An utterance JSON object that doesn't have an entities property. The property can be an empty array.
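Both checks are easy to automate before uploading; a hypothetical pre-import validation sketch (not part of any LUIS tooling):

```python
import json

def validate_batch(path):
    problems = []
    with open(path, encoding="utf-8") as f:
        utterances = json.load(f)
    if len(utterances) > 1000:
        problems.append(f"{len(utterances)} utterances; the limit is 1,000")
    for i, u in enumerate(utterances):
        # an empty array is fine, a missing key is not
        if "entities" not in u:
            problems.append(f"utterance {i} has no 'entities' property")
    return problems
```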
@@ -107,7 +107,7 @@ LUIS tracks the state of each data set's last test. This includes the size (numb
## Batch test results
-The batch test result is a scatter graph, known as an error matrix. This graph is a 4-way comparison of the utterances in the batch file and the current model's predicted intent and entities.
+The batch test result is a scatter graph, known as an error matrix. This graph is a 4-way comparison of the utterances in the batch file and the current model's predicted intent and entities.
Data points on the **False Positive** and **False Negative** sections indicate errors, which should be investigated. If all data points are on the **True Positive** and **True Negative** sections, then your app's accuracy is perfect on this data set.
@@ -119,13 +119,13 @@ This chart helps you find utterances that LUIS predicts incorrectly based on its
## Errors in the results
-Errors in the batch test indicate intents that are not predicted as noted in the batch file. Errors are indicated in the two red sections of the chart.
+Errors in the batch test indicate intents that are not predicted as noted in the batch file. Errors are indicated in the two red sections of the chart.
-The false positive section indicates that an utterance matched an intent or entity when it shouldn't have. The false negative indicates an utterance did not match an intent or entity when it should have.
+The false positive section indicates that an utterance matched an intent or entity when it shouldn't have. The false negative indicates an utterance did not match an intent or entity when it should have.
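The four sections map onto standard confusion counts per intent; a sketch of how they could be tallied, assuming (expected, predicted) pairs:

```python
from collections import Counter

def error_matrix(results, intent):
    counts = Counter()
    for expected, predicted in results:
        if predicted == intent:
            counts["true positive" if expected == intent else "false positive"] += 1
        else:
            counts["false negative" if expected == intent else "true negative"] += 1
    return counts

print(error_matrix([("BookFlight", "BookFlight"), ("None", "BookFlight")], "BookFlight"))
# Counter({'true positive': 1, 'false positive': 1})
```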
## Fixing batch errors
-If there are errors in the batch testing, you can either add more utterances to an intent, and/or label more utterances with the entity to help LUIS make the discrimination between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](luis-concept-feature.md) feature with domain-specific vocabulary to help LUIS learn faster.
+If there are errors in the batch testing, you can either add more utterances to an intent, and/or label more utterances with the entity to help LUIS make the discrimination between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](luis-concept-feature.md) feature with domain-specific vocabulary to help LUIS learn faster.
articles/cognitive-services/LUIS/luis-concept-best-practices.md (+3 −3)
@@ -95,9 +95,9 @@ Model decomposition has a typical process of:
Once you have created the intent and added example utterances, the following example describes entity decomposition.
-Start by identifying complete data concepts you want to extract in an utterance. This is your machine-learned entity. Then decompose the phrase into its parts. This includes identifying subentities, and features.
+Start by identifying complete data concepts you want to extract in an utterance. This is your machine-learning entity. Then decompose the phrase into its parts. This includes identifying subentities, and features.
-For example if you want to extract an address, the top machine-learned entity could be called `Address`. While creating the address, identify some of its subentities such as street address, city, state, and postal code.
+For example if you want to extract an address, the top machine-learning entity could be called `Address`. While creating the address, identify some of its subentities such as street address, city, state, and postal code.
Continue decomposing those elements by:
* Adding a required feature of the postal code as a regular expression entity.
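As an illustration of that last step, a US-style postal code can be matched with a short regular expression; the exact pattern is an assumption and should be adapted to your locale:

```python
import re

# Matches 5-digit ZIP codes with an optional 4-digit extension.
POSTAL_CODE = re.compile(r"\b\d{5}(?:-\d{4})?\b")

print(POSTAL_CODE.search("123 Main St, Redmond, WA 98052").group())  # 98052
```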
@@ -152,7 +152,7 @@ After the app is published, only add utterances from active learning in the deve
## Don't use few or simple entities
-Entities are built for data extraction and prediction. It is important that each intent have machine-learned entities that describe the data in the intent. This helps LUIS predict the intent, even if your client application doesn't need to use the extracted entity.
+Entities are built for data extraction and prediction. It is important that each intent have machine-learning entities that describe the data in the intent. This helps LUIS predict the intent, even if your client application doesn't need to use the extracted entity.
articles/cognitive-services/LUIS/luis-concept-data-extraction.md (+2 −2)
@@ -9,7 +9,7 @@ ms.date: 05/01/2020
# Extract data from utterance text with intents and entities
LUIS gives you the ability to get information from a user's natural language utterances. The information is extracted in a way that it can be used by a program, application, or chat bot to take action. In the following sections, learn what data is returned from intents and entities with examples of JSON.
-The hardest data to extract is the machine-learned data because it isn't an exact text match. Data extraction of the machine-learned[entities](luis-concept-entity-types.md) needs to be part of the [authoring cycle](luis-concept-app-iteration.md) until you're confident you receive the data you expect.
+The hardest data to extract is the machine-learning data because it isn't an exact text match. Data extraction of the machine-learning[entities](luis-concept-entity-types.md) needs to be part of the [authoring cycle](luis-concept-app-iteration.md) until you're confident you receive the data you expect.
## Data location and key usage
LUIS extracts data from the user's utterance at the published [endpoint](luis-glossary.md#endpoint). The **HTTPS request** (POST or GET) contains the utterance as well as some optional configurations such as staging or production environments.
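For context, a prediction request can be issued with a simple GET; the URL shape follows the public v3 prediction API, and the host, app ID, and key below are placeholders:

```python
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com"  # your prediction resource
APP_ID = "00000000-0000-0000-0000-000000000000"          # placeholder app ID
PREDICTION_KEY = "<your-prediction-key>"                 # placeholder key

resp = requests.get(
    f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict",
    params={"query": "book a flight to seattle",
            "subscription-key": PREDICTION_KEY},
)
print(resp.json()["prediction"]["topIntent"])
```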
@@ -240,7 +240,7 @@ Some apps need to be able to find new and emerging names such as products or com
## Pattern.any entity data
-[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied.
+[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied.
## Sentiment analysis
If Sentiment analysis is configured while [publishing](luis-how-to-publish-app.md#sentiment-analysis), the LUIS json response includes sentiment analysis. Learn more about sentiment analysis in the [Text Analytics](https://docs.microsoft.com/azure/cognitive-services/text-analytics/) documentation.
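When enabled, the sentiment appears inside the prediction object; the fragment below is illustrative, with made-up values:

```python
# Illustrative shape of the sentiment portion of a v3 prediction response.
response_fragment = {
    "prediction": {
        "topIntent": "BookFlight",
        "sentiment": {"label": "positive", "score": 0.98},
    }
}
print(response_fragment["prediction"]["sentiment"]["label"])  # positive
```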