## Suggested adoption strategy

If you use Bot Framework, Bing Spell Check V7, or want to migrate your LUIS app authoring only, continue to use the V2 endpoint.

If you know none of your client applications or integrations (Bot Framework and Bing Spell Check V7) are impacted, and you are comfortable migrating your LUIS app authoring and your prediction endpoint at the same time, begin using the V3 prediction endpoint. The V2 prediction endpoint will still be available and is a good fall-back strategy.

## Not supported

### Bing Spell Check

This API is not supported in the V3 prediction endpoint. Continue to use the V2 prediction endpoint for spelling corrections. If you need spelling correction while using the V3 API, have the client application call the [Bing Spell Check](https://docs.microsoft.com/azure/cognitive-services/bing-spell-check/overview) API and correct the text before sending it to the LUIS API.

## Bot Framework and Azure Bot Service client applications

Continue to use the V2 prediction endpoint until V4.7 of the Bot Framework is released.

## V2 API deprecation

The V2 prediction API will not be deprecated for at least 9 months after the V3 preview (June 8th, 2020).

## Endpoint URL changes

### Changes by slot name and version name

If you want to query by version, you first need to publish via API.

|Slot name|
|--|
|`production`|
|`staging`|

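
Assuming the V3 route shape for the prediction endpoint, the slot-based and version-based URLs can be sketched as follows (the region, app ID, and utterance are placeholders):

```python
# Sketch of V3 prediction endpoint URLs; region and app ID are placeholders.
from urllib.parse import quote

region = "westus"
app_id = "00000000-0000-0000-0000-000000000000"
base = f"https://{region}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{app_id}"

# Query a published slot (production or staging).
slot_url = f"{base}/slots/production/predict?query={quote('turn on the lights')}"

# Query a specific version instead of a slot.
version_url = f"{base}/versions/0.1/predict?query={quote('turn on the lights')}"

print(slot_url)
print(version_url)
```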

## Request changes

### Query string changes

The V3 API has different query string parameters.

|Param name|Type|Version|Default|Purpose|
|--|--|--|--|--|
|`log`|boolean|V2 & V3|false|Store the query in the log file. The default value is false.|
|`query`|string|V3 only|No default - it is required in the GET request|**In V2**, the utterance to be predicted is in the `q` parameter. <br><br>**In V3**, the functionality is passed in the `query` parameter.|
|`show-all-intents`|boolean|V3 only|false|Return all intents with the corresponding score in the **prediction.intents** object. Intents are returned as objects in a parent `intents` object. This allows programmatic access without needing to find the intent in an array: `prediction.intents.give`. In V2, these were returned in an array.|
|`verbose`|boolean|V2 & V3|false|**In V2**, when set to true, all predicted intents were returned. If you need all predicted intents, use the V3 param of `show-all-intents`.<br><br>**In V3**, this parameter only provides entity metadata details of entity prediction.|
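
For example, the V3 query string above can be assembled with standard URL encoding (the parameter names come from the table; the utterance is illustrative):

```python
from urllib.parse import urlencode

params = {
    "query": "book 2 tickets to paris",  # V3 uses `query`, not the V2 `q`
    "show-all-intents": "true",          # return every intent with its score
    "verbose": "true",                   # include entity metadata in $instance
    "log": "false",                      # do not store the query in the log file
}
query_string = urlencode(params)
print(query_string)
```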

## Response changes

The query response JSON changed to allow greater programmatic access to the data used most frequently.

### Top level JSON changes

The top-level JSON properties for V3 are `query` and `prediction`.

* Clear distinction between the original utterance, `query`, and the returned prediction, `prediction`.
* Easier programmatic access to predicted data. Instead of enumerating through an array as in V2, you can access values by **name** for both intents and entities. For predicted entity roles, the role name is returned because it is unique across the entire app.
* Data types, if determined, are respected. Numerics are no longer returned as strings.
* Distinction between first-priority prediction information and additional metadata, returned in the `$instance` object.

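
To illustrate the points above, here is a minimal, hand-written V3-style response (the intent and entity names are illustrative) accessed by name rather than by enumerating arrays:

```python
# A hand-written V3-style prediction response; names and scores are illustrative.
response = {
    "query": "give 2 gifts to my mother",
    "prediction": {
        "topIntent": "give",
        "intents": {
            "give": {"score": 0.98},
            "None": {"score": 0.01},
        },
        "entities": {
            "number": [2],  # numerics are typed, not returned as strings
        },
    },
}

prediction = response["prediction"]
# Access the top intent's score directly by name - no array search needed.
top_score = prediction["intents"][prediction["topIntent"]]["score"]
print(top_score)
```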
### Entity response changes

#### Marking placement of entities in utterances

**In V2**, an entity was marked in an utterance with the `startIndex` and `endIndex`.

**In V3**, the entity is marked with `startIndex` and `entityLength`.

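
A quick sketch of the V3 marking (the utterance and positions are illustrative): the matched text is recovered from `startIndex` and `entityLength`.

```python
utterance = "Send Hazem a new message"

# V3 marks an entity with startIndex and entityLength.
start_index, entity_length = 5, 5
matched = utterance[start_index:start_index + entity_length]
print(matched)  # Hazem
```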

If you need entity metadata, the query string needs to use the `verbose=true` flag.

#### Each predicted entity is represented as an array

The `prediction.entities.<entity-name>` object contains an array because each entity can be predicted more than once in the utterance.

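
Because each entity name maps to an array, client code iterates the array even when only one match is expected (a sketch with an illustrative `number` entity):

```python
# Illustrative V3 entities object: "number" was predicted twice in the utterance.
entities = {"number": [2, 4]}

for value in entities["number"]:
    print(value)

first_number = entities["number"][0]
```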

<a name="prebuilt-entities-with-new-json"></a>

#### Prebuilt entity changes

The V3 response object includes changes to prebuilt entities. Review [specific prebuilt entities](luis-reference-prebuilt-entities.md) to learn more.

#### List entity prediction changes

The JSON for a list entity prediction has changed to be an array of arrays (shown here for an illustrative `Cities` list entity):

```json
"entities": {
    "Cities": [
        ["Dallas", "DFW-airport"],
        ["Seattle"]
    ]
}
```

Each interior array corresponds to text inside the utterance. The interior object is an array because the same text can appear in more than one sublist of a list entity.

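
Navigating that array of arrays might look like this (the `Cities` list entity and its sublists are illustrative):

```python
# Illustrative V3 list entity prediction: the outer array has one element per
# matched text span; each inner array holds every sublist that span matched.
entities = {
    "Cities": [
        ["Dallas", "DFW-airport"],  # one text span matched two sublists
        ["Seattle"],
    ]
}

for span_matches in entities["Cities"]:
    for sublist_key in span_matches:
        print(sublist_key)
```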

When mapping between the `entities` object and the `$instance` object, the order of objects is preserved for list entity predictions.

In V2, the `entities` array returned all the predicted entities, with the entity name being the unique identifier. In V3, if the entity uses roles and the prediction is for an entity role, the primary identifier is the role name. This is possible because entity role names must be unique across the entire app, including other model (intent, entity) names.

In V3, the same result is returned with the `verbose` flag set to include entity metadata.


## External entities passed in at prediction time

External entities give your LUIS app the ability to identify and label entities during runtime, which can be used as features to existing entities. This allows you to use your own separate and custom entity extractors before sending queries to your prediction endpoint. Because this is done at the query prediction endpoint, you don't need to retrain and publish your model.

The client application provides its own entity extractor by managing entity matching, determining the location of the matched entity within the utterance, and then sending that information with the request.

External entities are the mechanism for extending any entity type while still being used as signals to other models like roles, composite, and others.

This is useful for an entity that has data available only at query prediction runtime. Examples of this type of data are data that changes constantly or is specific to each user. You can extend a LUIS contact entity with external information from a user's contact list.

### Entity already exists in app

Consider a first utterance in a chat bot conversation where a user enters the following:

`Send Hazem a new message`

The request from the chat bot to LUIS can pass in information in the POST body about `Hazem` so it is directly matched as one of the user's contacts.

```json
"externalEntities": [
    {
        "entityName": "Contacts",
        "startIndex": 5,
        "entityLength": 5
    }
]
```

Here `Contacts` is an illustrative entity name in the app; `startIndex` and `entityLength` mark `Hazem` within the utterance.

319
315
320
-
The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.
316
+
The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.
321
317
322
318
### Second turn in conversation

In the previous utterance, the utterance uses `him` as a reference to `Hazem`. The chat bot can pass an external entity for `him` in the same way, so the reference is matched to the user's contact.

The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.

### Override existing model predictions

The `preferExternalEntities` options property specifies whether, when an external entity sent in the request overlaps with a predicted entity of the same name, LUIS uses the entity passed in or the entity predicted by the model.

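
A sketch of a V3 prediction POST body that sets this option (the entity name and positions are illustrative, reusing the `Hazem` example from above):

```python
import json

# V3 prediction POST body: the external entity overlaps any model prediction
# for the same span, and options.preferExternalEntities says which one wins.
body = {
    "query": "Send Hazem a new message",
    "options": {"preferExternalEntities": True},
    "externalEntities": [
        {"entityName": "Contacts", "startIndex": 5, "entityLength": 5}
    ],
}
payload = json.dumps(body)
print(payload)
```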

For example, consider the query `today I'm free`. LUIS detects `today` as a datetimeV2 entity.

If the user sends the external entity:

```json
{
    "entityName": "datetimeV2",
    "startIndex": 0,
    "entityLength": 5
}
```


If `preferExternalEntities` is set to `false`, LUIS returns a response as if the external entity were not sent, with the `datetimeV2` prediction coming from the model.

If `preferExternalEntities` is set to `true`, LUIS returns a response that includes the external entity.

#### Resolution

The _optional_ `resolution` property returns in the prediction response, allowing you to pass in the metadata associated with the external entity and then receive it back in the response.

The primary purpose is to extend prebuilt entities, but it is not limited to that entity type.

The `resolution` property can be a number, a string, an object, or an array:

* `"Dallas"`
* `{"text": "value"}`
* `12345`
* `["a", "b", "c"]`

## Dynamic lists passed in at prediction time

Dynamic lists allow you to extend an existing trained and published list entity, already in the LUIS app.

Use this feature when your list entity values need to change periodically.


Send in a JSON body that adds a new sublist with synonyms to the list.

The prediction response includes that list entity, with all the other predicted entities, because it is defined in the request.

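
A sketch of such a request body, assuming the V3 `dynamicLists` request shape (the list entity name, sublist, and synonyms are illustrative):

```python
import json

# Extend a published "Cities" list entity at prediction time with a new
# sublist and its synonyms; nothing is retrained or republished.
body = {
    "query": "fly to the emerald city",
    "dynamicLists": [
        {
            "listEntityName": "Cities",
            "requestLists": [
                {
                    "name": "Seattle",
                    "canonicalForm": "Seattle",
                    "synonyms": ["emerald city"],
                }
            ],
        }
    ],
}
payload = json.dumps(body)
print(payload)
```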

## Deprecation

The V2 API will not be deprecated for at least 9 months after the V3 preview.

## Next steps

Use the V3 API documentation to update existing REST calls to LUIS [endpoint](https://aka.ms/luis-api-v3) APIs.