articles/active-directory-b2c/whats-new-docs.md (9 additions, 8 deletions)
@@ -1,7 +1,7 @@
---
title: "What's new in Azure Active Directory business-to-customer (B2C)"
description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)."
-ms.date: 06/05/2024
+ms.date: 07/01/2024
ms.service: active-directory
ms.subservice: B2C
ms.topic: whats-new
@@ -19,6 +19,14 @@ manager: CelesteDG
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).

+## June 2024
+
+### Updated articles
+
+- [Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy](oauth2-error-technical-profile.md) - Error code updates
+- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md) - Python version update
+
## May 2024

### New articles
@@ -45,10 +53,3 @@ Welcome to what's new in Azure Active Directory B2C documentation. This article
-- [Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication](partner-nok-nok.md) - Updated Nok Nok instructions
-- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md) - Updated Transmit Security instructions
-- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md) - Updated claim resolvers and user journey
articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md (24 additions, 10 deletions)
@@ -43,7 +43,7 @@ You also want to avoid mixing different schema designs. Do not build half of you
## Use standard training before advanced training

-[Standard training](../how-to/train-model.md#training-modes) is free and faster than Advanced training, making it useful to quickly understand the effect of changing your training set or schema while building the model. Once you are satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
+[Standard training](../how-to/train-model.md#training-modes) is free and faster than Advanced training, making it useful to quickly understand the effect of changing your training set or schema while building the model. Once you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.

## Use the evaluation feature
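The standard-versus-advanced choice above is made per training job submitted to the authoring API. The following is a minimal sketch of such a request body, assuming the `trainingMode` field described in the linked training-modes article; the model label and split percentages are illustrative.

```json
{
  "modelLabel": "MyModel",
  "trainingMode": "standard",
  "evaluationOptions": {
    "kind": "percentage",
    "trainingSplitPercentage": 80,
    "testingSplitPercentage": 20
  }
}
```

Switching `"trainingMode"` to `"advanced"` retrains on the same data with the slower, higher-quality configuration, so you can compare the two evaluations before deploying.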
@@ -73,17 +73,31 @@ To resolve this, you would label a learned component in your training data for a
If you require the learned component, make sure that *ticket quantity* is only returned when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned *ticket quantity* entity is both a number and in the correct position.

-## Addressing casing inconsistencies
+## Addressing model inconsistencies

-If you have poor AI quality and determine the casing used in your training data is dissimilar to the testing data, you can use the `normalizeCasing` project setting. This normalizes the casing of utterances when training and testing the model. If you've migrated from LUIS, you might recognize that LUIS did this by default.
+If your model is overly sensitive to small grammatical changes, like casing or diacritics, you can systematically manipulate your dataset directly in Language Studio. To use these features, select the Settings tab on the left toolbar and locate the **Advanced project settings** section. First, you can ***Enable data transformation for casing***, which normalizes the casing of utterances when training, testing, and implementing your model. If you've migrated from LUIS, you might recognize that LUIS did this normalization by default. To access this feature via the API, set the `"normalizeCasing"` parameter to `true`. See the following example:

```json
{
    "projectFileVersion": "2022-10-01-preview",
    ...
    "settings": {
-        "confidenceThreshold": 0.5,
+        ...
        "normalizeCasing": true
+        ...
+    }
+    ...
+```
+Second, you can also use the **Advanced project settings** to ***Enable data augmentation for diacritics***, which generates variations of your training data for possible diacritic variations used in natural language. This feature is available for all languages, but it's especially useful for Germanic and Slavic languages, where users often write words using classic English characters instead of the correct characters. For example, the phrase "Navigate to the sports channel" in French is "Accédez à la chaîne sportive". When this feature is enabled, the phrase "Accedez a la chaine sportive" (without diacritic characters) is also included in the training dataset. If you enable this feature, note that the utterance count of your training set increases, and you might need to adjust your training data size accordingly. The current maximum utterance count after augmentation is 25,000. To access this feature via the API, set the `"augmentDiacritics"` parameter to `true`. See the following example:
+
+```json
+{
+    "projectFileVersion": "2022-10-01-preview",
+    ...
+    "settings": {
+        ...
+        "augmentDiacritics": true
+        ...
    }
    ...
```
@@ -125,9 +139,9 @@ Once the request is sent, you can track the progress of the training job in Lang
Model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.

-The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If there is a very low number of intents, the normalization layer has a very small range to work with. With a fairly large number of intents, the normalization is more effective.
+The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If there's a very low number of intents, the normalization layer has a very small range to work with. With a fairly large number of intents, the normalization is more effective.

-If this normalization doesn’t seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out of scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app, or if you are using an orchestrated architecture, consider merging apps that belong to the same domain together.
+If this normalization doesn’t seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out of scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app, or if you're using an orchestrated architecture, consider merging apps that belong to the same domain together.

## Debugging composed entities
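To make the normalization range above concrete: an app with 16 intents normalizes scores into roughly `[-4, 4]` (the square root of 16), while an app with only 4 intents is confined to `[-2, 2]`, which leaves much less room to separate in-scope predictions from out-of-scope ones.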
@@ -146,7 +160,7 @@ Data in a conversational language understanding project can have two data sets.
## Custom parameters for target apps and child apps

-If you are using [orchestrated apps](./app-architecture.md), you may want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can use the above parameter.
+If you're using [orchestrated apps](./app-architecture.md), you may want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can use the above parameter.
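A minimal sketch of such a runtime request follows, reusing the app names from the example above; the surrounding request shape (`kind`, `analysisInput`, `targetProjectKind`, `callingOptions`) is assumed from the orchestration prediction API and may differ slightly by API version.

```json
{
  "kind": "Conversation",
  "analysisInput": {
    "conversationItem": {
      "id": "1",
      "participantId": "user1",
      "text": "How do I reset my password?"
    }
  },
  "parameters": {
    "projectName": "Orchestrator",
    "deploymentName": "production",
    "targetProjectParameters": {
      "CQA1": {
        "targetProjectKind": "QuestionAnswering",
        "callingOptions": {
          "top": 1
        }
      }
    }
  }
}
```

Here, only `CQA1` receives an override; `CLU1` keeps its defaults because it has no entry in the dictionary.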
Once the request is sent, you can track the progress of the training job in Language Studio as usual.

Caveats:
-- The None Score threshold for the app (confidence threshold below which the topIntent is marked as None) when using this recipe should be set to 0. This is because this new recipe attributes a certain portion of the in domain probabiliities to out of domain so that the model is not incorrectly overconfident about in domain utterances. As a result, users may see slightly reduced confidence scores for in domain utterances as compared to the prod recipe.
-- This recipe is not recommended for apps with just two (2) intents, such as IntentA and None, for example.
-- This recipe is not recommended for apps with low number of utterances per intent. A minimum of 25 utterances per intent is highly recommended.
+- The None Score threshold for the app (confidence threshold below which the topIntent is marked as None) when using this recipe should be set to 0. This is because this new recipe attributes a certain portion of the in domain probabilities to out of domain so that the model isn't incorrectly overconfident about in domain utterances. As a result, users may see slightly reduced confidence scores for in domain utterances as compared to the prod recipe.
+- This recipe isn't recommended for apps with just two (2) intents, such as IntentA and None, for example.
+- This recipe isn't recommended for apps with a low number of utterances per intent. A minimum of 25 utterances per intent is highly recommended.
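If the None Score threshold in the first caveat corresponds to the `confidenceThreshold` project setting shown in the JSON snippets earlier, zeroing it for this recipe might look like the following sketch (other settings elided):

```json
{
  "projectFileVersion": "2022-10-01-preview",
  "settings": {
    "confidenceThreshold": 0
  }
}
```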
articles/ai-services/language-service/named-entity-recognition/how-to/skill-parameters.md (2 additions, 0 deletions)
@@ -24,9 +24,11 @@ The “inclusionList” parameter allows for you to specify which of the NER ent
The “exclusionList” parameter allows you to specify which of the NER entity tags, listed here [link to Preview API table], you would like excluded in the entity list output in your inference JSON listing out all words and categorizations recognized by the NER service. By default, all recognized entities are listed.

+<!--
## Example

To do: work with Bidisha & Mikael to update with a good example
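As a sketch of how the parameter might appear in a synchronous entity recognition request: the request shape below is the standard `EntityRecognition` task, and the `Numeric` tag and sample text are only illustrations; use the tags from the Preview API table referenced above.

```json
{
  "kind": "EntityRecognition",
  "analysisInput": {
    "documents": [
      {
        "id": "1",
        "language": "en",
        "text": "We booked 2 rooms in Seattle for March 2024."
      }
    ]
  },
  "parameters": {
    "modelVersion": "latest",
    "exclusionList": ["Numeric"]
  }
}
```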
articles/ai-services/language-service/question-answering/tutorials/active-learning.md (1 addition, 1 deletion)
@@ -24,7 +24,7 @@ This tutorial shows you how to enhance your custom question answering project wi
These variations, when added as alternate questions to the relevant question answer pair, help to optimize the project to answer real world user queries. You can manually add alternate questions to question answer pairs through the editor. At the same time, you can also use the active learning feature to generate active learning suggestions based on user queries. The active learning feature, however, requires that the project receives regular user traffic to generate suggestions.

-## Enable active learning
+## Use active learning

Active learning is turned on by default for custom question answering enabled resources.
* Conversation summarization takes structured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
-* Conversation summarization accepts text in English. For more information, see [language support](language-support.md?tabs=conversation-summarization).
+* Conversation summarization works with various spoken languages. For more information, see [language support](language-support.md?tabs=conversation-summarization).