description: Question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used in combination allows you to find and fix problems when they occur.
ms.service: azure-ai-language
ms.topic: conceptual
ms.date: 12/19/2023
ms.custom: language-service-question-answering
---
# Azure resources for custom question answering
Custom question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how resources are used _in combination_ allows you to find and fix problems when they occur.
## Resource planning
> [!TIP]
> "Knowledge base" and "project" are equivalent terms in custom question answering and can be used interchangeably.
When you first develop a project, in the prototype phase, it is common to have a single resource for both testing and production.
Typically there are three parameters you need to consider:
* **The throughput you need**:
* The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.
* This should also influence your **Azure AI Search** SKU selection; see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Azure AI Search [capacity](../../../../search/search-capacity-planning.md) with replicas.
For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
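The index arithmetic above can be sketched as a quick planning helper. This is only a sketch of the guidance in this section, not an Azure API; the function name and logic are illustrative:

```python
def max_published_projects(allowed_indexes: int, same_language: bool) -> int:
    """Rough planning arithmetic for Azure AI Search indexes (sketch only)."""
    if same_language:
        # One index is shared for authoring/testing across all projects;
        # each published project takes one index of its own.
        return allowed_indexes - 1
    # Projects in different languages can't share the test index,
    # so each project effectively consumes two indexes.
    return allowed_indexes // 2

print(max_published_projects(15, same_language=True))   # 14
print(max_published_projects(15, same_language=False))  # 7
```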
* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in custom question answering.
The following table gives you some high-level guidelines.
## Recommended settings
The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.
## Keys in custom question answering
Your custom question answering feature deals with two kinds of keys: **authoring keys** and **Azure AI Search keys** used to access the service in the customer’s subscription.
Use these keys when making requests to the service through APIs.
|Name|Location|Purpose|
|--|--|--|
|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language service APIs. These APIs let you edit the questions and answers in your project, and publish your project. These keys are created when you create a new resource.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.|
|Azure AI Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure AI Search service deployed in the user’s Azure subscription. When you associate an Azure AI Search resource with the custom question answering feature, the admin key is automatically passed to custom question answering. <br><br>You can find these keys on the **Azure AI Search** resource on the **Keys** page.|
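As a hedged sketch of how the authoring/subscription key is used in practice, the snippet below assembles a request to the prediction (get-answers) endpoint. The endpoint, key, project name, deployment name, and API version are placeholders or assumptions to verify against the REST reference for your resource:

```python
import json

# Placeholder values -- substitute your own resource endpoint, key, and project.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-key-from-keys-and-endpoint>"

def build_get_answers_request(question: str, project: str = "my-project"):
    """Assemble the URL, headers, and JSON body for a query-knowledgebases call."""
    url = (
        f"{ENDPOINT}/language/:query-knowledgebases"
        f"?projectName={project}&deploymentName=production"
        f"&api-version=2021-10-01"  # assumed API version
    )
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,  # the Language resource key
        "Content-Type": "application/json",
    }
    body = json.dumps({"question": question, "top": 3})
    return url, headers, body

url, headers, body = build_get_answers_request("How do I reset my password?")
```

You would then send this with any HTTP client; the Azure AI Search admin key is never sent by your client, since it is exchanged between the services when the resources are associated.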
### Find authoring keys in the Azure portal
You can view and reset your authoring keys from the Azure portal.
1. Go to the language resource in the Azure portal and select the resource that has the *Azure AI services* type:
> [!div class="mx-imgBorder"]
> 
2. Go to **Keys and Endpoint**:
## Resource purposes
Each Azure resource created with the custom question answering feature has a specific purpose:
* Language resource (also referred to as a Text Analytics resource, depending on the context of where you are evaluating the resource)
* Azure AI Search resource
## Next steps
* Learn about the custom question answering [projects](../How-To/manage-knowledge-base.md)
File: articles/ai-services/language-service/question-answering/concepts/best-practices.md
---
title: Best practices - custom question answering
description: Use these best practices to improve your project and provide better results to your application/chat bot's end users.
ms.service: azure-ai-language
author: jboback
ms.date: 12/19/2023
ms.custom: language-service-question-answering
---
# Custom question answering best practices
Use these best practices to improve your project and provide better results to your client application or chat bot's end users.
## Extraction
Custom question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
## Creating good questions and answers
We’ve used the following list of question and answer pairs as a representation of a project to highlight best practices when authoring projects for custom question answering.
| Question | Answer |
|----------|----------|
### When should you add alternate questions to question and answer pairs?
Custom question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to the question in the project. For example, consider the following question answer pair:
*Question: What is the price of Microsoft Stock?*
*Answer: $200.*
There are certain scenarios that require the customer to add an alternate question.
Users can add as many alternate questions as they want, but only the first five are considered for core ranking. However, the rest are useful for exact match scenarios. It's also recommended to keep the alternate questions with different or distinct intents at the top for better relevance and score.
Semantic understanding in custom question answering should be able to take care of similar alternate questions.
The return on investment starts diminishing once you exceed 10 questions. Even if you’re adding more than 10 alternate questions, try to make the initial 10 as semantically dissimilar as possible, so that all kinds of intents for the answer are captured by those 10 questions. For the project at the beginning of this section, in question answer pair #1, adding alternate questions such as “How can I buy a car” or “I wanna buy a car” isn’t required, whereas adding alternate questions such as “How to purchase a car” or “What are the options for buying a vehicle” can be useful.
### When to add synonyms to a project?
Custom question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker where synonyms are shared across projects for the entire service.
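Project-level synonyms are supplied as groups of interchangeable terms. The shape below mirrors the word-alterations format used by the question answering synonyms APIs; the exact field names are an assumption to verify against the current REST reference:

```python
# Each "alterations" list groups terms the ranker should treat as
# interchangeable. Field names follow the word-alterations format -- an
# assumption; verify against the current REST API reference before use.
synonyms_payload = {
    "value": [
        {"alterations": ["fix problems", "troubleshoot", "diagnostic"]},
        {"alterations": ["mfa", "multi factor authentication"]},
    ]
}
print(len(synonyms_payload["value"]))  # 2 synonym groups
```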
For better relevance, you need to provide a list of acronyms that the end user intends to use interchangeably. The following is a list of acceptable acronyms:
### How are question answer pairs prioritized for multi-turn questions?
When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [custom question answering REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted.
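A minimal sketch of the `context` object described above, as it would appear in a get-answers request body (`previousUserQuery` is an optional companion property; the example values are hypothetical):

```python
# Follow-up query that tells the ranker which answer the user saw last, so
# the children, sibling, and grandchildren pairs of QnA 4 get boosted.
follow_up_query = {
    "question": "How many sizes are available?",
    "context": {
        "previousQnAId": 4,  # ID of the last top answer returned
        "previousUserQuery": "Do you sell shoes?",  # optional prior question
    },
}
```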
### How are accents treated?
Chit-chat is supported for several predefined personalities:
## Searching for answers
The custom question answering REST API uses both the questions and the answer to search for the best answers to a user's query.
### Searching questions only when answer isn’t relevant
### Choosing Ranker type
By default, custom question answering searches through both questions and answers. If you want to search through questions only to generate an answer, set `RankerType=QuestionOnly` in the POST body of the REST API request.
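A minimal sketch of a POST body using the question-only ranker. The surrounding fields and exact casing of `rankerType` should be checked against the REST reference for your API version:

```python
import json

# Query body that restricts search to questions only when generating an answer.
query_body = {
    "question": "How to purchase a car?",
    "top": 3,
    "rankerType": "QuestionOnly",  # default ranker searches questions + answers
}
payload = json.dumps(query_body)
```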
### Add alternate questions
## Collaborate
Custom question answering allows users to collaborate on a project. Users need access to the associated Azure resource group in order to access the projects. Some organizations may want to outsource the project editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical language resources with identical custom question answering projects in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the project contents are exported and transferred with an [import-export](../how-to/migrate-knowledge-base.md) process to the language resource of the approver that will finally deploy the project and update the endpoint.
When a user query is matched against a project (also known as a knowledge base), custom question answering returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.
The confidence score is a number between 0 and 100. A score of 100 is likely an exact match, while a score of 0 means that no matching answer was found. The higher the score, the greater the confidence in the answer. For a given query, multiple answers can be returned; in that case, the answers are returned in order of decreasing confidence score.
The following table indicates the typical confidence associated with a given score.
## Choose a score threshold
The table above shows the range of scores that can occur when querying with custom question answering. However, since every project is different and has different types of words, intents, and goals, we recommend that you test and choose the threshold that works best for you. By default, the threshold is set to `0`, so that all possible answers are returned. The recommended threshold that should work for most projects is **50**.
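As a hedged sketch, the threshold can also be applied per query on the service side. Note that the get-answers request body commonly expresses the threshold on a 0-to-1 scale, so the recommended **50** corresponds to `0.5`; the field name `confidenceScoreThreshold` and the scale are assumptions to verify against the API version you use:

```python
RECOMMENDED_THRESHOLD = 50  # on the 0-100 scale used in this article

# Request body with the same threshold on a 0-1 scale (assumed field name
# `confidenceScoreThreshold`; verify against your API version).
query_body = {
    "question": "How do I return an item?",
    "confidenceScoreThreshold": RECOMMENDED_THRESHOLD / 100,
}
print(query_body["confidenceScoreThreshold"])  # 0.5
```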
When choosing your threshold, keep in mind the balance between **Accuracy** and **Coverage**, and adjust your threshold based on your requirements.