
Commit 0217d05

Merge pull request #268042 from jboback/QAtoCQA
Name fixes for CQA
2 parents 969260b + cf763d4 commit 0217d05

40 files changed: +280 -281 lines changed

articles/ai-services/language-service/question-answering/concepts/azure-resources.md

Lines changed: 12 additions & 12 deletions
@@ -1,5 +1,5 @@
---
-title: Azure resources - question answering
+title: Azure resources - custom question answering
description: Question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used in combination allows you to find and fix problems when they occur.
ms.service: azure-ai-language
ms.topic: conceptual
@@ -9,14 +9,14 @@ ms.date: 12/19/2023
ms.custom: language-service-question-answering
---

-# Azure resources for question answering
+# Azure resources for custom question answering

-Question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how resources are used _in combination_ allows you to find and fix problems when they occur.
+Custom question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how resources are used _in combination_ allows you to find and fix problems when they occur.

## Resource planning

> [!TIP]
-> "Knowledge base" and "project" are equivalent terms in question answering and can be used interchangeably.
+> "Knowledge base" and "project" are equivalent terms in custom question answering and can be used interchangeably.

When you first develop a project, in the prototype phase, it is common to have a single resource for both testing and production.

@@ -32,7 +32,7 @@ Typically there are three parameters you need to consider:

* **The throughput you need**:

-    * The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.
+    * The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.

    * This should also influence your **Azure AI Search** SKU selection, see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Azure AI Search [capacity](../../../../search/search-capacity-planning.md) with replicas.

@@ -45,7 +45,7 @@ Typically there are three parameters you need to consider:

For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.

-* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in question answering.
+* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in custom question answering.

The following table gives you some high-level guidelines.

@@ -58,10 +58,10 @@ The following table gives you some high-level guidelines.
## Recommended settings


-The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.
+The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.

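If you provision the recommended search tier programmatically rather than in the portal, a minimal sketch along the following lines applies. It assumes the `azure-mgmt-search` management SDK; the subscription ID, resource group, service name, and region are placeholders, and the exact model surface should be verified against the current package.

```python
# Minimal sketch (not an official sample): create an S1 ("standard") Azure AI
# Search service with one replica, matching the recommendation above.
# All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.search import SearchManagementClient
from azure.mgmt.search.models import SearchService, Sku

client = SearchManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.services.begin_create_or_update(
    resource_group_name="<resource-group>",
    search_service_name="<search-service-name>",
    service=SearchService(
        location="<region>",           # placeholder region
        sku=Sku(name="standard"),      # "standard" is the S1 tier
        replica_count=1,               # one instance, per the recommendation above
        partition_count=1,
    ),
)
print(poller.result().provisioning_state)
```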

-## Keys in question answering
+## Keys in custom question answering

Your custom question answering feature deals with two kinds of keys: **authoring keys** and **Azure AI Search keys** used to access the service in the customer’s subscription.

@@ -70,7 +70,7 @@ Use these keys when making requests to the service through APIs.
|Name|Location|Purpose|
|--|--|--|
|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language service APIs. These APIs let you edit the questions and answers in your project, and publish your project. These keys are created when you create a new resource.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.|
-|Azure AI Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure AI Search service deployed in the user’s Azure subscription. When you associate an Azure AI Search resource with the custom question answering feature, the admin key is automatically passed to question answering. <br><br>You can find these keys on the **Azure AI Search** resource on the **Keys** page.|
+|Azure AI Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure AI Search service deployed in the user’s Azure subscription. When you associate an Azure AI Search resource with the custom question answering feature, the admin key is automatically passed to custom question answering. <br><br>You can find these keys on the **Azure AI Search** resource on the **Keys** page.|
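
To illustrate how the authoring/subscription key is used at request time, a hedged sketch of a prediction (get-answers) call follows. The endpoint, key, project, and deployment names are placeholders, and the `2021-10-01` API version is an assumption to verify against the REST reference.

```python
# Minimal sketch: query a deployed project, authenticating with the Language
# resource key in the Ocp-Apim-Subscription-Key header. The Azure AI Search
# admin key is exchanged between the services and is not sent by client code.
import requests

ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"

response = requests.post(
    f"{ENDPOINT}/language/:query-knowledgebases",
    params={
        "projectName": "<your-project>",     # placeholder
        "deploymentName": "production",
        "api-version": "2021-10-01",         # assumed API version
    },
    headers={"Ocp-Apim-Subscription-Key": "<your-resource-key>"},
    json={"question": "How do I set up my first project?", "top": 3},
)
response.raise_for_status()
for answer in response.json()["answers"]:
    print(answer["confidenceScore"], answer["answer"])
```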

### Find authoring keys in the Azure portal

@@ -79,7 +79,7 @@ You can view and reset your authoring keys from the Azure portal, where you adde
1. Go to the language resource in the Azure portal and select the resource that has the *Azure AI services* type:

> [!div class="mx-imgBorder"]
-> ![Screenshot of question answering resource list.](../../../qnamaker/media/qnamaker-how-to-setup-service/resources-created-question-answering.png)
+> ![Screenshot of custom question answering resource list.](../../../qnamaker/media/qnamaker-how-to-setup-service/resources-created-question-answering.png)

2. Go to **Keys and Endpoint**:

@@ -92,7 +92,7 @@ In custom question answering, both the management and the prediction services ar

## Resource purposes

-Each Azure resource created with Custom question answering feature has a specific purpose:
+Each Azure resource created with the custom question answering feature has a specific purpose:

* Language resource (Also referred to as a Text Analytics resource depending on the context of where you are evaluating the resource.)
* Azure AI Search resource
@@ -120,4 +120,4 @@ With custom question answering, you have a choice to set up your service for pro

## Next steps

-* Learn about the question answering [projects](../How-To/manage-knowledge-base.md)
+* Learn about the custom question answering [projects](../How-To/manage-knowledge-base.md)

articles/ai-services/language-service/question-answering/concepts/best-practices.md

Lines changed: 12 additions & 12 deletions
@@ -1,5 +1,5 @@
---
-title: Best practices - question answering
+title: Best practices - custom question answering
description: Use these best practices to improve your project and provide better results to your application/chat bot's end users.
ms.service: azure-ai-language
author: jboback
@@ -9,17 +9,17 @@ ms.date: 12/19/2023
ms.custom: language-service-question-answering
---

-# Question answering best practices
+# Custom question answering best practices

Use these best practices to improve your project and provide better results to your client application or chat bot's end users.

## Extraction

-Question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
+Custom question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.

## Creating good questions and answers

-We’ve used the following list of question and answer pairs as representation of a project to highlight best practices when authoring projects for question answering.
+We’ve used the following list of question and answer pairs as a representation of a project to highlight best practices when authoring projects for custom question answering.

| Question | Answer |
|----------|----------|
@@ -32,7 +32,7 @@ We’ve used the following list of question and answer pairs as representation o

### When should you add alternate questions to question and answer pairs?

-Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to the question in the project. For example, consider the following question answer pair:
+Custom question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to the question in the project. For example, consider the following question answer pair:

*Question: What is the price of Microsoft Stock?*
*Answer: $200.*
@@ -53,13 +53,13 @@ There are certain scenarios that require the customer to add an alternate questi

Users can add as many alternate questions as they want, but only the first 5 will be considered for core ranking. However, the rest will be useful for exact match scenarios. It is also recommended to keep alternate questions with distinct intents at the top for better relevance and score.

-Semantic understanding in question answering should be able to take care of similar alternate questions.
+Semantic understanding in custom question answering should be able to take care of similar alternate questions.

The return on investment will start diminishing once you exceed 10 questions. Even if you’re adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all kinds of intents for the answer are captured by these 10 questions. For the project at the beginning of this section, in question answer pair #1, adding alternate questions such as “How can I buy a car”, “I wanna buy a car” aren’t required. Whereas adding alternate questions such as “How to purchase a car”, “What are the options of buying a vehicle” can be useful.

### When to add synonyms to a project?

-Question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker where synonyms are shared across projects for the entire service.
+Custom question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker where synonyms are shared across projects for the entire service.

For better relevance, you need to provide a list of acronyms that the end user intends to use interchangeably. The following is a list of acceptable acronyms:

@@ -81,7 +81,7 @@ Question answering takes casing into account but it's intelligent enough to unde

### How are question answer pairs prioritized for multi-turn questions?

-When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [Question Answering REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted.
+When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [custom question answering REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAId`, all the related `QnAs` are boosted.
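
As an illustration, a follow-up query that carries the previous top answer's ID might be posted like this. This is a sketch, not an official sample: recent REST versions spell the body properties `previousQnaId` and `previousUserQuery` (check the reference linked above), and all IDs, names, and keys are placeholders.

```python
# Minimal sketch: boost child/sibling/grandchild question answer pairs by
# passing the ID of the last top answer in the `context` object.
import requests

body = {
    "question": "How do I renew it?",          # follow-up user query
    "context": {
        "previousQnaId": 42,                   # placeholder: last top answer's ID
        "previousUserQuery": "What is a software license?",
    },
}

response = requests.post(
    "https://<your-language-resource>.cognitiveservices.azure.com/language/:query-knowledgebases",
    params={"projectName": "<your-project>", "deploymentName": "production",
            "api-version": "2021-10-01"},      # assumed API version
    headers={"Ocp-Apim-Subscription-Key": "<your-resource-key>"},
    json=body,
)
print(response.json()["answers"][0]["answer"])
```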

### How are accents treated?

@@ -101,7 +101,7 @@ Chit-chat is supported in [many languages](../how-to/chit-chat.md#language-suppo

Chit-chat is supported for several predefined personalities:

-|Personality |Question answering dataset file |
+|Personality |Custom question answering dataset file |
|---------|-----|
|Professional |[qna_chitchat_professional.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_professional.tsv) |
|Friendly |[qna_chitchat_friendly.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_friendly.tsv) |
@@ -129,7 +129,7 @@ If you add your own chit-chat question answer pairs, make sure to add metadata s

## Searching for answers

-Question answering REST API uses both questions and the answer to search for best answers to a user's query.
+The custom question answering REST API uses both questions and the answer to search for the best answers to a user's query.

### Searching questions only when answer isn’t relevant

@@ -147,7 +147,7 @@ The default [confidence score](confidence-score.md) that is used as a threshold

### Choosing Ranker type

-By default, question answering searches through questions and answers. If you want to search through questions only, to generate an answer, use the `RankerType=QuestionOnly` in the POST body of the REST API request.
+By default, custom question answering searches through questions and answers. If you want to search through questions only to generate an answer, set `RankerType=QuestionOnly` in the POST body of the REST API request.
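
For example, a question-only search might be requested as follows. This is a hedged sketch: recent REST versions spell the body property `rankerType`, and the endpoint, key, and project names are placeholders.

```python
# Minimal sketch: rank against questions only instead of questions and answers.
import requests

response = requests.post(
    "https://<your-language-resource>.cognitiveservices.azure.com/language/:query-knowledgebases",
    params={"projectName": "<your-project>", "deploymentName": "production",
            "api-version": "2021-10-01"},      # assumed API version
    headers={"Ocp-Apim-Subscription-Key": "<your-resource-key>"},
    json={"question": "How do I buy a car?",
          "rankerType": "QuestionOnly"},       # the default ranker searches both
)
print(response.json()["answers"][0]["answer"])
```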

### Add alternate questions

@@ -187,7 +187,7 @@ Since these two questions are phrased with very similar words, this similarity c

## Collaborate

-Question answering allows users to collaborate on a project. Users need access to the associated Azure resource group in order to access the projects. Some organizations may want to outsource the project editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical language resources with identical question answering projects in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the project contents are exported and transferred with an [import-export](../how-to/migrate-knowledge-base.md) process to the language resource of the approver that will finally deploy the project and update the endpoint.
+Custom question answering allows users to collaborate on a project. Users need access to the associated Azure resource group in order to access the projects. Some organizations may want to outsource the project editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical language resources with identical custom question answering projects in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the project contents are exported and transferred with an [import-export](../how-to/migrate-knowledge-base.md) process to the language resource of the approver that will finally deploy the project and update the endpoint.

## Active learning

articles/ai-services/language-service/question-answering/concepts/confidence-score.md

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
---
-title: Confidence score - question answering
+title: Confidence score - custom question answering
titleSuffix: Azure AI services
-description: When a user query is matched against a knowledge base, question answering returns relevant answers, along with a confidence score.
+description: When a user query is matched against a knowledge base, custom question answering returns relevant answers, along with a confidence score.
#services: cognitive-services
manager: nitinme
author: jboback
@@ -14,7 +14,7 @@ ms.custom: language-service-question-answering

# Confidence score

-When a user query is matched against a project (also known as a knowledge base), question answering returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.
+When a user query is matched against a project (also known as a knowledge base), custom question answering returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.

The confidence score is a number between 0 and 100. A score of 100 is likely an exact match, while a score of 0 means that no matching answer was found. The higher the score, the greater the confidence in the answer. For a given query, there could be multiple answers returned. In that case, the answers are returned in order of decreasing confidence score.

@@ -31,7 +31,7 @@ The following table indicates typical confidence associated for a given score.

## Choose a score threshold

-The table above shows the range of scores that can occur when querying with question answering. However, since every project is different, and has different types of words, intents, and goals- we recommend you test and choose the threshold that best works for you. By default the threshold is set to `0`, so that all possible answers are returned. The recommended threshold that should work for most projects, is **50**.
+The table above shows the range of scores that can occur when querying with custom question answering. However, since every project is different and has different types of words, intents, and goals, we recommend that you test and choose the threshold that works best for you. By default the threshold is set to `0`, so that all possible answers are returned. The recommended threshold that should work for most projects is **50**.

When choosing your threshold, keep in mind the balance between **Accuracy** and **Coverage**, and adjust your threshold based on your requirements.
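
Applied to a get-answers request, the recommended threshold might be set as in the sketch below. One assumption to verify: recent REST versions take `confidenceScoreThreshold` on a 0-1 scale, so the recommended **50** on this page's 0-100 scale corresponds to `0.5`. The endpoint, key, and project names are placeholders.

```python
# Minimal sketch: filter out answers below the recommended threshold.
# confidenceScoreThreshold is assumed to use a 0-1 scale (50 -> 0.5).
import requests

response = requests.post(
    "https://<your-language-resource>.cognitiveservices.azure.com/language/:query-knowledgebases",
    params={"projectName": "<your-project>", "deploymentName": "production",
            "api-version": "2021-10-01"},      # assumed API version
    headers={"Ocp-Apim-Subscription-Key": "<your-resource-key>"},
    json={"question": "How do I reset my password?",
          "confidenceScoreThreshold": 0.5},
)
for answer in response.json()["answers"]:
    print(answer["confidenceScore"], answer["answer"])
```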
