articles/ai-services/language-service/custom-named-entity-recognition/overview.md (+2 −2)

```diff
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: overview
-ms.date: 04/29/2025
+ms.date: 07/16/2025
 ms.author: lajanuar
 ms.custom: language-service-custom-ner
 ---
@@ -75,7 +75,7 @@ As you use custom NER, see the following reference documentation and samples for
 
 ## Responsible AI
 
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER](/azure/ai-foundry/responsible-ai/language-service/cner-transparency-note) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER](/azure/ai-foundry/responsible-ai/language-service/transparency-note) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
 
 [!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
```
articles/ai-services/language-service/overview.md (+4 −4)

```diff
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: overview
-ms.date: 06/21/2025
+ms.date: 07/16/2025
 ms.author: lajanuar
 ---
@@ -248,6 +248,6 @@ Use Language service containers to deploy API features on-premises. These Docker
 An AI system includes not only the technology, but also the people who use it, the people affected by it, and the deployment environment. Read the following articles to learn about responsible AI use and deployment in your systems:
 
-* [Transparency note for the Language service](/azure/ai-foundry/responsible-ai/text-analytics/transparency-note)
-* [Integration and responsible use](/azure/ai-foundry/responsible-ai/text-analytics/guidance-integration-responsible-use)
-* [Data, privacy, and security](/azure/ai-foundry/responsible-ai/text-analytics/data-privacy)
+* [Transparency note for the Language service](/azure/ai-foundry/responsible-ai/language-service/transparency-note)
+* [Integration and responsible use](/azure/ai-foundry/responsible-ai/language-service/guidance-integration-responsible-use)
+* [Data, privacy, and security](/azure/ai-foundry/responsible-ai/language-service/data-privacy)
```
articles/ai-services/language-service/personally-identifiable-information/how-to/redact-document-pii.md (+1 −6)

```diff
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 03/05/2025
+ms.date: 07/16/2025
 ms.author: lajanuar
 ms.custom: language-service-pii
 ---
@@ -65,11 +65,6 @@ A native document refers to the file format used to create the original document
 > macOS `curl -V`
 > Linux: `curl --version`
 
-* If cURL isn't installed, here are installation links for your platform:
-
-  * [Windows](https://curl.haxx.se/windows/).
-  * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows).
-
 * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
 
 * An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
```
articles/ai-services/language-service/question-answering/concepts/best-practices.md (+22 −22)

```diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
 author: laujan
 ms.author: lajanuar
 ms.topic: conceptual
-ms.date: 06/21/2025
+ms.date: 07/16/2025
 ms.custom: language-service-question-answering
 ---
@@ -19,7 +19,7 @@ Custom question answering is continually improving the algorithms that extract q
 ## Creating good questions and answers
 
-We’ve used the following list of question and answer pairs as representation of a project to highlight best practices when authoring projects for custom question answering.
+We've used the following list of question and answer pairs as representation of a project to highlight best practices when authoring projects for custom question answering.
 
 | Question | Answer |
 |----------|----------|
@@ -39,23 +39,23 @@ Custom question answering employs a transformer-based ranker that takes care of
 The service can return the expected response for semantically similar queries such as:
 
-“How much is Microsoft stock worth?
-“How much is Microsoft share value?”
-“How much does a Microsoft share cost?”
-“What is the market value of a Microsoft stock?”
-“What is the market value of a Microsoft share?”
+"How much is Microsoft stock worth?
+"How much is Microsoft share value?"
+"How much does a Microsoft share cost?"
+"What is the market value of a Microsoft stock?"
+"What is the market value of a Microsoft share?"
 
-However, it’s important to understand that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
+However, it's important to understand that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
 
-There are certain scenarios that require the customer to add an alternate question. When it’s already verified that for a particular query the correct answer isn’t returned despite being present in the project, we advise adding that query as an alternate question to the intended question answer pair.
+There are certain scenarios that require the customer to add an alternate question. When it's already verified that for a particular query the correct answer isn't returned despite being present in the project, we advise adding that query as an alternate question to the intended question answer pair.
 
 ### How many alternate questions per question answer pair is optimal?
 
 Users can add as many alternate questions as they want, but only first 5 will be considered for core ranking. However, the rest will be useful for exact match scenarios. It is also recommended to keep the different intent/distinct alternate questions at the top for better relevance and score.
 
 Semantic understanding in custom question answering should be able to take care of similar alternate questions.
 
-The return on investment will start diminishing once you exceed 10 questions. Even if you’re adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all kinds of intents for the answer are captured by these 10 questions. For the project at the beginning of this section, in question answer pair #1, adding alternate questions such as “How can I buy a car”, “I wanna buy a car” aren’t required. Whereas adding alternate questions such as “How to purchase a car”, “What are the options of buying a vehicle” can be useful.
+The return on investment will start diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all kinds of intents for the answer are captured by these 10 questions. For the project at the beginning of this section, in question answer pair #1, adding alternate questions such as "How can I buy a car", "I wanna buy a car" aren't required. Whereas adding alternate questions such as "How to purchase a car", "What are the options of buying a vehicle" can be useful.
 
 ### When to add synonyms to a project?
@@ -67,17 +67,17 @@ For better relevance, you need to provide a list of acronyms that the end user i
 * `ID` – Identification
 * `ETA` – Estimated time of Arrival
 
-Other than acronyms, if you think your words are similar in context of a particular domain and generic language models won’t consider them similar, it’s better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as “my car’s audio isn’t working” and the project has questions on “fixing audio for car X”, then we need to add ‘X’ and ‘car’ as synonyms.
+Other than acronyms, if you think your words are similar in context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the project has questions on "fixing audio for car X", then we need to add 'X' and 'car' as synonyms.
 
-The transformer-based model already takes care of most of the common synonym cases, for example: `Purchase – Buy`, `Sell - Auction`, `Price – Value`. For another example, consider the following question answer pair: Q: “What is the price of Microsoft Stock?” A: “$200”.
+The transformer-based model already takes care of most of the common synonym cases, for example: `Purchase – Buy`, `Sell - Auction`, `Price – Value`. For another example, consider the following question answer pair: Q: "What is the price of Microsoft Stock?" A: "$200".
 
-If we receive user queries like “Microsoft stock value”,” Microsoft share value”, “Microsoft stock worth”, “Microsoft share worth”, “stock value”, etc., you should be able to get the correct answer even though these queries have words like "share", "value", and "worth", which aren’t originally present in the project.
+If we receive user queries like "Microsoft stock value"," Microsoft share value", "Microsoft stock worth", "Microsoft share worth", "stock value", etc., you should be able to get the correct answer even though these queries have words like "share", "value", and "worth", which aren't originally present in the project.
 
 Special characters are not allowed in synonyms.
 
 ### How are lowercase/uppercase characters treated?
 
-Question answering takes casing into account but it's intelligent enough to understand when it’s to be ignored. You shouldn’t be seeing any perceivable difference due to wrong casing.
+Question answering takes casing into account but it's intelligent enough to understand when it's to be ignored. You shouldn't be seeing any perceivable difference due to wrong casing.
 
 ### How are question answer pairs prioritized for multi-turn questions?
@@ -89,7 +89,7 @@ Accents are supported for all major European languages. If the query has an inco
 ### How is punctuation in a user query treated?
 
-Punctuation is ignored in a user query before sending it to the ranking stack. Ideally it shouldn’t impact the relevance scores. Punctuation that is ignored is as follows: `,?:;\"'(){}[]-+。./!*؟`
+Punctuation is ignored in a user query before sending it to the ranking stack. Ideally it shouldn't impact the relevance scores. Punctuation that is ignored is as follows: `,?:;\"'(){}[]-+。./!*؟`
 
 ## Chit-Chat
```
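The punctuation handling described in the hunk above can be approximated locally, for example to pre-normalize queries before logging or offline evaluation. This is an illustrative sketch, not the service's actual preprocessing; the character set is copied verbatim from the article.

```python
# Illustrative sketch only: strip the punctuation set that the article says
# the ranking stack ignores. This is not the service's real pipeline.
IGNORED_PUNCTUATION = ",?:;\\\"'(){}[]-+。./!*؟"

def normalize_query(query: str) -> str:
    """Remove the ignored punctuation and collapse extra whitespace."""
    cleaned = query.translate(str.maketrans("", "", IGNORED_PUNCTUATION))
    return " ".join(cleaned.split())

print(normalize_query("How much is Microsoft stock worth?"))
# → How much is Microsoft stock worth
```

As the article suggests, removing these characters should not change which answer ranks first, so a normalization like this is mainly useful for deduplicating logged queries.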
```diff
@@ -109,7 +109,7 @@ Chit-chat is supported for several predefined personalities:
-The responses range from formal to informal and irreverent. You should select the personality that is closest aligned with the tone you want for your bot. You can view the [datasets](https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets), and choose one that serves as a base for your bot, and then customize the responses.
+The responses range from formal to informal and irreverent. You should select the personality that is closest aligned with the tone you want for your bot. You can view the datasets, and choose one that serves as a base for your bot, and then customize the responses.
 
 ### Edit bot-specific questions
@@ -131,15 +131,15 @@ If you add your own chit-chat question answer pairs, make sure to add metadata s
 The custom question answering REST API uses both questions and the answer to search for best answers to a user's query.
 
-### Searching questions only when answer isn’t relevant
+### Searching questions only when answer isn't relevant
 
 Use the [`RankerType=QuestionOnly`](#choosing-ranker-type) if you don't want to search answers.
 
-An example of this is when the project is a catalog of acronyms as questions with their full form as the answer. The value of the answer won’t help to search for the appropriate answer.
+An example of this is when the project is a catalog of acronyms as questions with their full form as the answer. The value of the answer won't help to search for the appropriate answer.
 
 ## Ranking/Scoring
 
-Make sure you’re making the best use of the supported ranking features. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
+Make sure you're making the best use of the supported ranking features. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
 
 ### Choosing a threshold
```
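The thresholding idea behind "Choosing a threshold" can be sketched client-side: keep only candidate answers whose confidence clears the bar, otherwise fall back to a default reply. The `Answer` class, field names, and `DEFAULT_ANSWER` text here are hypothetical illustrations, not the SDK's types.

```python
from dataclasses import dataclass

@dataclass
class Answer:          # hypothetical stand-in for a service answer record
    text: str
    confidence: float  # 0.0–1.0, as described in the article

DEFAULT_ANSWER = "No good match found in the project."

def pick_answer(candidates: list[Answer], threshold: float = 0.5) -> str:
    """Return the highest-confidence answer above the threshold, else a fallback."""
    viable = [a for a in candidates if a.confidence >= threshold]
    if not viable:
        return DEFAULT_ANSWER
    return max(viable, key=lambda a: a.confidence).text

answers = [Answer("$200", 0.82), Answer("See the FAQ.", 0.31)]
print(pick_answer(answers))       # → $200
print(pick_answer(answers, 0.9))  # nothing clears the bar, so the fallback is returned
```

Because, as the article notes, confidence varies with how far the query sits from the authored question, thresholds are usually tuned empirically per project rather than fixed globally.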
```diff
@@ -160,11 +160,11 @@ Alternate questions to improve the likelihood of a match with a user query. Alte
 ### Use metadata tags to filter questions and answers
 
-Metadata adds the ability for a client application to know it shouldn’t take all answers but instead to narrow down the results of a user query based on metadata tags. The project answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
+Metadata adds the ability for a client application to know it shouldn't take all answers but instead to narrow down the results of a user query based on metadata tags. The project answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
 
 ### Use synonyms
 
-While there’s some support for synonyms in the English language, use case-insensitive [word alterations](../tutorials/adding-synonyms.md) to add synonyms to keywords that take different forms.
+While there's some support for synonyms in the English language, use case-insensitive [word alterations](../tutorials/adding-synonyms.md) to add synonyms to keywords that take different forms.
 
 |Original word|Synonyms|
 |--|--|
```
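The metadata-filtering behavior described above (the *Location: Seattle* versus *Location: Redmond* example) can be illustrated with a small self-contained sketch. The dictionaries below are hypothetical stand-ins, not the service's project schema or API.

```python
# Hypothetical project data: the same question carries different answers,
# distinguished only by metadata tags (illustration, not the real schema).
qna_pairs = [
    {"question": "where is parking located",
     "answer": "Level 2 garage on 5th Ave.",
     "metadata": {"Location": "Seattle"}},
    {"question": "where is parking located",
     "answer": "Surface lot behind building B.",
     "metadata": {"Location": "Redmond"}},
]

def filter_by_metadata(pairs, required):
    """Keep only pairs whose metadata contains every required key/value."""
    return [p for p in pairs
            if all(p["metadata"].get(k) == v for k, v in required.items())]

matches = filter_by_metadata(qna_pairs, {"Location": "Redmond"})
print(matches[0]["answer"])  # → Surface lot behind building B.
```

In the real service the client application passes the metadata filter with the query, and the narrowing happens server-side before ranking.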
```diff
@@ -191,7 +191,7 @@ Custom question answering allows users to collaborate on a project. Users need a
 ## Active learning
 
-[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It’s important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in Language Studio, you can review and accept or reject those suggestions.
+[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It's important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in Language Studio, you can review and accept or reject those suggestions.
```
```diff
@@ -35,7 +35,7 @@ This documentation contains the following article types:
 * **When you want to provide the same answer to a request, question, or command** - when different users submit the same question, the same answer is returned.
 * **When you want to filter static information based on meta-information** - add [metadata](./tutorials/multiple-domains.md) tags to provide additional filtering options relevant to your client application's users and the information. Common metadata information includes [chit-chat](./how-to/chit-chat.md), content type or format, content purpose, and content freshness. <!--TODO: Fix Link-->
 * **When you want to manage a bot conversation that includes static information** - your project takes a user's conversational text or command and answers it. If the answer is part of a pre-determined conversation flow, represented in your project with [multi-turn context](./tutorials/guided-conversations.md), the bot can easily provide this flow.
-* **When you want to use an agent to get an exact answer** - Use the [exact question answering](https://aka.ms/exact-answer-agent-template) agent template answers high-value predefined questions deterministically to ensure consistent and accurate responses or the [intent routing](https://aka.ms/intent-triage-agent-template) agent template, which detects user intent and provides exact answering. Perfect for deterministically intent routing and exact question answering with human control.
+* **When you want to use an agent to get an exact answer** - Use the [exact question answering](https://github.com/azure-ai-foundry/foundry-samples/tree/main/samples/agent-catalog/msft-agent-samples/foundry-agent-service-sdk/customer-service-agent) agent template answers high-value predefined questions deterministically to ensure consistent and accurate responses or the [intent routing](https://github.com/azure-ai-foundry/foundry-samples/tree/main/samples/agent-catalog/msft-agent-samples/foundry-agent-service-sdk/intent-routing-agent) agent template, which detects user intent and provides exact answering. Perfect for deterministically intent routing and exact question answering with human control.
```
```diff
@@ -28,7 +28,7 @@ The labels are *positive*, *negative*, and *neutral*. At the document level, the
 | At least one `negative` sentence and at least one `positive` sentence are in the document. |`mixed`|
 | All sentences in the document are `neutral`. |`neutral`|
 
-Confidence scores range from 1 to 0. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. For each document or each sentence, the predicted scores associated with the labels (positive, negative, and neutral) add up to 1. For more information, see the [Responsible AI transparency note](/azure/ai-foundry/responsible-ai/text-analytics/transparency-note).
+Confidence scores range from 1 to 0. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. For each document or each sentence, the predicted scores associated with the labels (positive, negative, and neutral) add up to 1. For more information, see the [Responsible AI transparency note](/azure/ai-foundry/responsible-ai/language-service/transparency-note).
```
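The hunk above states that per-sentence sentiment confidence scores for the three labels add up to 1, with the predicted label being the highest-scoring one. A quick local check of that invariant on a hypothetical response fragment (field names are illustrative, not the service's JSON schema):

```python
# Hypothetical sentiment-score fragment for one sentence; the real API wraps
# these values in a larger response document.
sentence_scores = {"positive": 0.87, "neutral": 0.10, "negative": 0.03}

label = max(sentence_scores, key=sentence_scores.get)  # highest-confidence label
total = sum(sentence_scores.values())                  # should equal 1

print(label)                    # → positive
print(abs(total - 1.0) < 1e-9)  # → True
```

A client that only needs the winning label can discard the other two scores, but keeping them allows downstream thresholding (for example, treating a 0.51/0.49 split as uncertain).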