articles/ai-studio/concepts/evaluation-metrics-built-in.md (8 additions, 8 deletions)

@@ -238,7 +238,7 @@ For groundedness, we provide two versions:
| When to use it? | Use the groundedness metric when you need to verify that AI-generated responses align with and are validated by the provided context. It's essential for applications where factual correctness and contextual accuracy are key, like information retrieval, question-answering, and content summarization. This metric ensures that the AI-generated answers are well-supported by the context. |
| What does it need as input? | Question, Context, Generated Answer |
- Built-in prompt used by Large Language Model judge to score this metric:
+ Built-in prompt used by the Large Language Model judge to score this metric:
```
You will be presented with a CONTEXT and an ANSWER about that CONTEXT. You need to decide whether the ANSWER is entailed by the CONTEXT by choosing one of the following rating:
@@ -269,7 +269,7 @@ Note the ANSWER is generated by a computer system, it can contain certain symbol
| What does it need as input? | Question, Context, Generated Answer |
- Built-in prompt used by Large Language Model judge to score this metric (For question answering data format):
+ Built-in prompt used by the Large Language Model judge to score this metric (For question answering data format):
```
Relevance measures how well the answer addresses the main aspects of the question, based on the context. Consider whether all and only the important aspects are contained in the answer when evaluating relevance. Given the context and question, score the relevance of the answer between one to five stars using the following rating scale:
@@ -287,7 +287,7 @@ Five stars: the answer has perfect relevance
This rating value should always be an integer between 1 and 5. So the rating produced should be 1 or 2 or 3 or 4 or 5.
```
- Built-in prompt used by Large Language Model judge to score this metric (For conversation data format) (without Ground Truth available):
+ Built-in prompt used by the Large Language Model judge to score this metric (For conversation data format) (without Ground Truth available):
```
You will be provided a question, a conversation history, fetched documents related to the question and a response to the question in the {DOMAIN} domain. Your task is to evaluate the quality of the provided response by following the steps below:
@@ -317,7 +317,7 @@ You will be provided a question, a conversation history, fetched documents relat
- Your final response must include both the reference answer and the evaluation result. The evaluation result should be written in English.
```
- Built-in prompt used by Large Language Model judge to score this metric (For conversation data format) (with Ground Truth available):
+ Built-in prompt used by the Large Language Model judge to score this metric (For conversation data format) (with Ground Truth available):
```
@@ -361,7 +361,7 @@ Labeling standards are as following:
| When to use it? | Use it when assessing the readability and user-friendliness of your model's generated responses in real-world applications. |
| What does it need as input? | Question, Generated Answer |
- Built-in prompt used by Large Language Model judge to score this metric:
+ Built-in prompt used by the Large Language Model judge to score this metric:
```
Coherence of an answer is measured by how well all the sentences fit together and sound naturally as a whole. Consider the overall quality of the answer when evaluating coherence. Given the question and answer, score the coherence of answer between one to five stars using the following rating scale:
@@ -389,7 +389,7 @@ This rating value should always be an integer between 1 and 5. So the rating pro
| When to use it? | Use it when evaluating the linguistic correctness of the AI-generated text, ensuring that it adheres to proper grammatical rules, syntactic structures, and vocabulary usage in the generated responses. |
| What does it need as input? | Question, Generated Answer |
- Built-in prompt used by Large Language Model judge to score this metric:
+ Built-in prompt used by the Large Language Model judge to score this metric:
```
Fluency measures the quality of individual sentences in the answer, and whether they are well-written and grammatically correct. Consider the quality of individual sentences when evaluating fluency. Given the question and answer, score the fluency of the answer between one to five stars using the following rating scale:
@@ -417,7 +417,7 @@ This rating value should always be an integer between 1 and 5. So the rating pro
| When to use it? | Use the retrieval score when you want to guarantee that the documents retrieved are highly relevant for answering your users' questions. This score helps ensure the quality and appropriateness of the retrieved content. |
| What does it need as input? | Question, Context, Generated Answer |
- Built-in prompt used by Large Language Model judge to score this metric:
+ Built-in prompt used by the Large Language Model judge to score this metric:
```
A chat history between user and bot is shown below
@@ -473,7 +473,7 @@ Think through step by step:
- Built-in prompt used by Large Language Model judge to score this metric:
+ Built-in prompt used by the Large Language Model judge to score this metric:
```
GPT-Similarity, as a metric, measures the similarity between the predicted answer and the correct answer. If the information and content in the predicted answer is similar or equivalent to the correct answer, then the value of the Equivalence metric should be high, else it should be low. Given the question, correct answer, and predicted answer, determine the value of Equivalence metric using the following rating scale:
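
All of these built-in judge prompts share one contract: the model must return a single integer rating from 1 to 5. As a minimal sketch of exercising such a prompt against an Azure OpenAI chat-completions deployment (the endpoint, deployment name, API version, and the abbreviated prompt/context strings below are placeholders, not values from this diff):

```bash
# Sketch only: send a built-in judge prompt to a chat model and read back the
# 1-5 rating. Endpoint, deployment, api-version, and message text are placeholders.
ENDPOINT="https://<your-resource>.openai.azure.com"
DEPLOYMENT="<judge-model-deployment>"

curl -s "$ENDPOINT/openai/deployments/$DEPLOYMENT/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "temperature": 0,
    "messages": [
      {"role": "system", "content": "<built-in judge prompt, e.g. the groundedness prompt above>"},
      {"role": "user", "content": "CONTEXT: <context>\nANSWER: <generated answer>"}
    ]
  }'
# Expect choices[0].message.content to contain a single integer between 1 and 5.
```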

articles/app-service/app-service-ip-restrictions.md (4 additions, 1 deletion)

@@ -123,7 +123,10 @@ With service endpoints, you can configure your app with application gateways or
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-service-tag-add.png?v2" alt-text="Screenshot of the 'Add Restriction' pane with the Service Tag type selected.":::
- All available service tags are supported in access restriction rules. Each service tag represents a list of IP ranges from Azure services. A list of these services and links to the specific ranges can be found in the [service tag documentation][servicetags]. Use Azure Resource Manager templates or scripting to configure more advanced rules like regional scoped rules.
+ All publicly available service tags are supported in access restriction rules. Each service tag represents a list of IP ranges from Azure services. A list of these services and links to the specific ranges can be found in the [service tag documentation][servicetags]. Use Azure Resource Manager templates or scripting to configure more advanced rules like regional scoped rules.
+
+ > [!NOTE]
+ > When creating service tag-based rules through the Azure portal or Azure CLI, you need read access at the subscription level to get the full list of service tags for selection/validation. In addition, the `Microsoft.Network` resource provider needs to be registered on the subscription.
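
For the scripted route the added paragraph mentions, a service-tag rule can also be created with the Azure CLI. A sketch follows; the resource names are placeholders, and the tag and priority are examples only:

```bash
# Sketch: allow traffic matching a service tag. Resource names, the tag, and
# the priority are placeholders; adjust them for your environment.
az webapp config access-restriction add \
  --resource-group <resource-group> \
  --name <app-name> \
  --rule-name "Allow Front Door" \
  --action Allow \
  --service-tag AzureFrontDoor.Backend \
  --priority 300
```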

articles/backup/azure-file-share-support-matrix.md (17 additions, 1 deletion)

@@ -2,7 +2,7 @@
title: Support Matrix for Azure file share backup by using Azure Backup
description: Provides a summary of support settings and limitations when backing up Azure file shares.
ms.topic: conceptual
- ms.date: 03/29/2024
+ ms.date: 08/16/2024
ms.custom: references_regions, engagement-fy24
ms.service: azure-backup
author: AbhishekMallick-MS
@@ -188,6 +188,22 @@ Vaulted backup for Azure Files (preview) is available in West Central US, Southe
---
+ ## Daylight savings
+
+ Azure Backup doesn't support automatic clock adjustment for daylight saving time for Azure VM backups. It doesn't shift the hour of the backup forward or backwards. To ensure the backup runs at the desired time, modify the backup policies manually as required.
+
+ ## Support for customer-managed failover
+
+ This section describes how your backups and restores are affected after customer-managed failovers.
+
+ The following table lists the behavior of backups due to customer-initiated failovers:
+
+ | Failover type | Backups | Restore | Enabling protection (re-protection) of failed over account in secondary region |
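
Circling back to the daylight savings note added above: because Azure Backup won't shift schedules itself, the policy has to be edited by hand. One hedged way to do that with the Azure CLI is sketched below; the resource names are placeholders, and the exact JSON field for the schedule depends on the policy type:

```bash
# Sketch: manually shift a policy's scheduled run time after a DST change.
# Vault, group, and policy names are placeholders; verify the schedule field
# (for example, schedulePolicy.scheduleRunTimes) for your policy type.
az backup policy show \
  --resource-group <resource-group> \
  --vault-name <vault-name> \
  --name <policy-name> > policy.json

# Edit the scheduled run time in policy.json (e.g., 02:00 UTC -> 01:00 UTC), then:
az backup policy set \
  --resource-group <resource-group> \
  --vault-name <vault-name> \
  --policy @policy.json
```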

articles/iot-operations/discover-manage-assets/howto-manage-assets-remotely.md (7 additions, 7 deletions)

@@ -204,8 +204,8 @@ Now you can define the tags associated with the asset. To add OPC UA tags:
| Node ID | Tag name | Observability mode |
| ------- | -------- | ------------------ |
- | ns=3;s=FastUInt10 | temperature | none |
- | ns=3;s=FastUInt100 | Tag 10 | none |
+ | ns=3;s=FastUInt10 | temperature | None |
+ | ns=3;s=FastUInt100 | Tag 10 | None |
1. Select **Manage default settings** to configure default telemetry settings for the asset. These settings apply to all the OPC UA tags that belong to the asset. You can override these settings for each tag that you add. Default telemetry settings include:
@@ -219,11 +219,11 @@ You can import up to 1000 OPC UA tags at a time from a CSV file:
1. Create a CSV file that looks like the following example:
| ns=3;s=FastUInt1000 | Tag 1000 | 5| None| 1000 |
+ | ns=3;s=FastUInt1001 | Tag 1001 | 5| None| 1000 |
+ | ns=3;s=FastUInt1002 | Tag 1002 | 10| None| 5000 |
1. Select **Add tag or CSV > Import CSV (.csv) file**. Select the CSV file you created and select **Open**. The tags defined in the CSV file are imported:
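
To recreate an import file like the one the example rows above come from, the sketch below can serve as a starting point. The CSV header names are assumptions inferred from the table columns (node ID, tag name, queue size, observability mode, sampling interval) and should be checked against a template exported from the portal before use:

```bash
# Sketch: generate a tag-import CSV matching the example rows above. The
# header names are assumed, not taken from this diff; verify them against a
# CSV exported from the operations experience before importing.
cat > opcua-tags.csv <<'EOF'
NodeID,TagName,QueueSize,ObservabilityMode,SamplingIntervalMilliseconds
ns=3;s=FastUInt1000,Tag 1000,5,None,1000
ns=3;s=FastUInt1001,Tag 1001,5,None,1000
ns=3;s=FastUInt1002,Tag 1002,10,None,5000
EOF
```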
0 commit comments