
Commit 5b7653d

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into workbooks-arm-actions
2 parents c7fd518 + f9bd03b commit 5b7653d

334 files changed (+6607 −2429 lines)


.openpublishing.redirection.json

Lines changed: 11 additions & 1 deletion

@@ -3849,6 +3849,11 @@
       "redirect_url": "/azure/reliability/reliability-guidance-overview",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/aks/cluster-configuration.md",
+      "redirect_url": "/azure/aks/concepts-clusters-workloads.md",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/orbital/overview-analytics.md",
       "redirect_url": "/azure/orbital/overview",
@@ -3984,6 +3989,11 @@
       "source_path_from_root":"/articles/container-instances/availability-zones.md",
       "redirect_url":"/azure/reliability/reliability-containers",
       "redirect_document_id":false
-    }
+    },
+    {
+      "source_path_from_root":"/articles/service-connector/quickstart-cli-aks-connection.md",
+      "redirect_url":"/azure/service-connector/quickstart-portal-aks-connection",
+      "redirect_document_id":false
+    }
   ]
 }

.openpublishing.redirection.sentinel.json

Lines changed: 5 additions & 0 deletions

@@ -1089,6 +1089,11 @@
       "source_path_from_root": "/articles/sentinel/notebooks-with-synapse-hunt.md",
       "redirect_url": "/azure/sentinel/notebooks-hunt",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/sentinel/data-connectors/dns.md",
+      "redirect_url": "/azure/sentinel/data-connectors/windows-dns-events-via-ama",
+      "redirect_document_id": false
     }
   ]
 }

articles/ai-services/document-intelligence/concept-accuracy-confidence.md

Lines changed: 7 additions & 6 deletions

@@ -8,7 +8,7 @@ ms.service: azure-ai-document-intelligence
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 02/29/2024
+ms.date: 04/16/2023
 ms.author: lajanuar
 ---
 
@@ -53,10 +53,11 @@ Field confidence indicates an estimated probability between 0 and 1 that the pre
 ## Interpret accuracy and confidence scores for custom models
 
 When interpreting the confidence score from a custom model, you should consider all the confidence scores returned from the model. Let's start with a list of all the confidence scores.
-1. **Document type confidence score**: The document type confidence is an indicator of closely the analyzed document resembleds documents in the training dataset. When the document type confidence is low, this is indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is re-trained, it should be better equipped to handl that class of variations.
-2. **Field level confidence**: Each labled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating the confidence you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the OCR results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
-3. **Word confidence score** Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words, each word has an associated span and confidence. Spans from the custom field extracted values will match the spans of the extracted words.
-4. **Selection mark confidence score**: The pages array also contains an array of selection marks, each selection mark has a confidence score representing the confidence of the seletion mark and selection state detection. When a labeled field is a selection mark, the custom field selection confidence combined with the selection mark confidence is an accurate representation of the overall confidence that the field was extracted correctly.
+
+1. **Document type confidence score**: The document type confidence is an indicator of how closely the analyzed document resembles documents in the training dataset. When the document type confidence is low, it's indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is retrained, it should be better equipped to handle that class of variations.
+2. **Field level confidence**: Each labeled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating confidence scores, you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the `OCR` results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
+3. **Word confidence score** Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words and each word has an associated span and confidence score. Spans from the custom field extracted values match the spans of the extracted words.
+4. **Selection mark confidence score**: The pages array also contains an array of selection marks. Each selection mark has a confidence score representing the confidence of the selection mark and selection state detection. When a labeled field has a selection mark, the custom field selection combined with the selection mark confidence is an accurate representation of overall confidence accuracy.
 
 The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.
 
@@ -69,7 +70,7 @@ The following table demonstrates how to interpret both the accuracy and confiden
 
 ## Table, row, and cell confidence
 
-With the addition of table, row and cell confidence with the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row and cell scores:
+With the addition of table, row and cell confidence with the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row, and cell scores:
 
 **Q:** Is it possible to see a high confidence score for cells, but a low confidence score for the row?<br>
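For context on the composite field confidence described in the updated list, here's a minimal JavaScript sketch of one way to combine a labeled field's confidence with the OCR word confidences that share its spans. The result shapes (`documents[].fields`, `pages[].words` with `span` and `confidence`) follow the analyze result; the "multiply by the weakest word" rule is an illustrative assumption, not the documented algorithm.

```javascript
// Illustrative only: combine a labeled field's confidence with the OCR word
// confidences that overlap its spans. The combination rule is an assumption.
function compositeFieldConfidence(field, pages) {
  const words = pages.flatMap((page) => page.words ?? []);
  // A word counts as part of the field when its span lies inside a field span.
  const overlapping = words.filter((word) =>
    (field.spans ?? []).some(
      (span) =>
        word.span.offset >= span.offset &&
        word.span.offset + word.span.length <= span.offset + span.length
    )
  );
  // Use the weakest word transcription as the OCR floor for the field.
  const ocrFloor = overlapping.length
    ? Math.min(...overlapping.map((word) => word.confidence))
    : 1; // no overlapping words (for example, a selection mark field)
  return field.confidence * ocrFloor;
}
```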

articles/ai-services/document-intelligence/how-to-guides/includes/v4-0/javascript-sdk.md

Lines changed: 15 additions & 8 deletions

@@ -5,7 +5,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-document-intelligence
 ms.topic: include
-ms.date: 03/28/2024
+ms.date: 04/16/2024
 ms.author: lajanuar
 ms.custom:
   - devx-track-csharp
@@ -106,7 +106,8 @@ Open the `index.js` file in Visual Studio Code or your favorite IDE and select o
 ## Use the Read model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
@@ -202,7 +203,8 @@ Visit the Azure samples repository on GitHub and view the [`read` model output](
 ## Use the Layout model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
@@ -272,7 +274,8 @@ Visit the Azure samples repository on GitHub and view the [layout model output](
 ## Use the General document model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
@@ -318,7 +321,8 @@ Visit the Azure samples repository on GitHub and view the [general document mode
 ## Use the W-2 tax model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
@@ -397,7 +401,8 @@ Visit the Azure samples repository on GitHub and view the [W-2 tax model output]
 ## Use the Invoice model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
@@ -459,7 +464,8 @@ Visit the Azure samples repository on GitHub and view the [invoice model output]
 ## Use the Receipt model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
@@ -518,7 +524,8 @@ Visit the Azure samples repository on GitHub and view the [receipt model output]
 ## Use the ID document model
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligence } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 //use your `key` and `endpoint` environment variables
 const key = process.env['DI_KEY'];
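The repeated change above moves `AzureKeyCredential` out of the document intelligence package and into `@azure/core-auth`. A minimal sketch of the resulting setup, assuming the package's default export is the client factory (the commit's own snippets import the `DocumentIntelligenceClient` type; check the package README for the exact creation call):

```javascript
// Assumed factory shape; not verbatim from this commit.
const DocumentIntelligence = require("@azure-rest/ai-document-intelligence").default;
const { AzureKeyCredential } = require("@azure/core-auth");

const key = process.env["DI_KEY"];
const endpoint = process.env["DI_ENDPOINT"];

// `AzureKeyCredential` now comes from @azure/core-auth rather than the
// document intelligence package itself.
const client = DocumentIntelligence(endpoint, new AzureKeyCredential(key));
```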

articles/ai-services/document-intelligence/quickstarts/includes/javascript-sdk.md

Lines changed: 6 additions & 4 deletions

@@ -73,7 +73,7 @@ In this quickstart, use the following features to analyze and extract data and v
 4. Install the `ai-document-intelligence` client library and `azure/identity` npm packages:
 
    ```console
-   npm i @azure-rest/ai-document-intelligence
+   npm i @azure-rest/ai-document-intelligence @azure/identity
 
    ```
 
@@ -146,10 +146,11 @@ Extract text, selection marks, text styles, table structures, and bounding regio
 :::moniker range="doc-intel-4.0.0"
 
 ```javascript
-const { AzureKeyCredential, DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
-const key = "<your-key>";
+const key = "<your-key";
 const endpoint = "<your-endpoint>";
 
 // sample document
@@ -311,7 +312,8 @@ In this example, we analyze an invoice using the **prebuilt-invoice** model.
 
 ```javascript
 
-const { AzureKeyCredential, DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { DocumentIntelligenceClient } = require("@azure-rest/ai-document-intelligence");
+const { AzureKeyCredential } = require("@azure/core-auth");
 
 // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
 const key = "<your-key>";

articles/ai-services/immersive-reader/overview.md

Lines changed: 4 additions & 0 deletions

@@ -69,6 +69,10 @@ With Immersive Reader, you can break words into syllables to improve readability
 
 Immersive Reader is a standalone web application. When it's invoked, the Immersive Reader client library displays on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
 
+## Data privacy for Immersive Reader
+
+Immersive Reader doesn't store any customer data.
+
 ## Next step
 
 The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
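To make the `iframe` flow described in the overview concrete, here's a short sketch following the `@microsoft/immersive-reader-sdk` launch pattern. Acquiring the Microsoft Entra token and the custom subdomain is elided, and the sample content and option values are placeholders:

```javascript
const { launchAsync } = require("@microsoft/immersive-reader-sdk");

async function openImmersiveReader(token, subdomain) {
  const content = {
    title: "Geography",
    chunks: [
      { content: "The study of Earth's landscapes, peoples, and places.", mimeType: "text/plain" },
    ],
  };
  // The SDK creates and styles the iframe overlay and handles all
  // communication with the Immersive Reader backend service.
  await launchAsync(token, subdomain, content, { uiLang: "en" });
}
```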

articles/ai-services/openai/how-to/content-filters.md

Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@ description: Learn how to use content filters (preview) with Azure OpenAI Servic
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 03/29/2024
+ms.date: 04/16/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
@@ -15,7 +15,7 @@ recommendations: false
 # How to configure content filters with Azure OpenAI Service
 
 > [!NOTE]
-> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
+> All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
 
 The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
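As a reference for the annotations this page configures, here's a sketch of reading per-category filter results off a chat completion response. The annotation fields (`content_filter_results` with `hate`, `sexual`, `violence`, and `self_harm`, each carrying `filtered` and `severity`) follow the documented response shape; the endpoint, key, deployment name, and API version are placeholders:

```javascript
async function logContentFilterResults(endpoint, apiKey, deployment) {
  const response = await fetch(
    `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=2024-02-01`,
    {
      method: "POST",
      headers: { "api-key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({ messages: [{ role: "user", content: "Hello" }] }),
    }
  );
  const data = await response.json();
  for (const choice of data.choices ?? []) {
    // Each category reports whether it was filtered and at what severity.
    console.log(choice.content_filter_results);
  }
}
```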

articles/ai-services/openai/how-to/latency.md

Lines changed: 2 additions & 2 deletions

@@ -59,7 +59,7 @@ Latency varies based on what model you're using. For an identical request, expec
 When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens that are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. For most models, generating the response is the slowest step in the process.
 
 At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved by the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
-- Set the `max_token` parameter on each call as small as possible.
+- Set the `max_tokens` parameter on each call as small as possible.
 - Include stop sequences to prevent generating extra content.
 - Generate fewer responses: The best_of & n parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1.
 
@@ -136,4 +136,4 @@ Time from the first token to the last token, divided by the number of generated
 
 * **Streaming**: Enabling streaming can be useful in managing user expectations in certain situations by allowing the user to see the model response as it is being generated rather than having to wait until the last token is ready.
 
-* **Content Filtering** improves safety, but it also impacts latency. Evaluate if any of your workloads would benefit from [modified content filtering policies](./content-filters.md).
\ No newline at end of file
+* **Content Filtering** improves safety, but it also impacts latency. Evaluate if any of your workloads would benefit from [modified content filtering policies](./content-filters.md).
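The token-reduction tips in the first hunk map directly onto request options. A minimal sketch using the `@azure/openai` client; the deployment name and environment variables are placeholders:

```javascript
const { OpenAIClient, AzureKeyCredential } = require("@azure/openai");

async function fastCompletion() {
  const client = new OpenAIClient(
    process.env["AZURE_OPENAI_ENDPOINT"],
    new AzureKeyCredential(process.env["AZURE_OPENAI_KEY"])
  );
  const result = await client.getChatCompletions(
    "my-gpt-deployment",
    [{ role: "user", content: "Summarize this in one sentence: ..." }],
    {
      maxTokens: 60, // smallest generation budget that fits the task
      stop: ["\n\n"], // stop sequence cuts generation at a natural boundary
      n: 1, // multiple outputs multiply generation time
    }
  );
  console.log(result.choices[0].message.content);
}
```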

articles/ai-services/openai/how-to/monitoring.md

Lines changed: 4 additions & 2 deletions

@@ -6,7 +6,7 @@ ms.author: mbullwin
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom: subject-monitoring
-ms.date: 03/29/2024
+ms.date: 04/16/2024
 ---
 
 # Monitoring Azure OpenAI Service
@@ -60,7 +60,9 @@ The following table summarizes the current subset of metrics available in Azure
 | `Processed FineTuned Training Hours` | Usage |Sum| Number of training hours processed on an Azure OpenAI fine-tuned model. | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
 | `Processed Inference Tokens` | Usage | Sum| Number of inference tokens processed by an Azure OpenAI model. Calculated as prompt tokens (input) + generated tokens. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
 | `Processed Prompt Tokens` | Usage | Sum | Total number of prompt tokens (input) processed on an Azure OpenAI model. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
-| `Provision-managed Utilization V2` | Usage | Average | Provision-managed utilization is the utilization percentage for a given provisioned-managed deployment. Calculated as (PTUs consumed/PTUs deployed)*100. When utilization is at or above 100%, calls are throttled and return a 429 error code. | `ModelDeploymentName`,`ModelName`,`ModelVersion`, `Region`, `StreamType`|
+| `Provision-managed Utilization V2` | HTTP | Average | Provision-managed utilization is the utilization percentage for a given provisioned-managed deployment. Calculated as (PTUs consumed/PTUs deployed)*100. When utilization is at or above 100%, calls are throttled and return a 429 error code. | `ModelDeploymentName`,`ModelName`,`ModelVersion`, `Region`, `StreamType`|
+|`Prompt Token Cache Match Rate` | HTTP | Average | **Provisioned-managed only**. The prompt token cache hit ratio expressed as a percentage. | `ModelDeploymentName`, `ModelVersion`, `ModelName`, `Region`|
+|`Time to Response` | HTTP | Average | Recommended latency (responsiveness) measure for streaming requests. **Applies to PTU and PTU-managed deployments**. This metric does not apply to standard pay-go deployments. Calculated as time taken for the first response to appear after a user sends a prompt, as measured by the API gateway. This number increases as the prompt size increases and/or cache hit size reduces. Note: this metric is an approximation as measured latency is heavily dependent on multiple factors, including concurrent calls and overall workload pattern. In addition, it does not account for any client-side latency that may exist between your client and the API endpoint. Please refer to your own logging for optimal latency tracking.| `ModelDeploymentName`, `ModelName`, and `ModelVersion` |
 
 ## Configure diagnostic settings
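To pull these platform metrics programmatically, here's a hedged sketch using `@azure/monitor-query`; the Azure OpenAI resource ID is a placeholder, and the exact metric name string to pass (for example, for `Time to Response`) should be confirmed in the portal's metrics blade:

```javascript
const { MetricsQueryClient } = require("@azure/monitor-query");
const { DefaultAzureCredential } = require("@azure/identity");

async function queryTimeToResponse(resourceId) {
  const client = new MetricsQueryClient(new DefaultAzureCredential());
  const result = await client.queryResource(
    resourceId,
    ["TimeToResponse"], // assumed metric name; confirm in the portal
    { granularity: "PT1H", aggregations: ["Average"] }
  );
  for (const metric of result.metrics) {
    // Each time series holds hourly average values for the metric.
    console.log(metric.name, metric.timeseries?.[0]?.data);
  }
}
```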
