
Commit 609329c
Merge branch 'main' into release-machine-config-toc-reorganization
2 parents 7bda739 + fb912a9

695 files changed: +3024 -5333 lines changed


.openpublishing.redirection.json

Lines changed: 78 additions & 1 deletion
@@ -5707,7 +5707,77 @@
   },
   {
     "source_path_from_root": "/articles/industrial-iot/overview-what-is-industrial-iot-platform.md",
-    "redirect_url": "/azure/industrial-iot/overview-what-is-industrial-iot",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/index.yml",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/industrial-iot-platform-versions.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/overview-what-is-industrial-iot.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/overview-what-is-opc-publisher.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/reference-command-line-arguments.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/reference-opc-publisher-telemetry-format.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-configure-industrial-iot-components.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-data-explorer-import.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-industrial-iot-azure-data-explorer.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-publisher-performance-memory-tuning-opc-publisher.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-visualize-data-time-series-insights.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
     "redirect_document_id": false
   },
   {
@@ -10798,6 +10868,13 @@
     "source_path_from_root": "/articles/azure-health-insights/response-info.md",
     "redirect_url": "/azure/azure-health-insights/overview",
     "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/networking/disaster-recovery-dns-traffic-manager.md",
+    "redirect_url": "/azure/reliability/reliability-traffic-manager",
+    "redirect_document_id": false
   }
+
+
 ]
}
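Every entry added in the hunks above follows the same three-key shape. As a minimal sketch of how such entries could be sanity-checked before committing, here is a hypothetical validator (the helper and its rules are assumptions, not part of the docs tooling; only the key names come from the diff):

```python
# Hypothetical validator for .openpublishing.redirection.json entries.
# Key names are taken from the diff above; the checks themselves are assumptions.
REQUIRED_KEYS = {"source_path_from_root", "redirect_url"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems found in one redirection entry."""
    problems = []
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    src = entry.get("source_path_from_root", "")
    if src and not src.startswith("/"):
        problems.append("source_path_from_root should be repo-rooted (start with '/')")
    url = entry.get("redirect_url", "")
    if url and not (url.startswith("/") or url.startswith("https://")):
        problems.append("redirect_url should be site-relative or absolute https")
    return problems

entry = {
    "source_path_from_root": "/articles/industrial-iot/index.yml",
    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
    "redirect_document_id": False,
}
print(validate_entry(entry))  # → []
```

Running it over every object in the file would catch the most common mistakes (a missing key, a path not rooted at `/`) before the redirect ships.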

articles/active-directory-b2c/custom-policies-series-validate-user-input.md

Lines changed: 1 addition & 1 deletion
@@ -379,7 +379,7 @@ Follow the steps in [Upload custom policy file](custom-policies-series-hello-wor
 
 ## Step 7 - Validate user input by using validation technical profiles
 
-The validation techniques we've used in step 1, step 2 and step 3 aren't applicable for all scenarios. If your business rules are complex to be defined at claim declaration level, you can configure a [Validation Technical](validation-technical-profile.md), and then call it from a [Self-Asserted Technical Profile](self-asserted-technical-profile.md).
+The validation techniques we've used in step 1, step 2 and step 3 aren't applicable for all scenarios. If your business rules are too complex to be defined at claim declaration level, you can configure a [Validation Technical](validation-technical-profile.md), and then call it from a [Self-Asserted Technical Profile](self-asserted-technical-profile.md).
 
 > [!NOTE]
 > Only self-asserted technical profiles can use validation technical profiles. Learn more about [validation technical profile](validation-technical-profile.md)

articles/ai-services/document-intelligence/containers/install-run.md

Lines changed: 8 additions & 4 deletions
@@ -18,25 +18,29 @@ ms.author: lajanuar
 <!-- markdownlint-disable MD024 -->
 <!-- markdownlint-disable MD051 -->
 
-:::moniker range="doc-intel-2.1.0 || doc-intel-3.1.0||doc-intel-4.0.0"
+:::moniker range="doc-intel-2.1.0 || doc-intel-4.0.0"
 
-Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` only:
+Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only:
 
 * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
 * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)
+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
 
 ✔️ See [**Install and run Document Intelligence v3.0 containers**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation.
 
 :::moniker-end
 
-:::moniker range="doc-intel-3.0.0"
+:::moniker range="doc-intel-3.0.0 || doc-intel-3.1.0"
 
-**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)**
+**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** ![checkmark](../media/yes-icon.png) **v3.1 (GA)**
 
 Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that ../includes the relationships in the original file.
 
 In this article you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment. Containers are great for specific security and data governance requirements.
 
+* **Read**, and **Layout** models are supported by Document Intelligence v3.1 containers.
+
 * **Read**, **Layout**, **General Document**, **ID Document**, **Receipt**, **Invoice**, **Business Card**, and **Custom** models are supported by Document Intelligence v3.0 containers.
 
 * **Business Card** model is currently only supported in the [v2.1 containers](install-run.md?view=doc-intel-2.1.0&preserve-view=true).

articles/ai-services/document-intelligence/faq.yml

Lines changed: 8 additions & 0 deletions
@@ -520,6 +520,14 @@ sections:
 
           Yes. The labeling experience from Studio is open sourced in the [Toolkit repo](https://github.com/microsoft/Form-Recognizer-Toolkit)
 
+      - question: |
+          Why am I receiving 'Form Recognizer Not Found' error when opening my custom project?
+        answer: |
+          Your Document Intelligence resource bound to this custom project may have been deleted or moved to another resource group. There are two ways to resolve this issue:
+
+          - Re-create the Document Intelligence resource under the same subscription/resource group with the same name.
+
+          - Re-create a custom project with the migrated Document Intelligence resource and specify the very same storage account.
 
 - name: Containers
   questions:

articles/ai-services/openai/concepts/models.md

Lines changed: 2 additions & 2 deletions
@@ -88,7 +88,7 @@ See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
 > Version `0314` of `gpt-4` and `gpt-4-32k` will be retired no earlier than July 5, 2024. Version `0613` of `gpt-4` and `gpt-4-32k` will be retired no earlier than September 30, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
 
-GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 versio 0125-preview completes tasks such as code generation more completely compared to gpt-4-1106-preview. Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to the gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages.
+GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 version 0125-preview completes tasks such as code generation more completely compared to gpt-4-1106-preview. Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to the gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages.
 
 > [!IMPORTANT]
 >
@@ -144,7 +144,7 @@ GPT-3.5 Turbo version 0301 is the first version of the model released. Version
 See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades, and [working with models](../how-to/working-with-models.md) to learn how to view and configure the model version settings of your GPT-3.5 Turbo deployments.
 
 > [!NOTE]
-> Version `0613` of `gpt-35-turbo` and `gpt-35-turbo-16k` will be retired on June 13, 2024. Version `0301` of `gpt-35-turbo` will be retired on July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
+> Version `0613` of `gpt-35-turbo` and `gpt-35-turbo-16k` will be retired no earlier than June 13, 2024. Version `0301` of `gpt-35-turbo` will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
 
 ### GPT-3.5-Turbo model availability

articles/ai-services/openai/how-to/latency.md

Lines changed: 6 additions & 6 deletions
@@ -5,7 +5,7 @@ description: Learn about performance and latency with Azure OpenAI
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 11/21/2023
+ms.date: 02/07/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
@@ -58,17 +58,17 @@ Latency varies based on what model you're using. For an identical request, expec
 
 When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens that are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. For most models, generating the response is the slowest step in the process.
 
-At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
-o Set the `max_token` parameter on each call as small as possible.
-o Include stop sequences to prevent generating extra content.
-o Generate fewer responses: The best_of & n parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1.
+At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved by the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
+- Set the `max_token` parameter on each call as small as possible.
+- Include stop sequences to prevent generating extra content.
+- Generate fewer responses: The best_of & n parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1.
 
 In summary, reducing the number of tokens generated per request reduces the latency of each request.
 
 ### Streaming
 Setting `stream: true` in a request makes the service return tokens as soon as they're available, instead of waiting for the full sequence of tokens to be generated. It doesn't change the time to get all the tokens, but it reduces the time for first response. This approach provides a better user experience since end-users can read the response as it is generated.
 
-Streaming is also valuable with large calls that take a long time to process. Many clients and intermediary layers have timeouts on individual calls. Long generation calls might be canceled due to client-side time outs. By streaming the data back, you can ensure incremental data is received.
+Streaming is also valuable with large calls that take a long time to process. Many clients and intermediary layers have timeouts on individual calls. Long generation calls might be canceled due to client-side time outs. By streaming the data back, you can ensure incremental data is received.
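The token-reduction bullets in the latency hunk above translate directly into request parameters. As a sketch (the helper function is hypothetical; the parameter names follow the OpenAI completion API as described in the article, so verify them against the current SDK before use):

```python
# Hypothetical helper illustrating the latency tips from the diff above:
# keep max_tokens small, set stop sequences, avoid n/best_of > 1, and stream.
def build_low_latency_params(prompt: str, max_tokens: int = 50) -> dict:
    """Assemble completion-request parameters that minimize generation time."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # reserve as little generation quota as possible
        "stop": ["\n\n"],          # stop sequence prevents generating extra content
        "n": 1,                    # one response only; n > 1 multiplies latency
        "stream": True,            # return tokens as soon as they're available
    }

params = build_low_latency_params("Summarize this in one line.")
print(params["max_tokens"], params["stream"])  # → 50 True
```

The resulting dictionary would be passed to the chat completions call of whatever client library you use; the latency effect comes from the parameter values, not from the helper itself.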

articles/ai-services/openai/whats-new.md

Lines changed: 6 additions & 0 deletions
@@ -18,6 +18,12 @@ recommendations: false
 
 ## February 2024
 
+### GPT-4-0125-preview model available
+
+The `gpt-4` model version `0125-preview` is now available on Azure OpenAI Service in the East US, North Central US, and South Central US regions. Customers with deployments of `gpt-4` version `1106-preview` will be automatically upgraded to `0125-preview` in the coming weeks.
+
+For information on model regional availability and upgrades refer to the [models page](./concepts/models.md).
+
 ### Assistants API public preview
 
 Azure OpenAI now supports the API that powers OpenAI's GPTs. Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and advanced tools like code interpreter, and custom functions. To learn more, see:

articles/ai-services/security-controls-policy.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: Azure Policy Regulatory Compliance controls for Azure AI services
 description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 01/22/2024
+ms.date: 02/06/2024
 ms.topic: sample
 author: PatrickFarley
 ms.author: pafarley

articles/ai-services/speech-service/includes/how-to/professional-voice/create-consent/rest.md

Lines changed: 4 additions & 4 deletions
@@ -11,7 +11,7 @@
 
 With the professional voice feature, it's required that every voice be created with explicit consent from the user. A recorded statement from the user is required acknowledging that the customer (Azure AI Speech resource owner) will create and use their voice.
 
-To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL (`Consents_Create`) or upload the audio file (`Consents_Post`). In this article, you add consent from a URL.
+To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL ([Consents_Create](/rest/api/speechapi/consents/create)) or upload the audio file ([Consents_Post](/rest/api/speechapi/consents/post)). In this article, you add consent from a URL.
 
 ## Consent statement
 
@@ -25,15 +25,15 @@ You can get the consent statement text for each locale from the text to speech G
 
 ## Add consent from a URL
 
-To add consent to a professional voice project from the URL of an audio file, use the `Consents_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To add consent to a professional voice project from the URL of an audio file, use the [Consents_Create](/rest/api/speechapi/consents/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `voiceTalentName` property. The voice talent name can't be changed later.
 - Set the required `companyName` property. The company name can't be changed later.
 - Set the required `audioUrl` property. The URL of the voice talent consent audio file. Use a URI with the [shared access signatures (SAS)](/azure/storage/common/storage-sas-overview) token.
 - Set the required `locale` property. This should be the locale of the consent. The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
 
-Make an HTTP PUT request using the URI as shown in the following `Consents_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Consents_Create](/rest/api/speechapi/consents/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaConsentId` with a consent ID of your choice. The case sensitive ID will be used in the consent's URI and can't be changed later.
@@ -65,7 +65,7 @@ You should receive a response body in the following format:
 }
 ```
 
-The response header contains the `Operation-Location` property. Use this URI to get details about the `Consents_Create` operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [Consents_Create](/rest/api/speechapi/consents/create) operation. Here's an example of the response header:
 
 ```HTTP 201
 Operation-Location: https://eastus.api.cognitive.microsoft.com/customvoice/operations/070f7986-ef17-41d0-ba2b-907f0f28e314?api-version=2023-12-01-preview
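The Consents_Create PUT described in this hunk can be sketched as follows. The endpoint path and `api-version` are assumptions inferred from the diff's `Operation-Location` example and hunk context, the property names come from the instructions above, and all values are the same placeholders the article uses; check the custom voice REST reference before relying on any of this:

```python
# Sketch of the Consents_Create request described above. Endpoint shape and
# api-version are assumptions; placeholders (YourResourceKey, YourResourceRegion,
# JessicaConsentId) come from the article and must be replaced with real values.
region = "YourResourceRegion"
consent_id = "JessicaConsentId"  # case-sensitive; used in the consent's URI, can't be changed later

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"consents/{consent_id}?api-version=2023-12-01-preview"
)
headers = {
    "Ocp-Apim-Subscription-Key": "YourResourceKey",
    "Content-Type": "application/json",
}
body = {
    "projectId": "ProjectId",            # hypothetical value; see "create a project"
    "voiceTalentName": "Jessica Smith",  # can't be changed later
    "companyName": "Contoso",            # can't be changed later
    "audioUrl": "https://contoso.blob.core.windows.net/public/jessica-consent.wav?SAS-token",
    "locale": "en-US",                   # locale of the consent recording
}

# To actually send the request:
# import requests
# response = requests.put(url, headers=headers, json=body)
print(url)
```

On success the service returns `201` with the `Operation-Location` header shown above, which you poll for the operation's status.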
