Commit bdf7628

Merge pull request #265611 from MicrosoftDocs/main

2/7/2024 AM Publish

2 parents 2e8e062 + fb912a9
File tree: 212 files changed, +894 -2243 lines changed


.openpublishing.redirection.json

Lines changed: 78 additions & 1 deletion
```diff
@@ -5707,7 +5707,77 @@
   },
   {
     "source_path_from_root": "/articles/industrial-iot/overview-what-is-industrial-iot-platform.md",
-    "redirect_url": "/azure/industrial-iot/overview-what-is-industrial-iot",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/index.yml",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/industrial-iot-platform-versions.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/overview-what-is-industrial-iot.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/overview-what-is-opc-publisher.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/reference-command-line-arguments.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/reference-opc-publisher-telemetry-format.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-configure-industrial-iot-components.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-data-explorer-import.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-industrial-iot-azure-data-explorer.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-publisher-performance-memory-tuning-opc-publisher.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
+    "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/industrial-iot/tutorial-visualize-data-time-series-insights.md",
+    "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
     "redirect_document_id": false
   },
   {
@@ -10798,6 +10868,13 @@
     "source_path_from_root": "/articles/azure-health-insights/response-info.md",
     "redirect_url": "/azure/azure-health-insights/overview",
     "redirect_document_id": false
+  },
+  {
+    "source_path_from_root": "/articles/networking/disaster-recovery-dns-traffic-manager.md",
+    "redirect_url": "/azure/reliability/reliability-traffic-manager",
+    "redirect_document_id": false
   }
+
+
 ]
 }
```
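Every entry in the redirection file above has the same three fields: a `source_path_from_root` for the retired article, a `redirect_url`, and a `redirect_document_id` flag. As a quick sanity check (not part of this commit, and assuming the standard top-level `redirections` array), a short script can flag duplicate source paths, which would make redirect resolution ambiguous:

```python
import json

def find_duplicate_sources(redirection_json: str) -> list[str]:
    """Return source paths that appear more than once in a redirection file."""
    data = json.loads(redirection_json)
    seen, dupes = set(), []
    for entry in data["redirections"]:
        # Paths are compared case-insensitively, matching docs URL behavior.
        path = entry["source_path_from_root"].lower()
        if path in seen:
            dupes.append(path)
        seen.add(path)
    return dupes

# Two entries redirecting the same retired article would be flagged:
sample = json.dumps({"redirections": [
    {"source_path_from_root": "/articles/industrial-iot/index.yml",
     "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
     "redirect_document_id": False},
    {"source_path_from_root": "/articles/industrial-iot/index.yml",
     "redirect_url": "https://github.com/Azure/Industrial-IoT/blob/main/readme.md",
     "redirect_document_id": False},
]})
print(find_duplicate_sources(sample))  # ['/articles/industrial-iot/index.yml']
```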

articles/ai-services/document-intelligence/faq.yml

Lines changed: 8 additions & 0 deletions
```diff
@@ -520,6 +520,14 @@ sections:
 
           Yes. The labeling experience from Studio is open sourced in the [Toolkit repo](https://github.com/microsoft/Form-Recognizer-Toolkit)
 
+      - question: |
+          Why am I receiving a 'Form Recognizer Not Found' error when opening my custom project?
+        answer: |
+          Your Document Intelligence resource bound to this custom project might have been deleted or moved to another resource group. There are two ways to resolve this issue:
+
+          - Re-create the Document Intelligence resource under the same subscription/resource group with the same name.
+
+          - Re-create a custom project with the migrated Document Intelligence resource and specify the same storage account.
 
   - name: Containers
     questions:
```

articles/ai-services/openai/concepts/models.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -88,7 +88,7 @@ See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
 > Version `0314` of `gpt-4` and `gpt-4-32k` will be retired no earlier than July 5, 2024. Version `0613` of `gpt-4` and `gpt-4-32k` will be retired no earlier than September 30, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
 
 
-GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 versio 0125-preview completes tasks such as code generation more completely compared to gpt-4-1106-preview. Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to the gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages.
+GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 version 0125-preview completes tasks such as code generation more completely compared to gpt-4-1106-preview. Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to the gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages.
 
 > [!IMPORTANT]
 >
@@ -144,7 +144,7 @@ GPT-3.5 Turbo version 0301 is the first version of the model released. Version
 See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades, and [working with models](../how-to/working-with-models.md) to learn how to view and configure the model version settings of your GPT-3.5 Turbo deployments.
 
 > [!NOTE]
-> Version `0613` of `gpt-35-turbo` and `gpt-35-turbo-16k` will be retired on June 13, 2024. Version `0301` of `gpt-35-turbo` will be retired on July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
+> Version `0613` of `gpt-35-turbo` and `gpt-35-turbo-16k` will be retired no earlier than June 13, 2024. Version `0301` of `gpt-35-turbo` will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
 
 ### GPT-3.5-Turbo model availability
 
```

articles/ai-services/openai/how-to/latency.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -5,7 +5,7 @@ description: Learn about performance and latency with Azure OpenAI
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 11/21/2023
+ms.date: 02/07/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
@@ -58,17 +58,17 @@ Latency varies based on what model you're using. For an identical request, expec
 
 When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens that are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. For most models, generating the response is the slowest step in the process.
 
-At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
-o Set the `max_token` parameter on each call as small as possible.
-o Include stop sequences to prevent generating extra content.
-o Generate fewer responses: The best_of & n parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1.
+At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved by the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
+- Set the `max_token` parameter on each call as small as possible.
+- Include stop sequences to prevent generating extra content.
+- Generate fewer responses: The best_of & n parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1.
 
 In summary, reducing the number of tokens generated per request reduces the latency of each request.
 
 ### Streaming
 Setting `stream: true` in a request makes the service return tokens as soon as they're available, instead of waiting for the full sequence of tokens to be generated. It doesn't change the time to get all the tokens, but it reduces the time for first response. This approach provides a better user experience since end-users can read the response as it is generated.
 
-Streaming is also valuable with large calls that take a long time to process. Many clients and intermediary layers have timeouts on individual calls. Long generation calls might be canceled due to client-side time outs. By streaming the data back, you can ensure incremental data is received.
+Streaming is also valuable with large calls that take a long time to process. Many clients and intermediary layers have timeouts on individual calls. Long generation calls might be canceled due to client-side time outs. By streaming the data back, you can ensure incremental data is received.
 
 
```
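The token-reduction tips in the latency article above map directly onto request parameters. A minimal sketch under stated assumptions: the Chat Completions request shape is standard, but the budget, stop sequence, and prompt are illustrative, and no live endpoint is contacted:

```python
# Sketch of a low-latency completion request body, following the article's tips.
# Values (max_tokens=50, the stop sequence) are illustrative placeholders.
def build_low_latency_request(prompt: str) -> dict:
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 50,   # keep the reserved generation budget small
        "stop": ["\n\n"],   # stop sequence prevents generating extra content
        "n": 1,             # single response; best_of/n > 1 adds latency
        "stream": True,     # return tokens as they're generated
    }

params = build_low_latency_request("Summarize OPC Publisher in one sentence.")
print(params["max_tokens"], params["stream"])  # 50 True
```

This dict would be passed to whatever Azure OpenAI client you use; the point is only that the latency knobs the article names are all per-request parameters.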
articles/ai-services/openai/whats-new.md

Lines changed: 6 additions & 0 deletions
```diff
@@ -18,6 +18,12 @@ recommendations: false
 
 ## February 2024
 
+### GPT-4-0125-preview model available
+
+The `gpt-4` model version `0125-preview` is now available on Azure OpenAI Service in the East US, North Central US, and South Central US regions. Customers with deployments of `gpt-4` version `1106-preview` will be automatically upgraded to `0125-preview` in the coming weeks.
+
+For information on model regional availability and upgrades refer to the [models page](./concepts/models.md).
+
 ### Assistants API public preview
 
 Azure OpenAI now supports the API that powers OpenAI's GPTs. Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and advanced tools like code interpreter, and custom functions. To learn more, see:
```

articles/ai-services/speech-service/includes/how-to/professional-voice/create-consent/rest.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -11,7 +11,7 @@
 
 With the professional voice feature, it's required that every voice be created with explicit consent from the user. A recorded statement from the user is required acknowledging that the customer (Azure AI Speech resource owner) will create and use their voice.
 
-To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL (`Consents_Create`) or upload the audio file (`Consents_Post`). In this article, you add consent from a URL.
+To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL ([Consents_Create](/rest/api/speechapi/consents/create)) or upload the audio file ([Consents_Post](/rest/api/speechapi/consents/post)). In this article, you add consent from a URL.
 
 ## Consent statement
 
@@ -25,15 +25,15 @@ You can get the consent statement text for each locale from the text to speech G
 
 ## Add consent from a URL
 
-To add consent to a professional voice project from the URL of an audio file, use the `Consents_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To add consent to a professional voice project from the URL of an audio file, use the [Consents_Create](/rest/api/speechapi/consents/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `voiceTalentName` property. The voice talent name can't be changed later.
 - Set the required `companyName` property. The company name can't be changed later.
 - Set the required `audioUrl` property. The URL of the voice talent consent audio file. Use a URI with the [shared access signatures (SAS)](/azure/storage/common/storage-sas-overview) token.
 - Set the required `locale` property. This should be the locale of the consent. The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
 
-Make an HTTP PUT request using the URI as shown in the following `Consents_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Consents_Create](/rest/api/speechapi/consents/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaConsentId` with a consent ID of your choice. The case sensitive ID will be used in the consent's URI and can't be changed later.
@@ -65,7 +65,7 @@ You should receive a response body in the following format:
 }
 ```
 
-The response header contains the `Operation-Location` property. Use this URI to get details about the `Consents_Create` operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [Consents_Create](/rest/api/speechapi/consents/create) operation. Here's an example of the response header:
 
 ```HTTP 201
 Operation-Location: https://eastus.api.cognitive.microsoft.com/customvoice/operations/070f7986-ef17-41d0-ba2b-907f0f28e314?api-version=2023-12-01-preview
````
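The request-body instructions in the consent article above can be sketched as plain request construction. This is a hypothetical illustration, not the article's own sample: the `/customvoice/consents/{id}` path is inferred from the `Operation-Location` shown in the diff, the placeholders (`YourResourceKey`, `YourResourceRegion`, `JessicaConsentId`) come from the article, and the talent name, company, and SAS URL are fabricated:

```python
import json

# Build (but don't send) a Consents_Create PUT request as described above.
def build_consent_request(region: str, consent_id: str, key: str):
    # Assumed URI shape, based on the Operation-Location example in the diff.
    uri = (f"https://{region}.api.cognitive.microsoft.com/customvoice/"
           f"consents/{consent_id}?api-version=2023-12-01-preview")
    headers = {
        "Ocp-Apim-Subscription-Key": key,   # standard Azure AI services auth header
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "projectId": "ProjectId",
        "voiceTalentName": "Jessica Smith",  # can't be changed later
        "companyName": "Contoso",            # can't be changed later
        # Fabricated SAS URL for illustration only:
        "audioUrl": "https://contoso.blob.core.windows.net/voices/consent.wav?mySasToken",
        "locale": "en-US",                   # locale of the consent; fixed later
    })
    return uri, headers, body

uri, headers, body = build_consent_request("eastus", "JessicaConsentId", "YourResourceKey")
# An HTTP client would then send: PUT {uri} with these headers and this body.
print(uri)
```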

articles/ai-services/speech-service/includes/how-to/professional-voice/create-project/rest.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -15,12 +15,12 @@ Each project is specific to a country/region and language, and the gender of the
 
 ## Create a project
 
-To create a professional voice project, use the `Projects_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create a professional voice project, use the [Projects_Create](/rest/api/speechapi/projects/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `kind` property to `ProfessionalVoice`. The kind can't be changed later.
 - Optionally, set the `description` property for the project description. The project description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `Projects_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Projects_Create](/rest/api/speechapi/projects/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `ProjectId` with a project ID of your choice. The case sensitive ID must be unique within your Speech resource. The ID will be used in the project's URI and can't be changed later.
```
