Commit 8452662
reverting to upstream/main
1 parent: 806874e

File tree

550 files changed: +11585 −4896 lines changed

.openpublishing.publish.config.json (1 addition, 0 deletions)

```diff
@@ -1248,6 +1248,7 @@
 "articles/digital-twins/.openpublishing.redirection.digital-twins.json",
 "articles/event-grid/.openpublishing.redirection.event-grid.json",
 "articles/event-hubs/.openpublishing.redirection.event-hubs.json",
+"articles/governance/policy/.openpublishing.redirection.policy.json",
 "articles/hdinsight/.openpublishing.redirection.hdinsight.json",
 "articles/hdinsight-aks/.openpublishing.redirection.hdinsight-aks.json",
 "articles/healthcare-apis/.openpublishing.redirection.healthcare-apis.json",
```

.openpublishing.redirection.azure-resource-manager.json (1 addition, 6 deletions)

```diff
@@ -1919,11 +1919,6 @@
 "source_path_from_root": "/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-get-managed-group-resize-vm.md",
 "redirect_url": "/azure/azure-resource-manager/managed-applications/overview",
 "redirect_document_id": false
-},
-{
-"source_path_from_root": "/articles/governance/policy/tutorials/policy-as-code-github.md",
-"redirect_url": "/azure/governance/policy/concepts/policy-as-code",
-"redirect_document_id": false
-}
+}
 ]
 }
```

.openpublishing.redirection.json (69 additions, 779 deletions; large diff not rendered)

.openpublishing.redirection.virtual-desktop.json (65 additions, 0 deletions)

```diff
@@ -304,6 +304,71 @@
 "source_path_from_root": "/articles/virtual-desktop/whats-new-client-web.md",
 "redirect_url": "/azure/virtual-desktop/users/remote-desktop-clients-overview",
 "redirect_document_id": false
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/index.yml",
+"redirect_url": "/azure/virtual-desktop/index",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/overview.md",
+"redirect_url": "/azure/virtual-desktop/overview",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/architecture-recs.md",
+"redirect_url": "/azure/virtual-desktop/organization-internal-external-commercial-purposes-recommendations",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/custom-apps.md",
+"redirect_url": "/azure/virtual-desktop/publish-applications",
+"redirect_document_id": false
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/identities.md",
+"redirect_url": "/azure/virtual-desktop/authentication",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/licensing.md",
+"redirect_url": "/azure/virtual-desktop/licensing",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/msix-app-attach.md",
+"redirect_url": "/azure/virtual-desktop/app-attach-overview",
+"redirect_document_id": false
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/per-user-access-pricing.md",
+"redirect_url": "/azure/virtual-desktop/enroll-per-user-access-pricing",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/security-guide.md",
+"redirect_url": "/azure/virtual-desktop/security-recommendations",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/security.md",
+"redirect_url": "/azure/virtual-desktop/security-recommendations",
+"redirect_document_id": false
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/total-costs.md",
+"redirect_url": "/azure/virtual-desktop/understand-estimate-costs",
+"redirect_document_id": true
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/streaming-costs.md",
+"redirect_url": "/azure/virtual-desktop/understand-estimate-costs",
+"redirect_document_id": false
+},
+{
+"source_path_from_root": "/articles/virtual-desktop/publish-applications.md",
+"redirect_url": "/azure/virtual-desktop/publish-applications-stream-remoteapp",
+"redirect_document_id": false
 }
 ]
 }
```
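The redirect entries added above all follow the same three-field shape. As an illustrative sketch (not part of the commit), a small validator can catch malformed entries before publishing; the field names come from the diff, while the top-level `redirections` key and the validation rules themselves are assumptions.

```python
# Hypothetical validator for .openpublishing.redirection.*.json entries.
# Field names are taken from the diff above; the rules are an assumption.
import json

REQUIRED_KEYS = {"source_path_from_root", "redirect_url", "redirect_document_id"}

def validate_redirections(raw: str) -> list:
    """Return a list of problems found in a redirection JSON document."""
    problems = []
    entries = json.loads(raw).get("redirections", [])
    for i, entry in enumerate(entries):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append("entry %d: missing %s" % (i, sorted(missing)))
        elif not isinstance(entry["redirect_document_id"], bool):
            problems.append("entry %d: redirect_document_id must be a boolean" % i)
    return problems

sample = '''{"redirections": [
  {"source_path_from_root": "/articles/virtual-desktop/remote-app-streaming/overview.md",
   "redirect_url": "/azure/virtual-desktop/overview",
   "redirect_document_id": true},
  {"source_path_from_root": "/articles/virtual-desktop/security-guide.md",
   "redirect_url": "/azure/virtual-desktop/security-recommendations"}
]}'''

# The second sample entry lacks redirect_document_id and is reported.
print(validate_redirections(sample))
```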

articles/ai-services/authentication.md (22 additions, 22 deletions)

````diff
@@ -216,11 +216,12 @@ Now that you have a custom subdomain associated with your resource, you're going
 New-AzADServicePrincipal -ApplicationId <APPLICATION_ID>
 ```

->[!NOTE]
+> [!NOTE]
 > If you register an application in the Azure portal, this step is completed for you.

 3. The last step is to [assign the "Cognitive Services User" role](/powershell/module/az.Resources/New-azRoleAssignment) to the service principal (scoped to the resource). By assigning a role, you're granting service principal access to this resource. You can grant the same service principal access to multiple resources in your subscription.
->[!NOTE]
+
+> [!NOTE]
 > The ObjectId of the service principal is used, not the ObjectId for the application.
 > The ACCOUNT_ID will be the Azure resource Id of the Azure AI services account you created. You can find Azure resource Id from "properties" of the resource in Azure portal.
@@ -239,32 +240,31 @@ In this sample, a password is used to authenticate the service principal. The to
 ```

 2. Get a token:
-> [!NOTE]
-> If you're using Azure Cloud Shell, the `SecureClientSecret` class isn't available.
-
-#### [PowerShell](#tab/powershell)
 ```powershell-interactive
-$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
-$secureSecretObject = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.SecureClientSecret" -ArgumentList $SecureStringPassword
-$clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, $secureSecretObject
-$token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
-$token
-```
+$tenantId = $context.Tenant.Id
+$clientId = $app.ApplicationId
+$clientSecret = "<YOUR_PASSWORD>"
+$resourceUrl = "https://cognitiveservices.azure.com/"

-#### [Azure Cloud Shell](#tab/azure-cloud-shell)
-```Azure Cloud Shell
-$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
-$clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, <YOUR_PASSWORD>
-$token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
-$token
-```
-
----
+$tokenEndpoint = "https://login.microsoftonline.com/$tenantId/oauth2/token"
+$body = @{
+grant_type = "client_credentials"
+client_id = $clientId
+client_secret = $clientSecret
+resource = $resourceUrl
+}
+
+$responseToken = Invoke-RestMethod -Uri $tokenEndpoint -Method Post -Body $body
+$accessToken = $responseToken.access_token
+```

+> [!NOTE]
+> Anytime you use passwords in a script, the most secure option is to use the PowerShell Secrets Management module and integrate with a solution such as Azure KeyVault.
+
 3. Call the Computer Vision API:
 ```powershell-interactive
 $url = $account.Endpoint+"vision/v1.0/models"
-$result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose
+$result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"="Bearer $accessToken"} -Verbose
 $result | ConvertTo-Json
 ```
```
````
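The change above swaps ADAL's `AcquireTokenAsync` for a raw OAuth 2.0 client_credentials POST to the Microsoft identity platform token endpoint. As an illustrative sketch (not part of the commit), the same request shape in Python looks like this; the tenant/client values are placeholders, and nothing is sent over the network here.

```python
# Sketch of the client_credentials token request the updated PowerShell
# snippet makes. <TENANT_ID>, <APPLICATION_ID>, <YOUR_PASSWORD> are
# placeholders, exactly as in the diff.
from urllib.parse import urlencode

tenant_id = "<TENANT_ID>"
token_endpoint = "https://login.microsoftonline.com/%s/oauth2/token" % tenant_id

body = {
    "grant_type": "client_credentials",
    "client_id": "<APPLICATION_ID>",
    "client_secret": "<YOUR_PASSWORD>",
    "resource": "https://cognitiveservices.azure.com/",
}

# This form body would be POSTed to token_endpoint; the JSON response carries
# the token in its "access_token" field, which later calls send as
# "Authorization: Bearer <access_token>".
encoded = urlencode(body)
print(token_endpoint)
print(encoded)
```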

articles/ai-services/computer-vision/toc.yml (2 additions, 6 deletions)

```diff
@@ -378,12 +378,8 @@ items:
   href: intro-to-spatial-analysis-public-preview.md
 - name: Responsible use of AI
   items:
-  - name: Transparency notes
-    items:
-    - name: Spatial Analysis use cases
-      href: /legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/ai-services/computer-vision/context/context
-    - name: Characteristics and limitations
-      href: /legal/cognitive-services/computer-vision/accuracy-and-limitations?context=/azure/ai-services/computer-vision/context/context
+  - name: Transparency note
+    href: /legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/ai-services/computer-vision/context/context
 - name: Integration and responsible use
   items:
   - name: Responsible use in AI deployment
```

articles/ai-services/language-service/index.yml (2 additions, 3 deletions)

```diff
@@ -7,14 +7,13 @@ brand: azure
 metadata:
   title: Azure AI Language documentation - Tutorials, API Reference | Microsoft Docs
   titleSuffix: Azure AI services
-  #services: cognitive-services
   description: Learn how to integrate AI that works on written text into your applications.
   author: aahill
   manager: nitinme
-  ms.service: speech-service
+  ms.service: azure-ai-language
   ms.custom: event-tier1-build-2022
   ms.topic: hub-page
-  ms.date: 11/02/2021
+  ms.date: 01/09/2024
   ms.author: aahi
 highlightedContent:
   # itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | whats-new
```

articles/ai-services/openai/concepts/content-filter.md (2 additions, 2 deletions)

```diff
@@ -649,9 +649,9 @@ Customers who have been approved for modified content filters can choose Asynchr

 Approval for Modified Content Filtering is required for access to Streaming – Asynchronous Modified Filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it via Azure OpenAI Studio please follow the instructions [here](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select “Asynchronous Modified Filter” in the Streaming section, as shown in the below screenshot.

-### Overview tbd
+### Overview

-| | Streaming - Default | Streaming - Asynchronous Modified Filter |
+| Category | Streaming - Default | Streaming - Asynchronous Modified Filter |
 |---|---|---|
 |Status |GA |Public Preview |
 | Access | Enabled by default, no action needed |Customers approved for Modified Content Filtering can configure directly via Azure OpenAI Studio (as part of a content filtering configuration; applied on deployment-level) |
```
Lines changed: 108 additions & 0 deletions (new file)

```markdown
---
title: GPT-4 Turbo with Vision concepts
titleSuffix: Azure OpenAI
description: Learn about vision chats enabled by GPT-4 Turbo with Vision.
author: PatrickFarley
ms.author: pafarley
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 01/02/2024
manager: nitinme
keywords:
---

# GPT-4 Turbo with Vision concepts

GPT-4 Turbo with Vision is a large multimodal model (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. It incorporates both natural language processing and visual understanding. This guide provides details on the capabilities and limitations of GPT-4 Turbo with Vision.

To try out GPT-4 Turbo with Vision, see the [quickstart](/azure/ai-services/openai/gpt-v-quickstart).

## Chats with vision

The GPT-4 Turbo with Vision model answers general questions about what's present in the images or videos you upload.

## Enhancements

Enhancements let you incorporate other Azure AI services (such as Azure AI Vision) to add new functionality to the chat-with-vision experience.

**Object grounding**: Azure AI Vision complements GPT-4 Turbo with Vision’s text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image.

:::image type="content" source="../media/concepts/gpt-v/object-grounding.png" alt-text="Screenshot of an image with object grounding applied. Objects have bounding boxes with labels.":::

:::image type="content" source="../media/concepts/gpt-v/object-grounding-response.png" alt-text="Screenshot of a chat response to an image prompt about an outfit. The response is an itemized list of clothing items seen in the image.":::

**Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text.

:::image type="content" source="../media/concepts/gpt-v/receipts.png" alt-text="Photo of several receipts.":::

:::image type="content" source="../media/concepts/gpt-v/ocr-response.png" alt-text="Screenshot of the JSON response of an OCR call.":::

**Video prompt**: The **video prompt** enhancement lets you use video clips as input for AI chat, enabling the model to generate summaries and answers about video content. It uses Azure AI Vision Video Retrieval to sample a set of frames from a video and create a transcript of the speech in the video.

In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in addition to your Azure OpenAI resource.

> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf]

## Special pricing information

> [!IMPORTANT]
> Pricing details are subject to change in the future.

GPT-4 Turbo with Vision accrues charges like other Azure OpenAI chat models. You pay a per-token rate for the prompts and completions, detailed on the [Pricing page](/pricing/details/cognitive-services/openai-service/). The base charges and additional features are outlined here:

Base Pricing for GPT-4 Turbo with Vision is:
- Input: $0.01 per 1000 tokens
- Output: $0.03 per 1000 tokens

See the [Tokens section of the overview](/azure/ai-services/openai/overview#tokens) for information on how text and images translate to tokens.

Additionally, if you use video prompt integration with the Video Retrieval add-on, it accrues other costs:
- Ingestion: $0.05 per minute of video
- Transactions: $0.25 per 1000 queries of the Video Retrieval index

Processing videos involves the use of extra tokens to identify key frames for analysis. The number of these additional tokens will be roughly equivalent to the sum of the tokens in the text input, plus 700 tokens.

### Example price calculation

> [!IMPORTANT]
> The following content is an example only, and prices are subject to change in the future.

For a typical use case, take a 3-minute video with a 100-token prompt input. The video has a transcript that's 100 tokens long, and when the service processes the prompt, it generates 100 tokens of output. The pricing for this transaction would be:

| Item | Detail | Total Cost |
|-----------------|-----------------|--------------|
| GPT-4 Turbo with Vision input tokens | 100 text tokens | $0.001 |
| Additional Cost to identify frames | 100 input tokens + 700 tokens + 1 Video Retrieval transaction | $0.00825 |
| Image Inputs and Transcript Input | 20 images (85 tokens each) + 100 transcript tokens | $0.018 |
| Output Tokens | 100 tokens (assumed) | $0.003 |
| **Total Cost** | | **$0.03025** |

Additionally, there's a one-time indexing cost of $0.15 to generate the Video Retrieval index for this 3-minute video. This index can be reused across any number of Video Retrieval and GPT-4 Turbo with Vision API calls.

## Limitations

This section describes the limitations of GPT-4 Turbo with Vision.

### Image support

- **Limitation on image enhancements per chat session**: Enhancements cannot be applied to multiple images within a single chat call.
- **Maximum input image size**: The maximum size for input images is restricted to 20 MB.
- **Object grounding in enhancement API**: When the enhancement API is used for object grounding, and the model detects duplicates of an object, it will generate one bounding box and label for all the duplicates instead of separate ones for each.
- **Low resolution accuracy**: When images are analyzed using the "low resolution" setting, it allows for faster responses and uses fewer input tokens for certain use cases. However, this could impact the accuracy of object and text recognition within the image.
- **Image chat restriction**: When you upload images in Azure OpenAI Studio or the API, there is a limit of 10 images per chat call.

### Video support

- **Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
- **Video file limits**: Both MP4 and MOV file types are supported. In Azure OpenAI Studio, videos must be less than 3 minutes long. When you use the API there is no such limitation.
- **Prompt limits**: Video prompts only contain one video and no images. In Azure OpenAI Studio, you can clear the session to try another video or images.
- **Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
- **Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.

## Next steps

- Get started using GPT-4 Turbo with Vision by following the [quickstart](/azure/ai-services/openai/gpt-v-quickstart).
- For a more in-depth look at the APIs, and to use video prompts in chat, follow the [how-to guide](../how-to/gpt-with-vision.md).
- See the [completions and embeddings API reference](../reference.md)
```
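The example price calculation in the new file above can be checked mechanically. As an illustrative sketch (not part of the commit), the following reproduces the table's arithmetic; the rates come from the pricing section of the file, while the helper function itself is hypothetical.

```python
# Reproduce the example price calculation from the new GPT-4 Turbo with
# Vision concepts page. Rates are the ones stated in that page's pricing
# section; the function is an illustrative helper, not a real API.
INPUT_RATE = 0.01 / 1000         # $ per input token
OUTPUT_RATE = 0.03 / 1000        # $ per output token
RETRIEVAL_TX_RATE = 0.25 / 1000  # $ per Video Retrieval query

def video_chat_cost(prompt_tokens, transcript_tokens, output_tokens,
                    frames=20, tokens_per_frame=85):
    prompt_cost = prompt_tokens * INPUT_RATE
    # Frame identification: prompt tokens + ~700 extra tokens,
    # plus one Video Retrieval transaction.
    frame_id_cost = (prompt_tokens + 700) * INPUT_RATE + RETRIEVAL_TX_RATE
    image_cost = (frames * tokens_per_frame + transcript_tokens) * INPUT_RATE
    output_cost = output_tokens * OUTPUT_RATE
    return round(prompt_cost + frame_id_cost + image_cost + output_cost, 5)

print(video_chat_cost(100, 100, 100))  # 0.03025, matching the table's total
```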

articles/ai-services/openai/concepts/provisioned-throughput.md (1 addition, 1 deletion)

```diff
@@ -52,6 +52,6 @@ We introduced a new deployment type called **ProvisionedManaged** which provides

 Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level meaning that it can be consumed by different resources within that subscription.

-Quota is specific to a (deployment type, mode, region) triplet and isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move the quota across deployment types, models, or regions but we can't guarantee that it will be possible.
+Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move the quota across deployment types, models, or regions but we can't guarantee that it will be possible.

 While we make every attempt to ensure that quota is always deployable, quota does not represent a guarantee that the underlying capacity is available for the customer to use. The service assigns capacity to the customer at deployment time and if capacity is unavailable the deployment will fail with an out of capacity error.
```
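The corrected wording above ("triplet" of deployment type, model, and region) can be illustrated with a tiny model. This sketch is not part of the commit; the quota numbers and names are made up, and it only shows why GPT-4 quota can't be spent on a different model or region.

```python
# Illustrative model of provisioned-throughput quota keyed by the
# (deployment type, model, region) triplet. All values are hypothetical.
quota = {
    ("ProvisionedManaged", "gpt-4", "eastus"): 300,
    ("ProvisionedManaged", "gpt-35-turbo", "eastus"): 600,
}

def can_deploy(deployment_type, model, region, requested):
    # Quota is only checked against the exact triplet; it is not
    # interchangeable across deployment types, models, or regions.
    available = quota.get((deployment_type, model, region), 0)
    return requested <= available

print(can_deploy("ProvisionedManaged", "gpt-4", "eastus", 100))        # True
print(can_deploy("ProvisionedManaged", "gpt-35-turbo", "westus", 100)) # False: wrong region
```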
