Commit 9ac0ca7

Merge branch 'main' into release-rolling-upgrades-on-flex

2 parents: b1bb9e7 + d814194

160 files changed: +1992 additions, -1795 deletions

articles/active-directory-b2c/TOC.yml

Lines changed: 0 additions & 2 deletions

@@ -824,8 +824,6 @@
     href: user-flow-versions-legacy.md
 - name: Resources
   items:
-  - name: Azure Roadmap
-    href: https://azure.microsoft.com/updates/?status=nowavailable,inpreview,indevelopment&category=identity,security&query=b2c
   - name: Frequently asked questions
     href: ./faq.yml
     displayName: FAQ

articles/ai-services/openai/assistants-reference.md

Lines changed: 4 additions & 4 deletions

@@ -5,7 +5,7 @@ description: Learn how to use Azure OpenAI's Python & REST API with Assistants.
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 05/22/2024
+ms.date: 06/13/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false

@@ -35,7 +35,7 @@ Create an assistant with a model and instructions.
 | name | string or null | Optional | The name of the assistant. The maximum length is 256 characters.|
 | description| string or null | Optional | The description of the assistant. The maximum length is 512 characters.|
 | instructions | string or null | Optional | The system instructions that the assistant uses. The maximum length is 256,000 characters.|
-| tools | array | Optional | Defaults to []. A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can currently be of types `code_interpreter`, or `function`.|
+| tools | array | Optional | Defaults to []. A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can currently be of types `code_interpreter`, or `function`. A `function` description can be a maximum of 1,024 characters. |
 | file_ids | array | Optional | Defaults to []. A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.|
 | metadata | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
 | temperature | number or null | Optional | Defaults to 1. Determines what sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. |

@@ -360,7 +360,7 @@ Modifies an assistant.
 | `name` | string or null | Optional | The name of the assistant. The maximum length is 256 characters. |
 | `description` | string or null | Optional | The description of the assistant. The maximum length is 512 characters. |
 | `instructions` | string or null | Optional | The system instructions that the assistant uses. The maximum length is 32768 characters. |
-| `tools` | array | Optional | Defaults to []. A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function. |
+| `tools` | array | Optional | Defaults to []. A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function. A `function` description can be a maximum of 1,024 characters. |
 | `file_ids` | array | Optional | Defaults to []. A list of File IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached to the list but does not show up in the list, it will be deleted from the assistant. |
 | `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |

@@ -517,7 +517,7 @@ Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopen
 | `description` | string or null | The description of the assistant. The maximum length is 512 characters.|
 | `model` | string | Name of the model deployment name to use.|
 | `instructions` | string or null | The system instructions that the assistant uses. The maximum length is 32768 characters.|
-| `tools` | array | A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function.|
+| `tools` | array | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function. A `function` description can be a maximum of 1,024 characters.|
 | `file_ids` | array | A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.|
 | `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
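Several of the limits documented in this reference (128 tools, 1,024-character `function` descriptions, 256-character names, and so on) can be checked client-side before a create or modify request is sent. A minimal sketch in Python; the helper is hypothetical, not part of any Azure OpenAI SDK, and the limits are taken from the tables in this diff:

```python
# Hypothetical client-side validator for the create-assistant limits
# documented above: 256-char name, 512-char description, 128 tools max,
# 1,024-char function descriptions, 20 file_ids, 16 metadata pairs.

def validate_assistant_payload(payload: dict) -> list[str]:
    """Return a list of limit violations; an empty list means the payload passes."""
    errors = []
    if len(payload.get("name") or "") > 256:
        errors.append("name exceeds 256 characters")
    if len(payload.get("description") or "") > 512:
        errors.append("description exceeds 512 characters")
    tools = payload.get("tools", [])
    if len(tools) > 128:
        errors.append("more than 128 tools")
    for tool in tools:
        # Only `function` tools carry a description; code_interpreter does not.
        desc = tool.get("function", {}).get("description", "")
        if len(desc) > 1024:
            errors.append("function description exceeds 1,024 characters")
    if len(payload.get("file_ids", [])) > 20:
        errors.append("more than 20 file_ids")
    if len(payload.get("metadata", {})) > 16:
        errors.append("more than 16 metadata pairs")
    return errors
```

Running the validator before calling the API turns a server-side 400 into an immediate local error message.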

articles/ai-services/openai/quotas-limits.md

Lines changed: 3 additions & 1 deletion

@@ -27,6 +27,7 @@ The following sections provide you with a quick guide to the default quotas and
 | OpenAI resources per region per Azure subscription | 30 |
 | Default DALL-E 2 quota limits | 2 concurrent requests |
 | Default DALL-E 3 quota limits| 2 capacity units (6 requests per minute)|
+| Default Whisper quota limits | 3 requests per minute |
 | Maximum prompt tokens per request | Varies per model. For more information, see [Azure OpenAI Service models](./concepts/models.md)|
 | Max fine-tuned model deployments | 5 |
 | Total number of training jobs per resource | 100 |

@@ -48,6 +49,7 @@ The following sections provide you with a quick guide to the default quotas and
 | GPT-4o max images per request (# of images in the messages array/conversation history) | 10 |
 | GPT-4 `vision-preview` & GPT-4 `turbo-2024-04-09` default max tokens | 16 <br><br> Increase the `max_tokens` parameter value to avoid truncated responses. GPT-4o max tokens defaults to 4096. |

+
 ## Regional quota limits

 [!INCLUDE [Quota](./includes/model-matrix/quota.md)]

@@ -99,7 +101,7 @@ If your Azure subscription is linked to certain [offer types](https://azure.micr
 |Azure for Students, Free Trials | 1 K (all models)|
 | Monthly credit card based accounts <sup>1</sup> | GPT 3.5 Turbo Series: 30 K <br> GPT-4 series: 8 K <br> |

-<sup>1</sup>This currently applies to [offer type 0003P](https://azure.microsoft.com/support/legal/offer-details/)
+<sup>1</sup> This currently applies to [offer type 0003P](https://azure.microsoft.com/support/legal/offer-details/)

 In the Azure portal you can view what offer type is associated with your subscription by navigating to your subscription and checking the subscriptions overview pane. Offer type corresponds to the plan field in the subscription overview.
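Because the GPT-4 `vision-preview` and `turbo-2024-04-09` default of 16 tokens truncates most replies, it's worth setting `max_tokens` explicitly on every request. A standard-library sketch of building such a request; the endpoint, deployment name, key, and API version are placeholder assumptions, not real values:

```python
# Sketch: build (but do not send) an Azure OpenAI chat-completions request
# with an explicit max_tokens, to avoid the 16-token default noted above.
# Endpoint, deployment, API key, and api-version are placeholders.
import json
import urllib.request


def build_chat_request(endpoint: str, deployment: str, api_key: str,
                       messages: list[dict], max_tokens: int = 4096):
    """Return a urllib Request carrying messages and an explicit max_tokens."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           "/chat/completions?api-version=2024-02-01")
    body = json.dumps({"messages": messages, "max_tokens": max_tokens}).encode()
    return urllib.request.Request(
        url, data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"})


req = build_chat_request("https://example.openai.azure.com", "my-gpt4-deployment",
                         "<key>", [{"role": "user", "content": "Hi"}])
```

Sending the request (for example with `urllib.request.urlopen`) is left out, since it needs a live resource and key.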

articles/ai-services/openai/whats-new.md

Lines changed: 5 additions & 1 deletion

@@ -10,7 +10,7 @@ ms.custom:
   - ignite-2023
   - references_regions
 ms.topic: whats-new
-ms.date: 06/11/2024
+ms.date: 06/13/2024
 recommendations: false
 ---

@@ -20,6 +20,10 @@ This article provides a summary of the latest releases and major documentation u

 ## June 2024

+### Token based billing for fine-tuning
+
+* Azure OpenAI fine-tuning billing is now based on the number of tokens in your training file – instead of the total elapsed training time. This can result in a significant cost reduction for some training runs, and makes estimating fine-tuning costs much easier. To learn more, you can consult the [official announcement](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/pricing-update-token-based-billing-for-fine-tuning-training/ba-p/4164465).
+
 ### GPT-4o released in new regions

 * GPT-4o is now also available in:
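Under token-based billing, a rough fine-tuning cost estimate follows directly from the training-file token count. A sketch, assuming cost scales as tokens × epochs × per-token rate; the rate and epoch count below are placeholders, not real prices (check the Azure OpenAI pricing page):

```python
# Back-of-the-envelope fine-tuning cost estimate under token-based billing.
# Assumption: billed tokens = training-file tokens x epochs, priced per 1K
# tokens. The rate 0.008 and the epoch count are placeholders, not real prices.

def estimate_finetune_cost(training_tokens: int, epochs: int,
                           price_per_1k_tokens: float) -> float:
    """Estimated cost in currency units for one fine-tuning run."""
    return training_tokens * epochs * price_per_1k_tokens / 1000


# Example: a 500K-token training file, 3 epochs, $0.008 per 1K tokens.
cost = estimate_finetune_cost(500_000, 3, 0.008)  # 12.0
```

The point of the billing change is exactly this: the estimate no longer depends on how long the training job happens to run.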

articles/ai-services/speech-service/text-to-speech-avatar/content-credentials.md

Lines changed: 48 additions & 0 deletions

@@ -0,0 +1,48 @@
+---
+title: Content Credentials in Azure Text to Speech Avatar
+titleSuffix: Azure Text to Speech Avatar
+description: Learn about the content credentials feature, which lets you verify that a video was generated by the Azure text to speech avatar system.
+author: sally-baolian
+ms.author: v-baolianzou
+ms.service: azure-ai-speech
+ms.topic: conceptual
+ms.date: 6/11/2024
+manager: nitinme
+---
+
+# Content credentials
+
+The high-quality models in the Azure text to speech avatar feature generate realistic avatar videos from text input. To improve the transparency of the generated content, the Azure text to speech avatar provides content credentials, a tamper-evident way to disclose the origin and history of the content. Content credentials are based on an open technical specification from the [Coalition for Content Provenance and Authenticity (C2PA)](https://www.c2pa.org), a Joint Development Foundation project.
+
+## What are content credentials?
+
+Content credentials in the Azure text to speech avatar provide customers with information about the origin of an avatar video. This information is represented by a manifest attached to the video. The manifest is cryptographically signed by a certificate that traces back to Azure text to speech avatar.
+
+The manifest contains several key pieces of information:
+
+| Field name | Field content |
+| --- | --- |
+| `"generator"` | This field has a value of `"Microsoft Azure Text To Speech Avatar Service"` for all applicable videos, attesting to the AI-generated nature of the video. |
+| `"when"` | The timestamp of when the content credentials were created. |
+
+Content credentials in the Azure text to speech avatar can help people understand when video content is generated by the Azure text to speech avatar system. For more information on how to responsibly build solutions with text to speech avatar models, visit the [Text to speech transparency note](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context).
+
+## Limitations
+
+The content credentials are only supported in video files generated by batch synthesis of text to speech avatar, and only `mp4` file format is supported.
+
+## How do I leverage content credentials in my solution today?
+
+You may leverage content credentials by:
+
+- Ensuring that your Azure text to speech avatar generated video files contain content credentials
+
+No additional set-up is necessary. Content credentials are automatically applied to all applicable videos generated by the Azure text to speech avatar.
+
+## Verifying that a video file has content credentials
+
+As for now, self-serve verification of content credentials for text to speech avatar video isn't yet available. You can contact [[email protected]](mailto:[email protected]) through email for verification of content credentials of Azure text to speech avatar generated videos.
+
+## Next steps
+
+* [Use batch synthesis for text to speech avatar](./batch-synthesis-avatar.md)
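Once a manifest has been extracted from a video, the two documented fields could be checked as in the sketch below. This only illustrates the field shapes described in the table above; real verification requires validating the C2PA cryptographic signature, and the helper here is hypothetical:

```python
# Hypothetical check of the two manifest fields documented in the new
# content-credentials article. This does NOT perform C2PA signature
# validation; it only inspects the claimed field values.

EXPECTED_GENERATOR = "Microsoft Azure Text To Speech Avatar Service"


def is_avatar_generated(manifest: dict) -> bool:
    """Return True if the manifest claims the avatar service as its generator."""
    return manifest.get("generator") == EXPECTED_GENERATOR


# Example manifest shaped like the fields in the table above.
manifest = {"generator": EXPECTED_GENERATOR, "when": "2024-06-11T00:00:00Z"}
```

A claimed `generator` value alone proves nothing about provenance; only the signed manifest does, which is why the article points to the verification contact.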

articles/ai-services/speech-service/toc.yml

Lines changed: 2 additions & 0 deletions

@@ -200,6 +200,8 @@ items:
   - name: What is text to speech avatar?
     href: text-to-speech-avatar/what-is-text-to-speech-avatar.md
     displayName: avatar
+  - name: Content credentials
+    href: text-to-speech-avatar/content-credentials.md
   - name: How to synthesize text to speech avatar
     items:
       - name: Real-time synthesis

articles/ai-studio/reference/reference-model-inference-api.md

Lines changed: 46 additions & 3 deletions

@@ -32,7 +32,7 @@ While foundational models excel in specific domains, they lack a uniform set of
 > * Use smaller models that can run faster on specific tasks.
 > * Compose multiple models to develop intelligent experiences.

-Having a uniform way to consume foundational models allow developers to realize all those benefits without changing a single line of code on their applications.
+Having a uniform way to consume foundational models allow developers to realize all those benefits without sacrificing portability or changing the underlying code.

 ## Availability

@@ -43,8 +43,8 @@ Models deployed to [serverless API endpoints](../how-to/deploy-models-serverless
 > [!div class="checklist"]
 > * [Cohere Embed V3](../how-to/deploy-models-cohere-embed.md) family of models
 > * [Cohere Command R](../how-to/deploy-models-cohere-command.md) family of models
-> * [Meta Llama 2](../how-to/deploy-models-llama.md) family of models
-> * [Meta Llama 3](../how-to/deploy-models-llama.md) family of models
+> * [Meta Llama 2 chat](../how-to/deploy-models-llama.md) family of models
+> * [Meta Llama 3 instruct](../how-to/deploy-models-llama.md) family of models
 > * [Mistral-Small](../how-to/deploy-models-mistral.md)
 > * [Mistral-Large](../how-to/deploy-models-mistral.md)
 > * [Phi-3](../how-to/deploy-models-phi-3.md) family of models

@@ -154,6 +154,49 @@ __Response__
 > [!TIP]
 > You can inspect the property `details.loc` to understand the location of the offending parameter and `details.input` to see the value that was passed in the request.

+## Content safety
+
+The Azure AI model inference API supports [Azure AI Content Safety](../concepts/content-filtering.md). When using deployments with Azure AI Content Safety on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+
+The following example shows the response for a chat completion request that has triggered content safety.
+
+__Request__
+
+```HTTP/1.1
+POST /chat/completions?api-version=2024-04-01-preview
+Authorization: Bearer <bearer-token>
+Content-Type: application/json
+```
+
+```JSON
+{
+    "messages": [
+        {
+            "role": "system",
+            "content": "You are a helpful assistant"
+        },
+        {
+            "role": "user",
+            "content": "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
+        }
+    ],
+    "temperature": 0,
+    "top_p": 1
+}
+```
+
+__Response__
+
+```JSON
+{
+    "status": 400,
+    "code": "content_filter",
+    "message": "The response was filtered",
+    "param": "messages",
+    "type": null
+}
+```
+
 ## Getting started

 The Azure AI Model Inference API is currently supported in models deployed as [Serverless API endpoints](../how-to/deploy-models-serverless.md). Deploy any of the [supported models](#availability) to a new [Serverless API endpoints](../how-to/deploy-models-serverless.md) to get started. Then you can consume the API in the following ways:
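A client can distinguish the content-safety error documented in this diff from other failures by its `code` field. A hedged sketch; the helper is hypothetical (not part of any Azure SDK), and the payload shape follows the example response above:

```python
# Sketch: classify an Azure AI model inference API chat-completions
# response, treating the documented `content_filter` error specially.
# handle_response is a hypothetical helper, not an SDK function.

def handle_response(status: int, body: dict) -> str:
    """Return 'filtered', 'ok', or 'error' for a chat-completions response."""
    if status == 400 and body.get("code") == "content_filter":
        # Azure AI Content Safety filtered the prompt or the completion.
        return "filtered"
    if status == 200:
        return "ok"
    return "error"


# Error body shaped like the example response in the diff above.
filtered = {"status": 400, "code": "content_filter",
            "message": "The response was filtered",
            "param": "messages", "type": None}
```

An application might retry a `filtered` result with a rephrased prompt, while treating other non-200 statuses as genuine failures.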
