Commit 78f98a2

Author: Michael Bender (committed)
Merge: 2 parents c1287cd + eaffbd6

File tree

163 files changed (+4028 −569 lines changed)


articles/ai-services/openai/how-to/content-filters.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -6,7 +6,7 @@ description: Learn how to use content filters (preview) with Azure OpenAI Servic
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 03/29/2024
+ms.date: 04/16/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
@@ -15,7 +15,7 @@ recommendations: false
 # How to configure content filters with Azure OpenAI Service
 
 > [!NOTE]
-> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
+> All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR).
 
 The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
```
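The default filtering behavior the paragraph above describes — content detected at severity medium or high is filtered, while low or safe passes — amounts to a simple threshold comparison over the ordered severity levels. The following is an illustrative sketch with hypothetical names, not the service's actual code:

```python
# Illustrative sketch of the default content filter decision; names are
# hypothetical, only the severity order and default threshold come from the doc.
SEVERITY_ORDER = ["safe", "low", "medium", "high"]

def is_filtered(severity: str, threshold: str = "medium") -> bool:
    """Return True if content at the given severity is blocked under the threshold."""
    return SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(threshold)
```

Lowering the threshold to `"low"` models the stricter configurations that all customers can apply; turning filters partially or fully off requires approval, as the note above states.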

articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-endpoint.md

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@

---
title: Deploy your custom text to speech avatar model as an endpoint - Speech service
titleSuffix: Azure AI services
description: Learn about how to deploy your custom text to speech avatar model as an endpoint.
author: sally-baolian
manager: nitinme
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 4/15/2024
ms.author: v-baolianzou
---

# Deploy your custom text to speech avatar model as an endpoint
You must deploy the custom avatar to an endpoint before you can use it. Once your custom text to speech avatar model is successfully trained through our manual process, we notify you. Then you can deploy it to a custom avatar endpoint. You can create up to 10 custom avatar endpoints for each standard (S0) Speech resource.

After you deploy your custom avatar, it's available to use in Speech Studio or through the API:

- The avatar appears in the avatar list of text to speech avatar on [Speech Studio](https://speech.microsoft.com/portal/talkingavatar).
- The avatar appears in the avatar list of live chat avatar on [Speech Studio](https://speech.microsoft.com/portal/livechat).
- You can call the avatar from the API by specifying the avatar model name.
## Add a deployment endpoint

To create a custom avatar endpoint, follow these steps:

1. Sign in to [Speech Studio](https://speech.microsoft.com/portal).
1. Navigate to **Custom Avatar** > Your project name > **Train model**.
1. All available models are listed on the **Train model** page. Select a model link to view more information, such as the created date and a preview image of the custom avatar.
1. Select the model that you want to deploy, then select the **Deploy model** button above the list.
1. Confirm the deployment to create your endpoint.

Once your model is successfully deployed as an endpoint, you can select the endpoint link on the **Deploy model** page. There, you find a link to the text to speech avatar portal on Speech Studio, where you can try creating videos with your custom avatar from text input.
35+
## Remove a deployment endpoint

To remove a deployment endpoint, follow these steps:

1. Sign in to [Speech Studio](https://speech.microsoft.com/portal).
1. Navigate to **Custom Avatar** > Your project name > **Train model**.
1. All available models are listed on the **Train model** page. Select a model link to view more information, such as the created date and a preview image of the custom avatar.
1. Select a model on the **Train model** page. If its status is "Succeeded", the model is currently hosted. Select the **Delete** button and confirm the deletion to remove the hosting.
## Use your custom neural voice

If you're also creating a custom neural voice for the actor, the avatar can be highly realistic. For more information, see [What is custom text to speech avatar](./what-is-custom-text-to-speech-avatar.md).

[Custom neural voice](../custom-neural-voice.md) and [custom text to speech avatar](what-is-custom-text-to-speech-avatar.md) are separate features. You can use them independently or together.

If you've built a custom neural voice (CNV) and would like to use it together with the custom avatar, pay attention to the following points:

- Ensure that the CNV endpoint is created in the same Speech resource as the custom avatar endpoint. You can see the CNV voice option in the voices list of the [avatar content generation page](https://speech.microsoft.com/portal/talkingavatar) and [live chat voice settings](https://speech.microsoft.com/portal/livechat).
- If you're using the batch synthesis for avatar API, add the "customVoices" property to associate the deployment ID of the CNV model with the voice name in the request. For more information, see [Text to speech properties](batch-synthesis-avatar-properties.md#text-to-speech-properties).
- If you're using the real-time synthesis for avatar API, refer to our sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar) to set the custom neural voice.
- If your custom neural voice endpoint is in a different Speech resource from the custom avatar endpoint, refer to [Train your professional voice model](../professional-voice-train-voice.md#copy-your-voice-model-to-another-project) to copy the CNV model to the same Speech resource as the custom avatar endpoint.
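The "customVoices" association described above maps a voice name to the deployment ID of the CNV model inside the batch synthesis request body. The sketch below illustrates only that shape; the helper name, the "inputs" field, and the overall body layout are assumptions for illustration, not the API's documented schema:

```python
# Illustrative sketch: associating a CNV deployment ID with a voice name via
# the "customVoices" property (property name from the doc). Everything else
# here (function name, "inputs" field) is a hypothetical simplification.
import json

def build_batch_request(voice_name: str, deployment_id: str, text: str) -> str:
    body = {
        # Map the voice name to the deployment ID of the CNV model.
        "customVoices": {voice_name: deployment_id},
        "inputs": [{"text": text}],
    }
    return json.dumps(body)
```

See [Text to speech properties](batch-synthesis-avatar-properties.md#text-to-speech-properties) for the authoritative request schema.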
## Next steps

- Learn more about custom text to speech avatar in the [overview](what-is-custom-text-to-speech-avatar.md).

articles/ai-services/speech-service/toc.yml

Lines changed: 3 additions & 0 deletions

```diff
@@ -224,6 +224,9 @@ items:
   - name: How to record video samples
     href: text-to-speech-avatar/custom-avatar-record-video-samples.md
     displayName: avatar
+  - name: Deploy your custom text to speech avatar model as an endpoint
+    href: text-to-speech-avatar/custom-avatar-endpoint.md
+    displayName: avatar
   - name: Audio Content Creation
     href: how-to-audio-content-creation.md
     displayName: acc
```

articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -29,7 +29,7 @@ The following table provides an index of tools in prompt flow.
 | [Index Lookup](./index-lookup-tool.md) | Search a vector-based query for relevant results using one or more text queries. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
 | [Vector Index Lookup](./vector-index-lookup-tool.md)<sup>1</sup> | Search text or a vector-based query from a vector index. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
 | [Faiss Index Lookup](./faiss-index-lookup-tool.md)<sup>1</sup> | Search a vector-based query from the Faiss index file. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector DB Lookup](./vector-db-lookup-tool.md)<sup>1</sup> For | Search a vector-based query from an existing vector database. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector DB Lookup](./vector-db-lookup-tool.md)<sup>1</sup> | Search a vector-based query from an existing vector database. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
 
 <sup>1</sup> The Index Lookup tool replaces the three deprecated legacy index tools: Vector Index Lookup, Vector DB Lookup, and Faiss Index Lookup. If you have a flow that contains one of those tools, follow the [migration steps](./index-lookup-tool.md#migrate-from-legacy-tools-to-the-index-lookup-tool) to upgrade your flow.
```

articles/app-service/overview-vnet-integration.md

Lines changed: 7 additions & 7 deletions

```diff
@@ -28,7 +28,7 @@ The virtual network integration feature:
 
 * Requires a [supported Basic or Standard](./overview-vnet-integration.md#limitations), Premium, Premium v2, Premium v3, or Elastic Premium App Service pricing tier.
 * Supports TCP and UDP.
-* Works with App Service apps, function apps and Logic apps.
+* Works with App Service apps, function apps, and Logic apps.
 
 There are some things that virtual network integration doesn't support, like:
 
@@ -72,17 +72,17 @@ When you scale up/down in instance size, the amount of IP addresses used by the
 
 Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required. If the subnet already exists before integrating through the portal, you can use a `/28` subnet.
 
-With multi plan subnet join (MPSJ) you can join multiple App Service plans in to the same subnet. All App Service plans must be in the same subscription but the virtual network/subnet can be in a different subscription. Each instance from each App Service plan requires an IP address from the subnet and to use MPSJ a minimum size of `/26` subnet is required. If you plan to join many and/or large scale plans, you should plan for larger subnet ranges.
+With multi plan subnet join (MPSJ), you can join multiple App Service plans in to the same subnet. All App Service plans must be in the same subscription but the virtual network/subnet can be in a different subscription. Each instance from each App Service plan requires an IP address from the subnet and to use MPSJ a minimum size of `/26` subnet is required. If you plan to join many and/or large scale plans, you should plan for larger subnet ranges.
 
 >[!NOTE]
 > Multi plan subnet join is currently in public preview. During preview the following known limitations should be observed:
 >
-> * The minimum requirement for subnet size of `/26` is currently not enforced, but will be enforced at GA.
+> * The minimum requirement for subnet size of `/26` is currently not enforced, but will be enforced at GA. If you have joined multiple plans to a smaller subnet during preview they will still work, but you cannot connect additional plans and if you disconnect you will not be able to connect again.
 > * There is currently no validation if the subnet has available IPs, so you might be able to join N+1 plan, but the instances will not get an IP. You can view available IPs in the Virtual network integration page in Azure portal in apps that are already connected to the subnet.
 
 ### Windows Containers specific limits
 
-Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have for example 10 Windows Container App Service plan instances with 4 apps running, you will need 50 IP addresses and additional addresses to support horizontal (in/out) scale.
+Windows Containers uses an extra IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have, for example, 10 Windows Container App Service plan instances with four apps running, you need 50 IP addresses and extra addresses to support horizontal (in/out) scale.
 
 Sample calculation:
 
@@ -96,13 +96,13 @@ For 10 instances:
 
 Since you have 1 App Service plan, 1 x 50 = 50 IP addresses.
 
-You are in addition limited by the number of cores available in the worker SKU used. Each core adds three "networking units". The worker itself uses one unit and each virtual network connection uses one unit. The remaining units can be used for apps.
+You are in addition limited by the number of cores available in the worker tier used. Each core adds three networking units. The worker itself uses one unit and each virtual network connection uses one unit. The remaining units can be used for apps.
 
 Sample calculation:
 
-App Service plan instance with 4 apps running and using virtual network integration. The Apps are connected to two different subnets (virtual network connections). This will require 7 networking units (1 worker + 2 connections + 4 apps). The minimum size for running this configuration would be I2v2 (4 cores x 3 units = 12 units).
+App Service plan instance with four apps running and using virtual network integration. The Apps are connected to two different subnets (virtual network connections). This configuration requires seven networking units (1 worker + 2 connections + 4 apps). The minimum size for running this configuration would be I2v2 (four cores x 3 units = 12 units).
 
-With I1v2 you can run a maximum of 4 apps using the same (1) connection or 3 apps using 2 connections.
+With I1v2, you can run a maximum of four apps using the same (1) connection or 3 apps using 2 connections.
 
 ## Permissions
```
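The two sample calculations in the updated text above (IP addresses per Windows Container plan, and networking units per worker) can be sketched directly from the formulas the doc states; the function names are illustrative, not part of any API:

```python
# Illustrative sketch of the subnet-sizing arithmetic described in the diff above.

def windows_container_ips(instances: int, apps: int) -> int:
    # Each instance uses one IP itself plus one extra IP per app.
    return instances * (apps + 1)

def networking_units_needed(connections: int, apps: int) -> int:
    # One unit for the worker, one per virtual network connection, one per app.
    return 1 + connections + apps

def networking_units_available(cores: int) -> int:
    # Each core adds three networking units.
    return cores * 3
```

For the doc's examples: 10 instances with four apps each need 50 IPs, and four apps over two connections need seven units, which fits in an I2v2 worker's 12 units.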

articles/azure-app-configuration/howto-geo-replication.md

Lines changed: 23 additions & 0 deletions

````diff
@@ -179,6 +179,8 @@ You can specify one or more endpoints of a geo-replication-enabled App Configura
 
 The automatically discovered replicas will be selected and used randomly. If you have a preference for specific replicas, you can explicitly specify their endpoints. This feature is enabled by default, but you can refer to the following sample code to disable it.
 
+### [.NET](#tab/Dotnet)
+
 Edit the call to the `AddAzureAppConfiguration` method, which is often found in the `program.cs` file of your application.
 
 ```csharp
@@ -197,6 +199,27 @@ configurationBuilder.AddAzureAppConfiguration(options =>
 > - `Microsoft.Azure.AppConfiguration.AspNetCore`
 > - `Microsoft.Azure.AppConfiguration.Functions.Worker`
 
+### [Kubernetes](#tab/kubernetes)
+
+Update the `AzureAppConfigurationProvider` resource of your Azure App Configuration Kubernetes Provider. Add a `replicaDiscoveryEnabled` property and set it to `false`.
+
+``` yaml
+apiVersion: azconfig.io/v1
+kind: AzureAppConfigurationProvider
+metadata:
+  name: appconfigurationprovider-sample
+spec:
+  endpoint: <your-app-configuration-store-endpoint>
+  replicaDiscoveryEnabled: false
+  target:
+    configMapName: configmap-created-by-appconfig-provider
+```
+
+> [!NOTE]
+> The automatic replica discovery and failover support is available if you use version **1.3.0** or later of [Azure App Configuration Kubernetes Provider](./quickstart-azure-kubernetes-service.md).
+
+---
+
 ## Next steps
 
 > [!div class="nextstepaction"]
````
