Commit d5240c5

Apply suggestions from code review
1 parent e3ba52b commit d5240c5

articles/ai-services/speech-service/how-to-bring-your-own-model.md

Lines changed: 7 additions & 7 deletions
@@ -1,6 +1,6 @@
 ---
-title: Bring Your Own Model (BYOM) with Voice live API (Preview)
-description: Learn how to integrate your own models with the Voice live API using Bring Your Own Model (BYOM) capabilities in Azure AI Speech Service.
+title: Bring Your Own Model (BYOM) with Voice Live API (Preview)
+description: Learn how to integrate your own models with the Voice Live API using Bring Your Own Model (BYOM) capabilities in Azure AI Speech Service.
 author: PatrickFarley
 ms.author: pafarley
 ms.date: 09/26/2025
@@ -11,21 +11,21 @@ ms.custom: ai-speech, voice-live, byom, preview
 
 # Bring Your Own Model (BYOM) with Voice Live API (Preview)
 
-The Voice live API provides Bring Your Own Model (BYOM) capabilities, allowing you to integrate your custom models into the voice interaction workflow. BYOM is useful for the following scenarios:
+The Voice Live API provides Bring Your Own Model (BYOM) capabilities, allowing you to integrate your custom models into the voice interaction workflow. BYOM is useful for the following scenarios:
 
 - **Fine-tuned models**: Use your custom Azure OpenAI or Azure Foundry models
 - **Provisioned throughput**: Use your PTU (Provisioned Throughput Units) deployments for consistent performance
 - **Content safety**: Apply customized content safety configurations with your LLM
 
 > [!IMPORTANT]
-> You can integrate any model that's deployed in the same Azure Foundry resource you're using to call the Voice live API.
+> You can integrate any model that's deployed in the same Azure Foundry resource you're using to call the Voice Live API.
 
 > [!TIP]
-> When you use your own model deployment with Voice live, we recommend you set its content filtering configuration to [Asynchronous filtering](/azure/ai-foundry/openai/concepts/content-streaming#asynchronous-filtering) to reduce latency. Content filtering settings can be configured in the [Azure AI Foundry portal](https://ai.azure.com/).
+> When you use your own model deployment with Voice Live, we recommend you set its content filtering configuration to [Asynchronous filtering](/azure/ai-foundry/openai/concepts/content-streaming#asynchronous-filtering) to reduce latency. Content filtering settings can be configured in the [Azure AI Foundry portal](https://ai.azure.com/).
 
 ## Authentication setup
 
-When using Microsoft Entra ID authentication with Voice live API, in `byom-azure-openai-chat-completion` mode specifically, you need to configure proper permissions for your Foundry resource. Since tokens may expire during long sessions, the system-assigned managed identity of the Foundry resource requires access to model deployments for the `byom-azure-openai-chat-completion` BYOM mode.
+When using Microsoft Entra ID authentication with Voice Live API, in `byom-azure-openai-chat-completion` mode specifically, you need to configure proper permissions for your Foundry resource. Since tokens may expire during long sessions, the system-assigned managed identity of the Foundry resource requires access to model deployments for the `byom-azure-openai-chat-completion` BYOM mode.
 
 Run the following Azure CLI commands to configure the necessary permissions:
 
@@ -47,7 +47,7 @@ az role assignment create --assignee-object-id ${identity_principal_id} --role "
 
 ## Choose BYOM integration mode
 
-The Voice live API supports two BYOM integration modes:
+The Voice Live API supports two BYOM integration modes:
 
 | Mode | Description | Example Models |
 | ------- | ------------------ | ------------- |
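
The role-assignment commands that follow "Run the following Azure CLI commands to configure the necessary permissions:" sit between the changed hunks, so only a fragment of the `az role assignment create` call is visible in the last hunk header. As a rough sketch of what that kind of permission setup can look like (the resource names and the "Cognitive Services OpenAI User" role below are assumptions for illustration, not the article's actual values):

```bash
# Minimal sketch, assuming hypothetical resource names; the article's actual
# commands are not shown in this diff.
resource_group="my-resource-group"        # hypothetical
foundry_resource="my-foundry-resource"    # hypothetical

# Look up the principal ID of the Foundry resource's system-assigned managed identity.
identity_principal_id=$(az cognitiveservices account show \
  --name "$foundry_resource" \
  --resource-group "$resource_group" \
  --query "identity.principalId" --output tsv)

# Resource ID of the same Foundry resource, used as the role-assignment scope.
resource_id=$(az cognitiveservices account show \
  --name "$foundry_resource" \
  --resource-group "$resource_group" \
  --query "id" --output tsv)

# Grant the managed identity access to the model deployments on that resource.
# "Cognitive Services OpenAI User" is an assumed role; use the role the article specifies.
az role assignment create \
  --assignee-object-id "$identity_principal_id" \
  --assignee-principal-type ServicePrincipal \
  --role "Cognitive Services OpenAI User" \
  --scope "$resource_id"
```

Scoping the assignment to the same Foundry resource lets its system-assigned managed identity keep calling the model deployments even if a caller's token expires during a long session, which is the requirement the article describes for the `byom-azure-openai-chat-completion` mode.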

0 commit comments
