
Commit 9436a6f

Merge pull request #97771 from markamos/v-ammark-dec18
[Cog Svcs] Fix "Speech service" terminology
2 parents 2f8d34b + 1a527d6 commit 9436a6f

15 files changed: 42 additions & 42 deletions

articles/cognitive-services/Speech-Service/record-custom-voice-samples.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: "Record custom voice samples - Speech Service"
+title: "Record custom voice samples - Speech service"
 titleSuffix: Azure Cognitive Services
 description: Make a production-quality custom voice by preparing a robust script, hiring good voice talent, and recording professionally.
 services: cognitive-services

articles/cognitive-services/Speech-Service/regions.md

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 ---
-title: Regions - Speech Service
+title: Regions - Speech service
 titleSuffix: Azure Cognitive Services
-description: A list of available regions and endpoints for the Speech Service, including speech-to-text, text-to-speech, and speech translation.
+description: A list of available regions and endpoints for the Speech service, including speech-to-text, text-to-speech, and speech translation.
 services: cognitive-services
 author: mahilleb-msft
 manager: nitinme
@@ -13,7 +13,7 @@ ms.author: panosper
 ms.custom: seodec18
 ---

-# Speech Service supported regions
+# Speech service supported regions

 The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs.

articles/cognitive-services/Speech-Service/releasenotes.md

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 ---
-title: Release Notes - Speech Service
+title: Release Notes - Speech service
 titleSuffix: Azure Cognitive Services
-description: See a running log of feature releases, improvements, bug fixes, and known issues for the Speech Service.
+description: See a running log of feature releases, improvements, bug fixes, and known issues for the Speech service.
 services: cognitive-services
 author: BrianMouncer
 manager: nitinme
@@ -172,7 +172,7 @@ This is a JavaScript-only release. No features have been added. The following fi

 **Bug fixes**

-- Mac/iOS: A bug that led to a long wait when a connection to the Speech Service could not be established was fixed.
+- Mac/iOS: A bug that led to a long wait when a connection to the Speech service could not be established was fixed.
 - Python: improve error handling for arguments in Python callbacks.
 - JavaScript: Fixed wrong state reporting for speech ended on RequestSession.

@@ -188,7 +188,7 @@ This is a bug fix release and only affecting the native/managed SDK. It is not a

 **New Features**

-- The Speech SDK supports selection of the input microphone through the AudioConfig class. This allows you to stream audio data to the Speech Services from a non-default microphone. For more information, see the documentation describing [audio input device selection](how-to-select-audio-input-devices.md). This feature is not yet available from JavaScript.
+- The Speech SDK supports selection of the input microphone through the AudioConfig class. This allows you to stream audio data to the Speech service from a non-default microphone. For more information, see the documentation describing [audio input device selection](how-to-select-audio-input-devices.md). This feature is not yet available from JavaScript.
 - The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://aka.ms/csspeech/samples). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our [Unity quickstart](~/articles/cognitive-services/Speech-Service/quickstarts/speech-to-text-from-microphone.md?pivots=programming-language-csharp&tabs=unity).
 - The file `Microsoft.CognitiveServices.Speech.csharp.bindings.dll` (shipped in previous releases) isn't needed anymore. The functionality is now integrated into the core SDK.
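The microphone-selection bullet above refers to the Speech SDK's AudioConfig class; a minimal Python sketch of the idea follows, with the subscription key, region, and device ID as hypothetical placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Hypothetical placeholders: substitute your own key, region, and microphone device ID.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SUBSCRIPTION_KEY", region="westus")

# device_name selects a specific, non-default microphone as identified by the OS.
audio_config = speechsdk.audio.AudioConfig(device_name="{hypothetical-microphone-endpoint-id}")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()
print(result.text)
```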

articles/cognitive-services/Speech-Service/rest-speech-to-text.md

Lines changed: 7 additions & 7 deletions
@@ -1,5 +1,5 @@
 ---
-title: Speech-to-text API reference (REST) - Speech Service
+title: Speech-to-text API reference (REST) - Speech service
 titleSuffix: Azure Cognitive Services
 description: Learn how to use the speech-to-text REST API. In this article, you'll learn about authorization options, query options, how to structure a request and receive a response.
 services: cognitive-services
@@ -14,7 +14,7 @@ ms.author: erhopf

 # Speech-to-text REST API

-As an alternative to the [Speech SDK](speech-sdk.md), Speech Services allows you to convert speech-to-text using a REST API. Each accessible endpoint is associated with a region. Your application requires a subscription key for the endpoint you plan to use.
+As an alternative to the [Speech SDK](speech-sdk.md), the Speech service allows you to convert speech-to-text using a REST API. Each accessible endpoint is associated with a region. Your application requires a subscription key for the endpoint you plan to use.

 Before using the speech-to-text REST API, understand:

@@ -47,12 +47,12 @@ This table lists required and optional headers for speech-to-text requests.

 |Header| Description | Required / Optional |
 |------|-------------|---------------------|
-| `Ocp-Apim-Subscription-Key` | Your Speech Services subscription key. | Either this header or `Authorization` is required. |
+| `Ocp-Apim-Subscription-Key` | Your Speech service subscription key. | Either this header or `Authorization` is required. |
 | `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
 | `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
 | `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Only use this header if chunking audio data. | Optional |
-| `Expect` | If using chunked transfer, send `Expect: 100-continue`. The Speech Services acknowledges the initial request and awaits additional data.| Required if sending chunked audio data. |
-| `Accept` | If provided, it must be `application/json`. The Speech Services provides results in JSON. Some request frameworks provide an incompatible default value. It is good practice to always include `Accept`. | Optional, but recommended. |
+| `Expect` | If using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if sending chunked audio data. |
+| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It is good practice to always include `Accept`. | Optional, but recommended. |

 ## Audio formats

@@ -64,7 +64,7 @@ Audio is sent in the body of the HTTP `POST` request. It must be in one of the f
 | OGG | OPUS | 16-bit | 16 kHz, mono |

 >[!NOTE]
->The above formats are supported through REST API and WebSocket in the Speech Services. The [Speech SDK](speech-sdk.md) currently only supports the WAV format with PCM codec.
+>The above formats are supported through REST API and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) currently only supports the WAV format with PCM codec.

 ## Sample request

@@ -94,7 +94,7 @@ The HTTP status code for each response indicates success or common errors.

 ## Chunked transfer

-Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech Services to begin processing the audio file while it is transmitted. The REST API does not provide partial or interim results.
+Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it is transmitted. The REST API does not provide partial or interim results.

 This code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an HTTPWebRequest object connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
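To make the chunked-transfer description above concrete, here is a minimal Python sketch using the `requests` library instead of the HTTPWebRequest object mentioned in the article; the region, subscription key, and audio file path are placeholder assumptions. Passing a generator as the request body causes `requests` to send the audio with `Transfer-Encoding: chunked`.

```python
import requests

# Placeholder assumptions: substitute your own region, key, and 16 kHz mono WAV file.
region = "westus"
subscription_key = "YOUR_SUBSCRIPTION_KEY"
audio_file = "sample.wav"

url = f"https://{region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"

def audio_chunks(path, chunk_size=1024):
    """Yield the audio file in small pieces; the first chunk carries the WAV header."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    "Accept": "application/json",
    "Expect": "100-continue",
}

# A generator body is sent with Transfer-Encoding: chunked, so the service can
# start recognizing while the file is still being transmitted.
response = requests.post(
    url,
    params={"language": "en-US"},
    headers=headers,
    data=audio_chunks(audio_file),
)
print(response.status_code)
print(response.json())
```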

articles/cognitive-services/Speech-Service/rest-text-to-speech.md

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
 ---
-title: Text-to-speech API reference (REST) - Speech Service
+title: Text-to-speech API reference (REST) - Speech service
 titleSuffix: Azure Cognitive Services
 description: Learn how to use the text-to-speech REST API. In this article, you'll learn about authorization options, query options, how to structure a request and receive a response.
 services: cognitive-services
@@ -14,7 +14,7 @@ ms.author: erhopf

 # Text-to-speech REST API

-The Speech Services allow you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region using a set of REST APIs. Each available endpoint is associated with a region. A subscription key for the endpoint/region you plan to use is required.
+The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region using a set of REST APIs. Each available endpoint is associated with a region. A subscription key for the endpoint/region you plan to use is required.

 The text-to-speech REST API supports neural and standard text-to-speech voices, each of which supports a specific language and dialect, identified by locale.

@@ -162,7 +162,7 @@ This table lists required and optional headers for text-to-speech requests.

 ### Audio outputs

-This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each incorporates a bitrate and encoding type. The Speech Services supports 24 kHz, 16 kHz, and 8 kHz audio outputs.
+This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each incorporates a bitrate and encoding type. The Speech service supports 24 kHz, 16 kHz, and 8 kHz audio outputs.

 |||
 |-|-|
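As an illustration of the text-to-speech request described above, the following Python sketch posts SSML and asks for one of the `X-Microsoft-OutputFormat` values; the region, subscription key, and voice name are placeholder assumptions, and an `Authorization: Bearer` token can be used in place of the subscription-key header.

```python
import requests

# Placeholder assumptions: substitute your own region, key, and preferred voice.
region = "westus"
subscription_key = "YOUR_SUBSCRIPTION_KEY"

url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/ssml+xml",
    "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
    "User-Agent": "speech-rest-sample",
}

# The body is SSML naming the voice and the text to synthesize.
ssml = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice xml:lang='en-US' name='en-US-JennyNeural'>"
    "Hello from the text-to-speech REST API."
    "</voice>"
    "</speak>"
)

response = requests.post(url, headers=headers, data=ssml.encode("utf-8"))
response.raise_for_status()

# The response body is the audio in the requested format (RIFF/WAV here).
with open("output.wav", "wb") as out:
    out.write(response.content)
```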

articles/cognitive-services/Speech-Service/scenario-availability.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Scenario Availability - Speech Service
+title: Scenario Availability - Speech service
 titleSuffix: Azure Cognitive Services
 description: The Speech SDK features many scenarios across a wide variety of programming languages and environments. Not all scenarios are available in all programming languages or all environments yet. Listed below is the availability of each scenario.
 services: cognitive-services

articles/cognitive-services/Speech-Service/ship-application.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Develop apps with the Speech SDK - Speech Service
+title: Develop apps with the Speech SDK - Speech service
 titleSuffix: Azure Cognitive Services
 description: Learn how to deploy an application that uses the Speech SDK on supported platforms.
 services: cognitive-services

articles/cognitive-services/Speech-Service/speech-container-configuration.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ---
 title: Configure Speech containers
 titleSuffix: Azure Cognitive Services
-description: Speech Services provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
+description: Speech service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
 services: cognitive-services
 author: IEvangelist
 manager: nitinme
@@ -12,7 +12,7 @@ ms.date: 11/07/2019
 ms.author: dapine
 ---

-# Configure Speech Service containers
+# Configure Speech service containers

 Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality. The four speech containers we support now are **speech-to-text**, **custom-speech-to-text**, **text-to-speech**, and **custom-text-to-speech**.

articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md

Lines changed: 5 additions & 5 deletions
@@ -1,5 +1,5 @@
 ---
-title: Use Speech Service container with Kubernetes and Helm
+title: Use Speech service containers with Kubernetes and Helm
 titleSuffix: Azure Cognitive Services
 description: Using Kubernetes and Helm to define the speech-to-text and text-to-speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises.
 services: cognitive-services
@@ -12,9 +12,9 @@ ms.date: 11/04/2019
 ms.author: dapine
 ---

-# Use Speech Service container with Kubernetes and Helm
+# Use Speech service containers with Kubernetes and Helm

-One option to manage your Speech containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define the speech-to-text and text-to-speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services and various configuration options. For more information about running Docker containers without Kubernetes orchestration, see [install and run Speech Service containers](speech-container-howto.md).
+One option to manage your Speech containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define the speech-to-text and text-to-speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services and various configuration options. For more information about running Docker containers without Kubernetes orchestration, see [install and run Speech service containers](speech-container-howto.md).

 ## Prerequisites

@@ -30,7 +30,7 @@ The following prerequisites before using Speech containers on-premises:

 ## The recommended host computer configuration

-Refer to the [Speech Service container host computer][speech-container-host-computer] details as a reference. This *helm chart* automatically calculates CPU and memory requirements based on how many decodes (concurrent requests) that the user specifies. Additionally, it will adjust based on whether optimizations for audio/text input are configured as `enabled`. The helm chart defaults to two concurrent requests and disabling optimization.
+Refer to the [Speech service container host computer][speech-container-host-computer] details as a reference. This *helm chart* automatically calculates CPU and memory requirements based on how many decodes (concurrent requests) that the user specifies. Additionally, it will adjust based on whether optimizations for audio/text input are configured as `enabled`. The helm chart defaults to two concurrent requests and disabling optimization.

 | Service | CPU / Container | Memory / Container |
 |--|--|--|
@@ -137,7 +137,7 @@ The *Helm chart* contains the configuration of which docker image(s) to pull fro

 > A [Helm chart][helm-charts] is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.

-The provided *Helm charts* pull the docker images of the Speech Service, both text-to-speech and the speech-to-text services from the `mcr.microsoft.com` container registry.
+The provided *Helm charts* pull the docker images of the Speech service, both text-to-speech and the speech-to-text services from the `mcr.microsoft.com` container registry.

 ## Install the Helm chart on the Kubernetes cluster

articles/cognitive-services/Speech-Service/speech-container-howto.md

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
 ---
-title: Install Speech containers - Speech Service
+title: Install Speech containers - Speech service
 titleSuffix: Azure Cognitive Services
 description: Install and run speech containers. Speech-to-text transcribes audio streams to text in real time that your applications, tools, or devices can consume or display. Text-to-speech converts input text into human-like synthesized speech.
 services: cognitive-services
@@ -12,9 +12,9 @@ ms.date: 12/04/2019
 ms.author: dapine
 ---

-# Install and run Speech Service containers (Preview)
+# Install and run Speech service containers (Preview)

-Containers enable you to run some of the Speech Service APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Speech container.
+Containers enable you to run some of the Speech service APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Speech container.

 Speech containers enable customers to build a speech application architecture that is optimized for both robust cloud capabilities and edge locality. There are four different containers available. The two standard containers are **Speech-to-text** and **Text-to-speech**. The two custom containers are **Custom Speech-to-text** and **Custom Text-to-speech**.

@@ -429,5 +429,5 @@ In this article, you learned concepts and workflow for downloading, installing,
 ## Next steps

 * Review [configure containers](speech-container-configuration.md) for configuration settings
-* Learn how to [use Speech Service containers with Kubernetes and Helm](speech-container-howto-on-premises.md)
+* Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md)
 * Use more [Cognitive Services containers](../cognitive-services-container-support.md)
