
Commit 2277264

[speech-service] Use FromEndpoint in quickstart

1 parent 1e978b8 · commit 2277264

10 files changed (+26 −26 lines)
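All ten files make the same substitution: the quickstart configs are now built from a resource endpoint and key via `fromEndpoint` rather than from a key and region via `fromSubscription`. A minimal JavaScript sketch of the before/after shape, assuming the `ENDPOINT` and `SPEECH_KEY` environment variable names used in the updated samples:

```javascript
const sdk = require("microsoft-cognitiveservices-speech-sdk");

// Before this commit: key + region.
// const speechConfig = sdk.SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);

// After this commit: endpoint + key (the endpoint comes from the Speech resource in the Azure portal).
const speechConfig = sdk.SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
```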

articles/ai-services/speech-service/includes/how-to/recognize-speech/java.md

Lines changed: 3 additions & 3 deletions
@@ -26,7 +26,7 @@ import java.util.concurrent.Future;
 
 public class Program {
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-speech-key>", "<paste-your-region>");
+        SpeechConfig speechConfig = SpeechConfig.fromEndpoint("<paste-your-speech-endpoint>", "<paste-your-speech-key>");
     }
 }
 ```
@@ -52,7 +52,7 @@ import java.util.concurrent.Future;
 
 public class Program {
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-speech-key>", "<paste-your-region>");
+        SpeechConfig speechConfig = SpeechConfig.fromEndpoint("<paste-your-speech-endpoint>", "<paste-your-speech-key>");
         fromMic(speechConfig);
     }
 
@@ -82,7 +82,7 @@ import java.util.concurrent.Future;
 
 public class Program {
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-speech-key>", "<paste-your-region>");
+        SpeechConfig speechConfig = SpeechConfig.fromEndpoint("<paste-your-speech-endpoint>", "<paste-your-speech-key>");
         fromFile(speechConfig);
     }
 

articles/ai-services/speech-service/includes/how-to/recognize-speech/javascript.md

Lines changed: 7 additions & 7 deletions
@@ -15,11 +15,11 @@ ms.custom: devx-track-js
 
 To call the Speech service by using the Speech SDK, you need to create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) instance. This class includes information about your Speech resource, like your key and associated region, endpoint, host, or authorization token.
 
-1. Create an AI Foundry resource for Speech in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesAIFoundry). Get the Speech resource key and region.
-1. Create a `SpeechConfig` instance by using the following code. Replace `YourSpeechKey` and `YourSpeechRegion` with your Speech resource key and region.
+1. Create an AI Foundry resource for Speech in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesAIFoundry). Get the Speech resource key and endpoint.
+1. Create a `SpeechConfig` instance by using the following code. Replace `YourSpeechEndpoint` and `YourSpeechKey` with your Speech resource endpoint and key.
 
    ```javascript
-   const speechConfig = sdk.SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
+   const speechConfig = sdk.SpeechConfig.fromEndpoint(new URL("YourSpeechEndpoint"), "YourSpeechKey");
    ```
 
 You can initialize `SpeechConfig` in a few other ways:
@@ -45,7 +45,7 @@ To recognize speech from an audio file, create an `AudioConfig` instance by usin
 ```javascript
 const fs = require('fs');
 const sdk = require("microsoft-cognitiveservices-speech-sdk");
-const speechConfig = sdk.SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
+const speechConfig = sdk.SpeechConfig.fromEndpoint("YourSpeechEndpoint", "YourSpeechKey");
 
 function fromFile() {
     let audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync("YourAudioFile.wav"));
@@ -70,7 +70,7 @@ For many use cases, your audio data likely comes from Azure Blob Storage. Or it'
 ```javascript
 const fs = require('fs');
 const sdk = require("microsoft-cognitiveservices-speech-sdk");
-const speechConfig = sdk.SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
+const speechConfig = sdk.SpeechConfig.fromEndpoint("YourSpeechEndpoint", "YourSpeechKey");
 
 function fromStream() {
     let pushStream = sdk.AudioInputStream.createPushStream();
@@ -116,7 +116,7 @@ switch (result.reason) {
         if (cancellation.reason == sdk.CancellationReason.Error) {
             console.log(`CANCELED: ErrorCode=${cancellation.ErrorCode}`);
             console.log(`CANCELED: ErrorDetails=${cancellation.errorDetails}`);
-            console.log("CANCELED: Did you set the speech resource key and region values?");
+            console.log("CANCELED: Did you set the speech resource key and endpoint values?");
         }
         break;
     }
@@ -162,7 +162,7 @@ speechRecognizer.canceled = (s, e) => {
     if (e.reason == sdk.CancellationReason.Error) {
         console.log(`"CANCELED: ErrorCode=${e.errorCode}`);
         console.log(`"CANCELED: ErrorDetails=${e.errorDetails}`);
-        console.log("CANCELED: Did you set the speech resource key and region values?");
+        console.log("CANCELED: Did you set the speech resource key and endpoint values?");
     }
 
     speechRecognizer.stopContinuousRecognitionAsync();
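Assembled end to end, the updated how-to pieces look roughly like the following sketch; it assumes a local `YourAudioFile.wav` and the `ENDPOINT`/`SPEECH_KEY` environment variables, and is illustrative rather than the article's full sample:

```javascript
const fs = require("fs");
const sdk = require("microsoft-cognitiveservices-speech-sdk");

// Endpoint + key instead of key + region (the change made in this file).
const speechConfig = sdk.SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
speechConfig.speechRecognitionLanguage = "en-US";

function fromFile() {
    const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync("YourAudioFile.wav"));
    const speechRecognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

    speechRecognizer.recognizeOnceAsync(result => {
        switch (result.reason) {
            case sdk.ResultReason.RecognizedSpeech:
                console.log(`RECOGNIZED: Text=${result.text}`);
                break;
            case sdk.ResultReason.NoMatch:
                console.log("NOMATCH: Speech could not be recognized.");
                break;
            case sdk.ResultReason.Canceled: {
                const cancellation = sdk.CancellationDetails.fromResult(result);
                console.log(`CANCELED: Reason=${cancellation.reason}`);
                if (cancellation.reason === sdk.CancellationReason.Error) {
                    console.log(`CANCELED: ErrorDetails=${cancellation.errorDetails}`);
                    console.log("CANCELED: Did you set the speech resource key and endpoint values?");
                }
                break;
            }
        }
        speechRecognizer.close();
    });
}

fromFile();
```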

articles/ai-services/speech-service/includes/quickstarts/speech-to-text-basics/javascript.md

Lines changed: 2 additions & 2 deletions
@@ -49,8 +49,8 @@ To transcribe speech from a file:
 ```javascript
 import { readFileSync, createReadStream } from "fs";
 import { SpeechConfig, AudioConfig, ConversationTranscriber, AudioInputStream } from "microsoft-cognitiveservices-speech-sdk";
-// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-const speechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
+// This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+const speechConfig = SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
 function fromFile() {
     const filename = "katiesteve.wav";
     const audioConfig = AudioConfig.fromWavFileInput(readFileSync(filename));

articles/ai-services/speech-service/includes/quickstarts/speech-to-text-basics/typescript.md

Lines changed: 2 additions & 2 deletions
@@ -69,8 +69,8 @@ To transcribe speech from a file:
   SpeechRecognitionResult
 } from "microsoft-cognitiveservices-speech-sdk";
 
-// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-const speechConfig: SpeechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY!, process.env.SPEECH_REGION!);
+// This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+const speechConfig: SpeechConfig = SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT!), process.env.SPEECH_KEY!);
 speechConfig.speechRecognitionLanguage = "en-US";
 
 function fromFile(): void {

articles/ai-services/speech-service/includes/quickstarts/speech-translation-basics/javascript.md

Lines changed: 2 additions & 2 deletions
@@ -47,8 +47,8 @@ To translate speech from a file:
 ```javascript
 import { readFileSync } from "fs";
 import { SpeechTranslationConfig, AudioConfig, TranslationRecognizer, ResultReason, CancellationDetails, CancellationReason } from "microsoft-cognitiveservices-speech-sdk";
-// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-const speechTranslationConfig = SpeechTranslationConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
+// This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+const speechTranslationConfig = SpeechTranslationConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
 speechTranslationConfig.speechRecognitionLanguage = "en-US";
 const language = "it";
 speechTranslationConfig.addTargetLanguage(language);
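For reference, a short sketch of how the endpoint-based `SpeechTranslationConfig` plugs into a `TranslationRecognizer`; the audio file name and environment variables are assumptions carried over from the quickstart:

```javascript
import { readFileSync } from "fs";
import { SpeechTranslationConfig, AudioConfig, TranslationRecognizer, ResultReason } from "microsoft-cognitiveservices-speech-sdk";

// Endpoint + key instead of key + region.
const speechTranslationConfig = SpeechTranslationConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
speechTranslationConfig.speechRecognitionLanguage = "en-US";
speechTranslationConfig.addTargetLanguage("it");

const audioConfig = AudioConfig.fromWavFileInput(readFileSync("YourAudioFile.wav"));
const translationRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);

translationRecognizer.recognizeOnceAsync(result => {
    if (result.reason === ResultReason.TranslatedSpeech) {
        console.log(`RECOGNIZED: Text=${result.text}`);
        console.log(`TRANSLATED into it: ${result.translations.get("it")}`);
    }
    translationRecognizer.close();
});
```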

articles/ai-services/speech-service/includes/quickstarts/speech-translation-basics/typescript.md

Lines changed: 2 additions & 2 deletions
@@ -67,8 +67,8 @@ To translate speech from a file:
   TranslationRecognitionResult
 } from "microsoft-cognitiveservices-speech-sdk";
 
-// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-const speechTranslationConfig: SpeechTranslationConfig = SpeechTranslationConfig.fromSubscription(process.env.SPEECH_KEY!, process.env.SPEECH_REGION!);
+// This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+const speechTranslationConfig: SpeechTranslationConfig = SpeechTranslationConfig.fromEndpoint(new URL(process.env.ENDPOINT!), process.env.SPEECH_KEY!);
 speechTranslationConfig.speechRecognitionLanguage = "en-US";
 
 const language = "it";

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/javascript.md

Lines changed: 2 additions & 2 deletions
@@ -48,8 +48,8 @@ Follow these steps to create a new console application for conversation transcri
 const fs = require("fs");
 const sdk = require("microsoft-cognitiveservices-speech-sdk");
 
-// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-const speechConfig = sdk.SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
+// This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+const speechConfig = sdk.SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
 
 function fromFile() {
     const filename = "katiesteve.wav";

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/typescript.md

Lines changed: 2 additions & 2 deletions
@@ -64,8 +64,8 @@ Follow these steps to create a new console application for conversation transcri
   AudioInputStream
 } from "microsoft-cognitiveservices-speech-sdk";
 
-// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-const speechConfig: SpeechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY!, process.env.SPEECH_REGION!);
+// This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+const speechConfig: SpeechConfig = SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT!), process.env.SPEECH_KEY!);
 
 function fromFile(): void {
     const filename = "katiesteve.wav";

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/javascript.md

Lines changed: 2 additions & 2 deletions
@@ -49,8 +49,8 @@ To translate speech from a file:
 import { SpeechConfig, AudioConfig, SpeechSynthesizer, ResultReason } from "microsoft-cognitiveservices-speech-sdk";
 function synthesizeSpeech() {
     const audioFile = "YourAudioFile.wav";
-    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-    const speechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
+    // This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+    const speechConfig = SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
     const audioConfig = AudioConfig.fromAudioFileOutput(audioFile);
     // The language of the voice that speaks.
     speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
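A sketch of the rest of the synthesis flow around the endpoint-based config; the sample text and callback handling are illustrative, not the article's exact code:

```javascript
import { SpeechConfig, AudioConfig, SpeechSynthesizer, ResultReason } from "microsoft-cognitiveservices-speech-sdk";

function synthesizeSpeech() {
    // Endpoint + key instead of key + region.
    const speechConfig = SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT), process.env.SPEECH_KEY);
    speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
    const audioConfig = AudioConfig.fromAudioFileOutput("YourAudioFile.wav");

    const synthesizer = new SpeechSynthesizer(speechConfig, audioConfig);
    synthesizer.speakTextAsync(
        "I'm excited to try text to speech",
        result => {
            if (result.reason === ResultReason.SynthesizingAudioCompleted) {
                console.log("Synthesis finished.");
            } else {
                console.log(`Speech synthesis canceled: ${result.errorDetails}`);
            }
            synthesizer.close();
        },
        error => {
            console.error(error);
            synthesizer.close();
        }
    );
}

synthesizeSpeech();
```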

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/typescript.md

Lines changed: 2 additions & 2 deletions
@@ -67,8 +67,8 @@ To translate speech from a file:
 
 function synthesizeSpeech(): void {
     const audioFile = "YourAudioFile.wav";
-    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-    const speechConfig: SpeechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY!, process.env.SPEECH_REGION!);
+    // This example requires environment variables named "ENDPOINT" and "SPEECH_KEY"
+    const speechConfig: SpeechConfig = SpeechConfig.fromEndpoint(new URL(process.env.ENDPOINT!), process.env.SPEECH_KEY!);
     const audioConfig: AudioConfig = AudioConfig.fromAudioFileOutput(audioFile);
 
     // The language of the voice that speaks.
