articles/cognitive-services/Speech-Service/includes/quickstarts/from-file/python/python.md
manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: include
ms.date: 01/14/2020
ms.author: chlandsi
---
Before you get started, make sure to:

> [!div class="checklist"]
> * [Create an Azure Speech resource](../../../../get-started.md)
> * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
> * [Set up your development environment](../../../../quickstarts/setup-platform.md)
> * [Create an empty sample project](../../../../quickstarts/create-project.md)
Or you can download this quickstart tutorial as a [Jupyter](https://jupyter.org) notebook.

> [!NOTE]
> The Speech SDK defaults to en-US as the recognition language. See [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.

```python
import azure.cognitiveservices.speech as speechsdk

# Replace with your own subscription key, service region, and audio file name.
speech_key, service_region = "YourSubscriptionKey", "YourServiceRegion"
audio_filename = "your-audio-file.wav"

# Creates an instance of a speech config with specified subscription key and service region.
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

# Creates an audio configuration that points to the audio file.
audio_config = speechsdk.audio.AudioConfig(filename=audio_filename)

# Creates a recognizer with the given settings.
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

print("Recognizing first result...")

# Starts speech recognition, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. The task returns the recognition text as result.
result = speech_recognizer.recognize_once()
```
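As noted above, recognition defaults to en-US. A minimal sketch of overriding the source language on the config object, assuming placeholder credentials and using `de-DE` purely as an illustrative value:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; replace with your own subscription key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# Override the default en-US recognition language, e.g. for German audio.
speech_config.speech_recognition_language = "de-DE"
```

See the linked article above for the full list of supported language codes.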
1. Copy, paste, and save the [Python code](#sample-code) to the newly created file.
1. Insert your Speech service subscription information.
1. If selected, a Python interpreter displays on the left side of the status bar at the bottom of the window.
   Otherwise, bring up a list of available Python interpreters. Open the command palette (<kbd>Ctrl+Shift+P</kbd>) and enter **Python: Select Interpreter**. Choose an appropriate one.
1. You can install the Speech SDK Python package from within Visual Studio Code. Do that if it's not installed yet for the Python interpreter you selected.
   To install the Speech SDK package, open a terminal. Bring up the command palette again (<kbd>Ctrl+Shift+P</kbd>) and enter **Terminal: Create New Integrated Terminal**.
   In the terminal that opens, enter the command `python -m pip install azure-cognitiveservices-speech` or the appropriate command for your system.
1. To run the sample code, right-click somewhere inside the editor. Select **Run Python File in Terminal**.
   The first 15 seconds of speech input from your audio file will be recognized and logged in the console window.
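The same install-and-run flow also works from any terminal outside Visual Studio Code. A sketch, assuming a hypothetical script name of `quickstart.py`:

```shell
# Install the Speech SDK package, then run the script.
python -m pip install azure-cognitiveservices-speech
python quickstart.py
```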