README.md (+19 −10)
@@ -36,6 +36,8 @@ Options: No direct options, all provided options are passed to MicrophoneStream
Requires the `getUserMedia` API, so limited browser compatibility (see http://caniuse.com/#search=getusermedia)

Also note that Chrome requires https (with a few exceptions for localhost and such) - see https://www.chromium.org/Home/chromium-security/prefer-secure-origins-for-powerful-new-features

+ Pipes results through a `{FormatStream}` by default; set `options.format=false` to disable.
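A hedged illustration of the microphone flow above: the `WatsonSpeech.SpeechToText` namespace, the `recognizeMicrophone()` name, and the `token` option are assumptions not shown in this diff; only the `format` option, the `result` event, and `.stop()` come from the docs.

```js
// Hypothetical sketch only: namespace, method name, and the token option are assumed.
var stream = WatsonSpeech.SpeechToText.recognizeMicrophone({
  token: token,  // auth token from your own token server (assumed)
  format: false  // skip the default {FormatStream} piping described above
});

stream.on('result', function (result) {
  console.log(result); // raw, unformatted results
});

// Stop capturing from the microphone when done:
// stream.stop();
```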
Requires that the browser support MediaElement and whatever audio codec is used in your media file.

- Will automatically call `.play()` on the `element`. Calling `.stop()` on the returned RecognizeStream will automatically call `.stop()` on the `element`.
+ Will automatically call `.play()` on the `element`; set `options.autoplay=false` to disable. Calling `.stop()` on the returned stream will automatically call `.stop()` on the `element`.
+
+ Pipes results through a `{FormatStream}` by default; set `options.format=false` to disable.
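A minimal sketch of the media-element flow above: the `recognizeElement()` name, its `(element, options)` signature, the namespace, and the `token` option are assumptions, while `autoplay`, `format`, `.stop()`, and the `result` event come from the text.

```js
// Hypothetical sketch only: method name and signature are assumed.
var element = document.querySelector('audio');

var stream = WatsonSpeech.SpeechToText.recognizeElement(element, {
  token: token,    // assumed auth option
  autoplay: false, // don't call element.play() automatically (see note above)
  format: true     // keep the default {FormatStream} piping
});

stream.on('result', function (result) {
  console.log(result);
});

// Per the note above, stopping the stream also stops the element:
// stream.stop();
```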
- * `playFile`: (optional, default=`false`) Attempt to also play the file locally while uploading it for transcription
+ * `play`: (optional, default=`false`) Attempt to also play the file locally while uploading it for transcription
* Other options passed to RecognizeStream

- (`playFile` requires that the browser support the format; most browsers support wav and ogg/opus, but not flac.)
+ (`play` requires that the browser support the format; most browsers support wav and ogg/opus, but not flac.)
Will emit a `playback-error` on the RecognizeStream if playback fails.

Playback will automatically stop when `.stop()` is called on the RecognizeStream.

+ Pipes results through a `{TimingStream}` if `options.play=true`; set `options.realtime=false` to disable.
+
+ Pipes results through a `{FormatStream}` by default; set `options.format=false` to disable.
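Putting the blob options above together in a hedged sketch: `recognizeBlob()`, the `play`/`realtime`/`format` options, the `result` and `playback-error` events, and `.stop()` are from the docs; the namespace, the `data` option name, and the `token` option are assumptions.

```js
// Hypothetical sketch: transcribe a user-selected file and play it back in sync.
// The namespace and the `token` and `data` option names are assumed for illustration.
var stream = WatsonSpeech.SpeechToText.recognizeBlob({
  token: token,              // assumed auth option
  data: fileInput.files[0],  // assumed name for the Blob/File to transcribe
  play: true,                // also play the file locally (wav and ogg/opus, not flac)
  realtime: true,            // pipe through a {TimingStream} so text keeps pace with audio
  format: true               // pipe through a {FormatStream} (the default)
});

stream.on('result', function (result) {
  console.log(result);
});

stream.on('playback-error', function (err) {
  console.error('local playback failed:', err);
});

// stream.stop(); // stops transcription and local playback
```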
### Class `RecognizeStream()`
@@ -99,25 +106,27 @@ Inherits `.promise()` and `.stop()` methods and `result` event from the `RecognizeStream`
### Class `TimingStream()`
- For use with `.recognizeBlob({playFile: true})` - slows the results down to match the audio. Pipe in the `RecognizeStream` (or `FormatStream`) and listen for results as usual.
+ For use with `.recognizeBlob({play: true})` - slows the results down to match the audio. Pipe in the `RecognizeStream` (or `FormatStream`) and listen for results as usual.

Inherits `.stop()` method and `result` event from the `RecognizeStream`.
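A rough sketch of wiring a `TimingStream` manually as described above: piping a `RecognizeStream` (or `FormatStream`) in and listening for `result` comes from the text, the `delay` option appears in the changelog below, and the export path shown is an assumption.

```js
// Hypothetical sketch: slow results down to match local audio playback.
// The export location of TimingStream is assumed; `delay` is listed in the changelog.
var TimingStream = WatsonSpeech.SpeechToText.TimingStream;

var timed = recognizeStream.pipe(new TimingStream({ delay: 0 }));

timed.on('result', function (result) {
  console.log(result); // emitted roughly in step with the audio
});

// timed.stop(); // .stop() and the result event are inherited from RecognizeStream
```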
## Changelog
### v0.7
- * Changed playFile option of recognizeBlob to play to match docs
- * Added options.format to recognize* to pipe text through a FormatStream (default: true)
- * Added close and end events to TimingStream
+ * Changed `playFile` option of `recognizeBlob()` to just `play`, corrected default
+ * Added `options.format=true` to `recognize*()` to pipe text through a FormatStream
+ * Added `options.realtime=options.play` to `recognizeBlob()` to automatically pipe results through a TimingStream when playing locally
+ * Added `close` and `end` events to TimingStream
+ * Added `delay` option to `TimingStream`
+ * Moved compiled binary to GitHub Releases (in addition to uncompiled source on npm).
+ * Misc. doc and internal improvements
## todo
- * Fix bugs around `.stop()`
* Solidify API
* support objectMode instead of having random events
- *add text-to-speech support
+ * add text-to-speech support
* add an example that includes alternatives and word confidence scores
* enable eslint
* break components into standalone npm modules where it makes sense
examples/readme.md (+5 −4)
@@ -17,10 +17,11 @@ Prerequisite
Setup
-----

- 1. `cd` into the `examples/` directory and run `npm install` to grab dependencies
- 2. edit `token-server.js` to include your service credentials
- 3. run `npm start`
- 4. Open your browser to http://localhost:3000/ to see the examples.
+ 1. Run `npm install; npm run build` in the project root to generate a `dist/watson-speech.js` file (or grab a copy from GitHub Releases and drop it into `examples/public/`)
+ 2. `cd` into the `examples/` directory and run `npm install` to grab dependencies
+ 3. edit `token-server.js` to include your service credentials
+ 4. run `npm start`
+ 5. Open your browser to http://localhost:3000/ to see the examples.