switch sources to audiobuffers, split internals of l16 stream
l16 stream (formerly wav stream) now does the downsampling and 16-bit conversion in separate steps
this is to allow for a future change to native downsampling
I also removed the wav header because it's not needed
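A rough sketch of the two separated steps (a minimal illustration with assumed names and a naive decimation approach, not the SDK's actual internals):

```js
// Step 1 (assumed): downsample Float32 audio, e.g. 48000 Hz -> 16000 Hz.
// Naive decimation for illustration; a real implementation would low-pass
// filter first, and native downsampling could replace this step later.
function downsample(float32s, sourceRate, targetRate) {
  var ratio = sourceRate / targetRate;
  var out = new Float32Array(Math.floor(float32s.length / ratio));
  for (var i = 0; i < out.length; i++) {
    out[i] = float32s[Math.floor(i * ratio)];
  }
  return out;
}

// Step 2 (assumed): convert Float32 samples (-1..1) to 16-bit signed PCM.
// No WAV header is prepended - the result is raw l16 audio.
function toL16(float32s) {
  var int16s = new Int16Array(float32s.length);
  for (var i = 0; i < float32s.length; i++) {
    var s = Math.max(-1, Math.min(1, float32s[i]));
    int16s[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return new Buffer(int16s.buffer);
}
```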
and there is also a [REST API](http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/getting_started/gs-tokens.shtml).
See several examples at https://github.com/watson-developer-cloud/speech-javascript-sdk/tree/master/examples
This library is built with [browserify](http://browserify.org/) and is easy to use in browserify-based projects (`npm install --save watson-speech`), but you can also grab the compiled bundle from the
`dist/` folder and use it as a standalone library.
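For example (a minimal sketch; the exact bundle filename and global name are assumptions based on the package name):

```js
// With browserify, require the npm package directly:
var WatsonSpeech = require('watson-speech');

// With the standalone bundle from dist/ loaded via a <script> tag,
// the same object is assumed to be exposed as a browser global:
// var WatsonSpeech = window.WatsonSpeech;
```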
## `WatsonSpeech.SpeechToText` Basic API
Complete API docs should be published at http://watson-developer-cloud.github.io/speech-javascript-sdk/
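Pending those docs, a hypothetical usage sketch (the `recognizeMicrophone` entry point and its options are assumptions; the `.stop()` method, `result` event, and result alternatives are referenced elsewhere in this commit):

```js
// Hypothetical: obtain a RecognizeStream-like object and listen for results.
var stream = WatsonSpeech.SpeechToText.recognizeMicrophone({ token: token });

stream.on('result', function(result) {
  // alternatives carry transcripts and confidence scores
  console.log(result.alternatives[0].transcript);
});

// Later, stop recognition via the inherited .stop() method:
// stream.stop();
```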
@@ -101,7 +106,8 @@ Inherits `.stop()` method and `result` event from the `RecognizeStream`.
* Fix bugs around `.stop()`
* Solidify API
- * (eventually) add text-to-speech support
+ * support objectMode instead of having random events
+ * add text-to-speech support
* add an example that includes alternatives and word confidence scores
-    crossOrigin: "anonymous" // required for cross-domain audio playback
+    crossOrigin: "anonymous", // required for cross-domain audio playback
+    objectMode: true // true = emit AudioBuffers w/ audio + some metadata, false = emit node.js Buffers (with binary data only)
  }, opts);
// We can only emit one channel's worth of audio, so only one input. (Who has multiple microphones anyways?)
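A sketch of how the new `objectMode` option might be consumed (assumed usage; `MediaElementAudioStream` and its `(source, opts)` signature appear in the hunks below):

```js
// objectMode: true - 'data' events carry Web Audio API AudioBuffers,
// which keep metadata such as sampleRate alongside the samples.
var stream = new MediaElementAudioStream(audioElement, { objectMode: true });
stream.on('data', function(audioBuffer) {
  console.log(audioBuffer.sampleRate, audioBuffer.getChannelData(0).length);
});

// With objectMode: false, 'data' events would instead carry node-style
// Buffers holding only the binary sample data.
```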
@@ -38,7 +40,7 @@ function MediaElementAudioStream(source, opts) {
  var recording = true;
// I can't seem to find any documentation for this on <audio> elements, but it seems to be required for cross-domain usage (in addition to CORS headers)
-  //source.crossOrigin = opts.crossOrigin;
+  source.crossOrigin = opts.crossOrigin;
/**
* Convert and emit the raw audio data
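On the crossOrigin change above: the attribute has to be in place before the media starts loading, and the server must still send CORS headers. A minimal illustration (the URL is a placeholder):

```js
var audio = new Audio();
audio.crossOrigin = 'anonymous'; // set before src so the fetch is CORS-enabled
audio.src = 'https://example.com/clip.ogg';
```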
@@ -48,32 +50,15 @@ function MediaElementAudioStream(source, opts) {
  function processAudio(e) {
// onaudioprocess can be called at least once after we've stopped
    if (recording) {
-
-      var raw = e.inputBuffer.getChannelData(0);
-
-      /**
-       * @event MicrophoneStream#raw
-       * @param {Float32Array} data raw audio data from browser - each sample is a number from -1 to 1
-       */
-      self.emit('raw', raw);
-
-      // Standard (non-object mode) Node.js streams only accept Buffers or Strings
-      var nodebuffer = new Buffer(raw.buffer);
-
-      /**
-       * Emit the readable/data event with a node-style buffer.
-       * Note: this is essentially a new DataView on the same underlying ArrayBuffer.
-       * The raw audio data is not actually copied or changed.
-       *
-       * @event MicrophoneStream#data
-       * @param {Buffer} chunk node-style buffer with audio data; buffers are essentially a Uint8Array
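Since sources now emit AudioBuffers (per the commit message), the Float32Array-to-Buffer conversion removed above no longer belongs in the source. A sketch of the slimmed-down handler (an assumption about the replacement's shape, not the actual new code):

```js
function processAudio(e) {
  // onaudioprocess can be called at least once after we've stopped
  if (recording) {
    // e.inputBuffer is a Web Audio API AudioBuffer; in objectMode it can be
    // pushed downstream as-is, keeping sampleRate and channel metadata.
    self.push(e.inputBuffer);
  }
}
```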