
Audio transcription



Transcription API

  • Quicktate API
    • Speech Transcription
    • Call Auditing for customer service call centers
    • Call Transcription for phone meetings
    • SMS dictation for driver safety
  • The overall process of successfully transcribing an audio file is as follows:
  1. Submit the job into our system
  2. Our system assigns a Job ID, which is returned to your application for your tracking purposes
  3. We transcribe the file into text (using thousands of typists on call 24/7 ready to work at a moment’s notice)
  4. Our servers send the results via HTTP POST to a Callback URL you specify (see the callback sketch after this list)
  • Twilio lets callers phone into a web application and hear TTS-generated speech and help prompts, and applications can also place calls to cell phones and landlines to reach human service.
    • Greg Tracy's example
    • paid service
    • transcription is limited to recordings up to two minutes long
    • append ".txt" to the end of a Recording resource URI to retrieve the transcription text for that recording (see the retrieval sketch after this list)
  • SpeakerText (beta) uses auto-generated captions that are corrected by a human to produce transcribed videos for bloggers and other publishers
    • $2/min
  • Process
  1. The user uploads videos to YouTube, Vimeo, Blip.tv, Ooyala, or Brightcove
  2. About 72 hours later, the transcriptions are emailed back
  • Ribbit API
    • The consumer application, Ribbit Mobile, links mobile phones and the internet to create an integrated voice and data solution tailored to the lifestyle of the modern professional.
    • The enterprise solution, Ribbit for Salesforce, integrates mobile phones and advanced voice automation features directly into Salesforce.com to increase sales team productivity.
  • Process to transcribe voicemail
  1. Create a folder, enable transcription services for that folder, and upload media (.mp3, .wav, or .ulaw) to it
  2. A transcription event is triggered if the uploaded file has no corresponding .txt file
  3. Transcription costs are debited from the user's account
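
The Quicktate flow above means your application must expose an HTTP endpoint to receive the results of step 4. Below is a minimal sketch of such a receiver; it assumes nothing about Quicktate's actual payload format, the port and path are made up, and the JDK's built-in HttpServer stands in for whatever web stack you already use.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal receiver for the HTTP POST callback in step 4 above.
// Path, port, and payload handling are placeholders; check the Quicktate
// documentation for the actual field names.
public class TranscriptionCallbackServer {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/transcription-callback", exchange -> {
            // Read the raw POST body that carries the finished transcription.
            String body = new String(readAll(exchange.getRequestBody()), StandardCharsets.UTF_8);
            System.out.println("Job result received: " + body);
            exchange.sendResponseHeaders(200, -1); // acknowledge with an empty response
            exchange.close();
        });
        server.start();
        System.out.println("Waiting for callbacks on http://localhost:8080/transcription-callback");
    }

    private static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```

The Job ID returned at submission time (step 2) would be matched against whatever identifier the callback payload carries.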
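
For the Twilio note about appending ".txt", a retrieval sketch is below. The account SID, auth token, and recording SID are placeholders, and the exact resource layout should be checked against Twilio's current REST documentation.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Fetch the transcription text for a recording by appending ".txt" to the
// resource URI, as noted above. All SIDs and the auth token are placeholders.
public class FetchTwilioTranscriptionText {

    public static void main(String[] args) throws Exception {
        String accountSid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";   // placeholder
        String authToken = "your_auth_token";                       // placeholder
        String recordingSid = "RExxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // placeholder

        URL url = new URL("https://api.twilio.com/2010-04-01/Accounts/"
                + accountSid + "/Recordings/" + recordingSid + ".txt");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String credentials = Base64.getEncoder().encodeToString(
                (accountSid + ":" + authToken).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // plain-text transcription
            }
        }
    }
}
```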

Google Speech Recognition

Google speech recognition is arguably the highest quality of all "available" systems. Its search language model is built from the billions of Google searches. Its free-form language models are built from transcriptions of Google Voice voicemail messages and of YouTube videos (YouTube generates closed captions automatically, and users can upload corrected versions so that their videos get accurate captions), among other unconfirmed data sources.

Open Source Clients

  1. Video: the audio must be part of a video that has been uploaded and has a video ID
  2. Ownership: the video must be owned by the user. It could therefore be possible either to create a single user account and push the audio to YouTube, or to ask users to let the app access their own YouTube accounts, request the auto-transcribed version, and render it as blog text, even providing an interface that helps them navigate their content by time, audio, and text. However, that would likely be a major terms-of-use violation for the developer API key, since potentially millions of blank, useless YouTube videos would be created and made (publicly) available. The privacy problem goes away if this is done not with YouTube videos but with videos uploaded to Google Docs, where the audio stays private to the user's Google account. There is also Google Video for Business among the Google Apps products, but neither of those two APIs is available yet; only the YouTube one is.
  • A Chromium hack by Mike Pultz yields a general Perl + HTTP POST approach; others have made it work in PHP and Java (see the POST sketch after this list)
  • The sample VoiceRecognition.java uses the Android speech package android.speech, specifically the RecognizerIntent. The example works well and is very clear; you can try it in the API Demos sample code in the SDK (see the sketch after this list).
    • However, it only handles short speech samples (it stops when the user pauses) and follows an Intent -> GUI -> Record -> Result use case. There is no GUI-free/eyes-free access yet; there are feature requests for it on the Android Google Code issue tracker.
  • The implementation of the RecognizerIntent itself (or of other files in android.speech) should provide some exposure to the Google speech recognition servers.
  • The Android source on GitHub contains the core code but not the com.google code, and there do not appear to be any speech recognition clients in it
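
A Java take on the Perl + POST approach mentioned above: a sketch that assumes the unofficial v1 endpoint the Chromium hack used at the time. The URL, query parameters, and JSON response format are unsupported by Google and may stop working without notice.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// POST a 16 kHz FLAC file to the unofficial endpoint used by the Chromium
// speech hack and print the raw JSON response.
public class ChromiumSpeechPost {

    public static void main(String[] args) throws Exception {
        byte[] flac = Files.readAllBytes(Paths.get("utterance.flac")); // 16 kHz mono FLAC

        URL url = new URL("https://www.google.com/speech-api/v1/recognize"
                + "?client=chromium&lang=en-US");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "audio/x-flac; rate=16000");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(flac);
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON containing the recognition hypotheses
            }
        }
    }
}
```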
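
The RecognizerIntent flow described above looks roughly like the sketch below. This is not the actual VoiceRecognition.java sample; the class name, request code, and prompt string are made up.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

// Launches the built-in recognition UI via RecognizerIntent and collects the
// candidate transcriptions, in the spirit of the SDK's VoiceRecognition.java sample.
public class VoiceNoteActivity extends Activity {

    private static final int VOICE_REQUEST_CODE = 1234;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        startActivityForResult(intent, VOICE_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == VOICE_REQUEST_CODE && resultCode == RESULT_OK) {
            // Candidate transcriptions, best match first.
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                System.out.println("Heard: " + matches.get(0));
            }
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```

Note that this is exactly the Intent -> GUI -> Record -> Result use case described above: recognition only runs while the system dialog is showing, and ends when the user pauses.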

Closed Source Clients

  • Relevant packages that could be tweaked to provide a GUI-free solution
    • com.google.android.voicesearch
    • com.google.android.voicesearch.speechservice

Open Source Services

Sphinx

A classic and long-standing project, now hosting Google Summer of Code students. CMUSphinx is a speaker-independent, large-vocabulary continuous speech recognizer released under a BSD-style license. It is also a collection of open source tools and resources that allows researchers and developers to build speech recognition systems.
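
For a concrete taste of the toolkit, here is a sketch using the high-level sphinx4 API from a release newer than this page; the model paths are the US English defaults bundled with the sphinx4-data artifact, and the input is assumed to be 16 kHz, 16-bit, mono audio.

```java
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

import java.io.FileInputStream;
import java.io.InputStream;

// Transcribe an audio file offline with the sphinx4 high-level API, using the
// default US English models shipped in the sphinx4-data artifact.
public class SphinxTranscriber {

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        try (InputStream audio = new FileInputStream("audio.wav")) { // 16 kHz, 16-bit, mono
            recognizer.startRecognition(audio);
            SpeechResult result;
            while ((result = recognizer.getResult()) != null) {
                System.out.println(result.getHypothesis());
            }
            recognizer.stopRecognition();
        }
    }
}
```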

Language independent phonetic transcription

  • Our goal is to support some bootstrapping, even for non-standard languages, so that experiments on any language provide at least a bit of audio analysis.

Related Requirements

Audio Chunking based on Silence

  • The MARF project has some libraries for audio analysis; it is not clear how complete they are or which of their goals have been realized yet (see the energy-based chunking sketch after this list).
 MARF is an open-source research platform and a collection of voice/sound/speech/text and natural language processing (NLP) algorithms written in Java and arranged into a modular and extensible framework facilitating addition of new algorithms.
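
This is not MARF's API; the sketch below is a generic, library-free illustration of energy-based chunking. It assumes a signed 16-bit little-endian mono PCM WAV, and the frame size and threshold are arbitrary values to tune.

```java
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

// Naive energy-based chunking: scan a signed 16-bit little-endian mono PCM
// WAV in 20 ms frames, compute RMS energy per frame, and print where runs of
// low-energy frames (candidate silences) begin.
public class SilenceChunker {

    public static void main(String[] args) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("input.wav"));
        float sampleRate = in.getFormat().getSampleRate();
        int frameSamples = (int) (sampleRate * 0.02f);   // samples per 20 ms frame
        byte[] buf = new byte[frameSamples * 2];         // 2 bytes per 16-bit sample
        double threshold = 500.0;                        // RMS below this counts as silence

        int frameIndex = 0;
        boolean inSilence = false;
        int read;
        while ((read = in.read(buf)) > 0) {
            double sumSquares = 0;
            int samples = read / 2;
            for (int i = 0; i < samples; i++) {
                int sample = (buf[2 * i + 1] << 8) | (buf[2 * i] & 0xff); // little-endian
                sumSquares += (double) sample * sample;
            }
            double rms = Math.sqrt(sumSquares / Math.max(samples, 1));
            boolean silent = rms < threshold;
            if (silent && !inSilence) {
                System.out.printf("candidate chunk boundary at %.2f s%n", frameIndex * 0.02);
            }
            inSilence = silent;
            frameIndex++;
        }
        in.close();
    }
}
```

Consecutive low-energy frames could then be merged into silence regions and the audio split at their midpoints before sending each chunk to a recognizer.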

Additional References
