-
🤖 Hi @dmb0058, good to see you again! From your description, it seems like you're on the right track. The OpenAIWhisperAudio constructor in the langchainjs framework accepts either a file path or a Blob, and in Node.js 18+ a WHATWG Blob is available from node:buffer (or as a global). Here's how you can convert your base64-encoded MP3 data to a Buffer, wrap it in a Blob, and pass it to the constructor:

```typescript
import { OpenAIWhisperAudio } from "langchain/document_loaders/fs/openai_whisper_audio";

// Assuming `event` is your AWS Lambda event containing the base64-encoded MP3
const base64MP3 = event.body; // replace with the actual location of the base64 MP3 data in your event

// Convert the base64 string to a Buffer
const mp3Buffer = Buffer.from(base64MP3, "base64");

// Wrap the Buffer in a Blob (note the array of parts) and create the loader
const whisperAudio = new OpenAIWhisperAudio(new Blob([mp3Buffer], { type: "audio/mp3" }), {
  clientOptions: {
    // your OpenAI client options here
  },
});

// Now you can use the `whisperAudio` instance to call the OpenAI Whisper API
```

This works because the constructor's filePathOrBlob argument accepts a Blob as well as a file-path string. You can find more details about the OpenAIWhisperAudio constructor in the langchain/src/document_loaders/fs/openai_whisper_audio.ts file. I hope this helps! Let me know if you have any other questions.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
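As a follow-up sketch to the reply above: consuming the loader's output, assuming load() resolves to an array of Document objects whose pageContent holds the transcription text (the usual langchainjs document-loader contract):

```typescript
// Transcribe the audio; `whisperAudio` is the loader built above.
const docs = await whisperAudio.load();

// Each Document's pageContent should contain the transcribed text.
for (const doc of docs) {
  console.log(doc.pageContent);
}
```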
-
I can't find any documentation on how to do this, but it looks like it should be possible from the source code.
I have an AWS Lambda component that receives an MP3 stream in the event body, base64-encoded, as part of a multipart request. I've extracted the MP3 data using the AWS multipart parse(), which returns the data in a Buffer, and verified that it's complete and valid: if I save it to an S3 bucket, I can download and play it. A rough sketch of the handler is below.
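For reference, a sketch of that setup; `parseMultipart` is a hypothetical stand-in for the actual AWS multipart parser, and the `audio` field name is illustrative:

```typescript
// Hypothetical parser signature: the real multipart parser returns each
// part's bytes as a Buffer, as described above.
declare function parseMultipart(event: { body: string }): {
  [field: string]: { content: Buffer; contentType: string };
};

export const handler = async (event: { body: string }) => {
  // Extract the MP3 part; the `audio` field name is illustrative.
  const data = parseMultipart(event).audio.content;

  // Sanity check: at this point the Buffer is complete, valid MP3 data.
  console.log(`received ${data.length} bytes of MP3`);

  // ... hand `data` to OpenAIWhisperAudio here ...
};
```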
The langchain OpenAIWhisperAudio constructor looks like it can take a filePath or a blob. I don't want to save the MP3 to a local file and then pass the file path if I can just send a blob, whatever that is. I tried
const { Blob } " require('node:buffer');
and then
let loader=new OpenAIWhisperAudio(new Blob(data, { type: 'audio/mp3' })); loader.load();
but while the object looks OK, the load() function returns an 'Unsupported file type: supported file types are mp3, mpeg, m4a, ...' error.
Any info on what kind of blob I need to pass?
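One detail worth flagging, as a hedged note rather than a confirmed fix: the WHATWG Blob constructor takes an array of parts, not a bare Buffer, so the call would normally be written as below. It may also be worth checking whether the loader derives a file name from the Blob's type when it calls the Whisper API, since that could be the source of the unsupported-file-type error.

```typescript
import { Blob } from "node:buffer";
import { OpenAIWhisperAudio } from "langchain/document_loaders/fs/openai_whisper_audio";

declare const data: Buffer; // the MP3 Buffer from the multipart parser

// Note the array: Blob expects a sequence of parts ([data]), not `data` itself.
const mp3Blob = new Blob([data], { type: "audio/mp3" });

const loader = new OpenAIWhisperAudio(mp3Blob);
const docs = await loader.load(); // untested whether this alone clears the error
```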