Replies: 1 comment
- your question should be asked at: https://community.openai.com/tag/whisper
Sorry for the bother, but I was wondering if anyone here would be able to help. I have tried using a polyfill in the frontend to help with the audio, and I checked the audio inputs: they come through crisp and full, but when the recording is sent over to the API it only transcribes the first few seconds. Can someone tell me where I am going wrong? I've been at it for a few days and have narrowed it down to the API understanding the audio; it works with desktop recordings and even YouTube videos, but it doesn't seem to work with any iPhone recordings I have made:
app.post('/audio', upload.single('audio'), async (req, res, next) => {
  try {
    // Multer exposes the uploaded file (buffer/path, mimetype, originalname) on req.file.
    const audioFile = req.file;
    console.log(audioFile);
    // The call that actually forwards audioFile to the Whisper API is omitted from this snippet.
  } catch (error) {
    // Log detailed error information from the failed request
    console.error("Error sending data to Whisper API:", error.message, error.response ? error.response.data : "");
    res.status(500).json({ error: 'Error transcribing audio' });
  }
});
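For context, here is a minimal sketch of how the omitted transcription call could be wired up. It assumes the official `openai` Node SDK (v4+), multer configured with memory storage, an `OPENAI_API_KEY` environment variable, and the `whisper-1` model; none of these details appear in the original snippet, so adjust them to your setup.

```js
// Minimal sketch — assumptions: openai SDK v4+, multer memory storage, OPENAI_API_KEY set.
import express from 'express';
import multer from 'multer';
import OpenAI, { toFile } from 'openai';

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the raw upload bytes in memory
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post('/audio', upload.single('audio'), async (req, res) => {
  try {
    // Wrap the in-memory buffer as a file, keeping the original filename so the
    // endpoint can recognize the container format from its extension.
    const file = await toFile(req.file.buffer, req.file.originalname);

    const transcription = await openai.audio.transcriptions.create({
      file,
      model: 'whisper-1',
    });

    res.json({ text: transcription.text });
  } catch (error) {
    console.error('Error sending data to Whisper API:', error.message);
    res.status(500).json({ error: 'Error transcribing audio' });
  }
});

app.listen(3000);
```

Keeping the original filename when wrapping the buffer is a deliberate choice: in practice the transcription endpoint relies on the file extension to recognize the format, and iPhone recordings typically arrive as .m4a/.mp4 rather than .webm or .wav.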