This blueprint provides a complete solution for running **audio/video transcription**, **speaker diarization**, and **summarization** via a RESTful API. It integrates [Faster-Whisper](https://github.com/guillaumekln/faster-whisper) for efficient transcription, [pyannote.audio](https://github.com/pyannote/pyannote-audio) for diarization, and Hugging Face instruction-tuned LLMs (e.g., Mistral-7B) for summarization. It supports multi-GPU acceleration, real-time streaming logs, and JSON/text output formats.
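For orientation, the snippet below is a minimal sketch of the transcription and diarization steps the API wraps, calling the Faster-Whisper and pyannote.audio libraries directly. The model names, audio path, and token placeholder are illustrative assumptions, not values taken from the blueprint's configuration, and this is not the blueprint's actual server code.

```python
# Minimal local sketch of the transcription + diarization flow behind the REST API.
# Assumes faster-whisper and pyannote.audio are installed and that you have a
# Hugging Face token with access to the pyannote diarization model.
from faster_whisper import WhisperModel
from pyannote.audio import Pipeline

AUDIO_PATH = "meeting.wav"  # illustrative input file

# 1. Transcribe with Faster-Whisper (CTranslate2 backend, GPU if available).
model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, info = model.transcribe(AUDIO_PATH)
print(f"Detected language: {info.language}")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")

# 2. Attribute speech to speakers with pyannote.audio diarization.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="hf_...",  # replace with your Hugging Face token
)
diarization = pipeline(AUDIO_PATH)
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.2f}s -> {turn.end:.2f}s")
```

In the deployed blueprint, these steps run behind the RESTful API, with the diarized transcript optionally passed to an instruction-tuned LLM for summarization and returned as JSON or plain text.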
---
## Pre-Filled Samples
Below are pre-configured blueprints for deploying Whisper transcription using different GPU configurations on Oracle Cloud Infrastructure.
| Feature Showcase Title | Title | Description | Blueprint File |
|---|---|---|---|
| Deploy Whisper transcription on A10 GPU for real-time speech-to-text | A10 Transcription | Real-time audio transcription with Whisper on BM.GPU.A10.8 | [whisper-transcription-A10.json](whisper-transcription-A10.json) |
| Deploy Whisper transcription on A100 GPU for high-speed processing | A100 Transcription | High-performance Whisper transcription using BM.GPU.A100.8 | [whisper-transcription-A100.json](whisper-transcription-A100.json) |
| Deploy Whisper transcription on H100 GPU for next-gen AI workloads | H100 Transcription | Ultra-fast Whisper transcription on BM.GPU.H100.8 | [whisper-transcription-H100.json](whisper-transcription-H100.json) |