decent-whisper

use the fastest Whisper implementation on any hardware

- this package is still a work in progress and not ready for use yet

backends

Currently, this package can dispatch to (in order of preference):

  1. insanely-fast-whisper (on NVIDIA systems)
  2. mlx-whisper (on Apple Silicon)
  3. faster-whisper (everything else)
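The dispatch order above could be sketched roughly as follows. This is a hypothetical illustration, not the package's actual implementation; the helper name `pick_backend` is made up, and the detection logic (CUDA availability via torch, Apple Silicon via `platform`) is an assumption about how such a check might work:

```python
import importlib.util
import platform


def pick_backend() -> str:
    """Hypothetical sketch of the preference order described above."""
    # 1. NVIDIA GPU available -> insanely-fast-whisper (requires torch + CUDA)
    if importlib.util.find_spec("torch") is not None:
        import torch

        if torch.cuda.is_available():
            return "insanely-fast-whisper"
    # 2. Apple Silicon -> mlx-whisper
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mlx-whisper"
    # 3. Everything else -> faster-whisper
    return "faster-whisper"
```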

installation

If you want to use insanely-fast-whisper (on an NVIDIA system), you first have to install PyTorch as recommended in the PyTorch docs. It is also recommended to install the CUDA SDK and set the $CUDA_HOME environment variable so that flash-attn can be installed.
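A possible install sequence on an NVIDIA system might look like the following. This is an environment-setup sketch, not instructions from this project: the exact PyTorch command depends on your CUDA version (check the PyTorch docs), the CUDA path varies by system, and the final package name is an assumption:

```shell
# Install PyTorch with CUDA support; pick the index URL matching your
# CUDA version from the official PyTorch installation docs.
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Point flash-attn's build at the CUDA SDK (path varies by system).
export CUDA_HOME=/usr/local/cuda
pip install flash-attn --no-build-isolation

# Assumed package name for this project.
pip install decent-whisper
```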

usage example

from decent_whisper import available_models, transcribe
from decent_whisper.model import choose_model, download_model, is_model_downloaded

model_info = choose_model(
    available_models(),
    model_size="small",
)

if not model_info:
    raise ValueError("No matching model found")

if not is_model_downloaded(model_info):
    download_model(model_info)

segments, info = transcribe(
    "audio.mp3",
    model=model_info,
)

for segment in segments:
    print("".join(word.word for word in segment))
