Commit 1baf8d2

feat: pass --hf_token to WhisperModel for gated model support
Forward the existing --hf_token CLI argument to faster-whisper's WhisperModel via a new use_auth_token parameter on load_model(), enabling downloads of gated/private HuggingFace models.
1 parent 9d687e0 commit 1baf8d2

File tree

2 files changed: +4 −1 lines changed

whisperx/asr.py

Lines changed: 3 additions & 1 deletion

@@ -314,6 +314,7 @@ def load_model(
     download_root: Optional[str] = None,
     local_files_only=False,
     threads=4,
+    use_auth_token: Optional[Union[str, bool]] = None,
 ) -> FasterWhisperPipeline:
     """Load a Whisper model for inference.
     Args:
@@ -341,7 +342,8 @@
         compute_type=compute_type,
         download_root=download_root,
         local_files_only=local_files_only,
-        cpu_threads=threads)
+        cpu_threads=threads,
+        use_auth_token=use_auth_token)
     if language is not None:
         tokenizer = Tokenizer(model.hf_tokenizer, model.model.is_multilingual, task=task, language=language)
     else:

whisperx/transcribe.py

Lines changed: 1 addition & 0 deletions

@@ -141,6 +141,7 @@ def transcribe_task(args: dict, parser: argparse.ArgumentParser):
         task=task,
         local_files_only=model_cache_only,
         threads=faster_whisper_threads,
+        use_auth_token=hf_token,
     )

     for audio_path in args.pop("audio"):
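The change is a plain keyword-argument forwarding: `load_model()` grows an optional `use_auth_token` parameter (defaulting to `None`, so existing callers are unaffected) and passes it straight through to `WhisperModel`. The pattern can be sketched in isolation as follows; note the `WhisperModel` class below is a hypothetical stand-in for illustration only, not the real `faster_whisper.WhisperModel`:

```python
from typing import Optional, Union

# Hypothetical stand-in for faster_whisper.WhisperModel, used only to
# show how the token is forwarded; the real class downloads model
# weights from the HuggingFace Hub.
class WhisperModel:
    def __init__(self, arch: str, cpu_threads: int = 4,
                 use_auth_token: Optional[Union[str, bool]] = None):
        self.arch = arch
        self.cpu_threads = cpu_threads
        # A str token (or True, to reuse a cached login) unlocks
        # gated/private repositories; None keeps downloads anonymous.
        self.use_auth_token = use_auth_token

def load_model(
    whisper_arch: str,
    threads: int = 4,
    use_auth_token: Optional[Union[str, bool]] = None,
) -> WhisperModel:
    # The new parameter defaults to None, so callers that never pass
    # a token see no behavior change; it is forwarded verbatim.
    return WhisperModel(whisper_arch,
                        cpu_threads=threads,
                        use_auth_token=use_auth_token)

# A CLI front end would forward its --hf_token value the same way
# transcribe_task() does in the second hunk above:
model = load_model("large-v2", use_auth_token="hf_example_token")
print(model.use_auth_token)  # → hf_example_token
```

Defaulting to `None` rather than requiring the argument keeps the change backward compatible for every existing call site of `load_model()`.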
