@Dw9 Dw9 commented Aug 12, 2025

Note that FFmpeg has recently integrated Whisper (FFmpeg/FFmpeg@13ce36f). Testing on a multi-GPU machine with a non-zero gpu_device causes a crash:

./ffmpeg -i ../test.wav -vn -af "whisper=model=../whisper.cpp/models/ggml-large-v3-turbo.bin
:language=zh
:gpu_device=1
:queue=3
:destination=output.srt
:format=srt" -f null -
ffmpeg version N-120674-g5733e08c97 Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04)
configuration: --disable-x86asm --enable-whisper
libavutil 60. 9.100 / 60. 9.100
libavcodec 62. 12.100 / 62. 12.100
libavformat 62. 4.100 / 62. 4.100
libavdevice 62. 2.100 / 62. 2.100
libavfilter 11. 5.100 / 11. 5.100
libswscale 9. 2.100 / 9. 2.100
libswresample 6. 2.100 / 6. 2.100
[aist#0:0/pcm_s16le @ 0x5568ff8dabc0] Guessed Channel Layout: mono
Input #0, wav, from '../test.wav':
Metadata:
encoder : Lavf58.76.100
Duration: 00:00:58.62, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s
[Parsed_whisper_0 @ 0x5568ff8be280] whisper gpu 1 gpu-id:1
/home/zhibo/dw/whisper.cpp/ggml/src/ggml-backend.cpp:736: pre-allocated tensor (leaf_0) in a buffer (CUDA0) that cannot run the operation (NONE)
/usr/local/lib/libggml-base.so(+0x16ceb)[0x7fd221397ceb]
/usr/local/lib/libggml-base.so(ggml_print_backtrace+0x21f)[0x7fd22139814f]
/usr/local/lib/libggml-base.so(ggml_abort+0x152)[0x7fd221398322]
/usr/local/lib/libggml-base.so(+0x2c174)[0x7fd2213ad174]
/usr/local/lib/libggml-base.so(+0x2d846)[0x7fd2213ae846]
/usr/local/lib/libggml-base.so(ggml_backend_sched_alloc_graph+0xcd)[0x7fd2213b007d]
/usr/local/lib/libwhisper.so.1(+0x30163)[0x7fd232886163]
/usr/local/lib/libwhisper.so.1(whisper_init_state+0x99a)[0x7fd23288b1da]
/usr/local/lib/libwhisper.so.1(whisper_init_from_file_with_params+0x37)[0x7fd2328953d7]
./ffmpeg(+0x41cf6a)[0x5568d4beef6a]
./ffmpeg(+0x1f9dcf)[0x5568d49cbdcf]
./ffmpeg(+0x20f928)[0x5568d49e1928]
./ffmpeg(+0x21025b)[0x5568d49e225b]
./ffmpeg(+0x1a5e11)[0x5568d4977e11]
./ffmpeg(+0x1a95bf)[0x5568d497b5bf]
./ffmpeg(+0x1ab6f3)[0x5568d497d6f3]
./ffmpeg(+0x1b11f0)[0x5568d49831f0]
./ffmpeg(+0x1b2b9f)[0x5568d4984b9f]
./ffmpeg(+0x1b30e9)[0x5568d49850e9]
./ffmpeg(+0x1b3b39)[0x5568d4985b39]
./ffmpeg(+0x1b7a97)[0x5568d4989a97]
./ffmpeg(+0x1baecb)[0x5568d498cecb]
./ffmpeg(+0x199a48)[0x5568d496ba48]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7fd232656d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7fd232656e40]
./ffmpeg(+0x19a565)[0x5568d496c565]
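Until this fix is picked up, one common workaround on the CUDA backend (an assumption on my part, not something stated in the report) is to hide all other cards from the process with CUDA_VISIBLE_DEVICES, so the target card is enumerated as device 0 and gpu_device can be left at its default:

```shell
# Workaround sketch (assumes the CUDA backend; paths are the ones from the
# report above). CUDA_VISIBLE_DEVICES=1 makes physical card 1 the only
# visible device, so inside the process it is enumerated as device 0 and
# the whisper filter's default gpu_device=0 lands on the intended card.
CUDA_VISIBLE_DEVICES=1 ./ffmpeg -i ../test.wav -vn \
  -af "whisper=model=../whisper.cpp/models/ggml-large-v3-turbo.bin:language=zh:queue=3:destination=output.srt:format=srt" \
  -f null -
```

This sidesteps the in-process device selection that triggered the abort in ggml_backend_sched_alloc_graph, rather than fixing it.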

@ggerganov ggerganov requested a review from danbev August 12, 2025 10:01
@ggerganov ggerganov merged commit 5527454 into ggml-org:master Aug 12, 2025
55 checks passed
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Sep 24, 2025
* ggerganov/master: (72 commits)
  node : add win platform check for require path (ggml-org#3363)
  ci : update main-cuda.Dockerfile (ggml-org#3371)
  whisper : fixed crash in GPU device selection on multi-GPU systems (ggml-org#3372)
  wasm : change ggml model host to HF (ggml-org#3369)
  ruby : Add ruby binding for max_len (ggml-org#3365)
  stream.wasm : add language selection support (ggml-org#3354)
  whisper : reset conv scheduler when CoreML is used (ggml-org#3350)
  ggml : remove old kompute, cann (skip) (ggml-org#3349)
  talk-llama : sync llama.cpp
  sync : ggml
  vulkan : add fp16 support for the conv_2d kernel (llama/14872)
  vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)
  HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
  CANN: Implement GLU ops (llama/14884)
  musa: fix build warnings (unused variable) (llama/14869)
  ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
  metal: SSM_SCAN performance (llama/14743)
  opencl: add fused `rms_norm_mul` (llama/14841)
  ggml : remove invalid portPos specifiers from dot files (llama/14838)
  rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)
  ...