@danbev danbev commented Aug 4, 2025

This commit adds a check for the platform in use and adjusts the require path to the addon.node shared library accordingly.

The motivation for this change is that on Windows the addon.node library is built into build\bin\Release, while on Linux it is built into build/Release.

Resolves: #3360


With this change I'm able to run the example using the following command:

> node index.js --language='en' --model='../../models/ggml-base.en.bin' --fname_inp='../../samples/jfk.wav'
whisperParams = {
  language: 'en',
  model: '../../models/ggml-base.en.bin',
  fname_inp: '../../samples/jfk.wav',
  use_gpu: true,
  flash_attn: false,
  no_prints: true,
  comma_in_time: false,
  translate: true,
  no_timestamps: false,
  detect_language: false,
  audio_ctx: 0,
  max_len: 0,
  progress_callback: [Function: progress_callback]
}
progress: 0%

progress: 100%

{
  transcription: [
    [
      '00:00:00.000',
      '00:00:11.000',
      ' And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.'
    ]
  ]
}

This commit adds a check for the platform in use and adjusts the require
path to the addon.node shared library accordingly.

The motivation for this change is that on Windows the addon.node library
is built into build\bin\Release, while on Linux it is built into
build/Release.

Resolves: ggml-org#3360
@danbev danbev merged commit 040510a into ggml-org:master Aug 15, 2025
57 checks passed
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Sep 24, 2025
* ggerganov/master: (72 commits)
  node : add win platform check for require path (ggml-org#3363)
  ci : update main-cuda.Dockerfile (ggml-org#3371)
  whisper : fixed crash in GPU device selection on multi-GPU systems (ggml-org#3372)
  wasm : change ggml model host to HF (ggml-org#3369)
  ruby : Add ruby binding for max_len (ggml-org#3365)
  stream.wasm : add language selection support (ggml-org#3354)
  whisper : reset conv scheduler when CoreML is used (ggml-org#3350)
  ggml : remove old kompute, cann (skip) (ggml-org#3349)
  talk-llama : sync llama.cpp
  sync : ggml
  vulkan : add fp16 support for the conv_2d kernel (llama/14872)
  vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)
  HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
  CANN: Implement GLU ops (llama/14884)
  musa: fix build warnings (unused variable) (llama/14869)
  ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
  metal: SSM_SCAN performance (llama/14743)
  opencl: add fused `rms_norm_mul` (llama/14841)
  ggml : remove invalid portPos specifiers from dot files (llama/14838)
  rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)
  ...

Closes: An error occurred when running the addon.node example: failed to initialize whisper context (#3360)