v3.0.0-beta.14
Pre-release
3.0.0-beta.14 (2024-03-16)
Bug Fixes
- DisposedError was thrown when calling .dispose() (#178) (315a3eb)
- adapt to breaking llama.cpp changes (#178) (315a3eb)
Features
- async model and context loading (#178) (315a3eb)
- automatically try to resolve the "Failed to detect a default CUDA architecture" CUDA compilation error (#178) (315a3eb)
- detect cmake binary issues and suggest fixes on detection (#178) (315a3eb)
Shipped with llama.cpp release b2440

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)