Releases: snadampal/llama.cpp
b4334
llava : Allow locally downloaded models for QwenVL (#10833)
* Allow locally downloaded models for QwenVL
* Define model_path
* rm trailing space
Co-authored-by: Xuan Son Nguyen <[email protected]>
b4027
cuda : clear error after changing peer access (#10153)
b3375
ggml : add NVPL BLAS support (#8329) (#8425)
* ggml : add NVPL BLAS support
* ggml : replace `<BLASLIB>_ENABLE_CBLAS` with `GGML_BLAS_USE_<BLASLIB>`
Co-authored-by: ntukanov <[email protected]>
b3264
json: attempt to skip slow tests when running under emulator (#8189)