Releases: JamePeng/llama-cpp-python
v0.3.16-cu128-Basic-win-20251119
feat: Update Llava15ChatHandler to accept use_gpu, image_min_tokens, and image_max_tokens.
- The image_min_tokens parameter can now be passed to Qwen3VLChatHandler to support bbox grounding tasks.
- Add validation to ensure image_max_tokens is not less than image_min_tokens.
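A minimal sketch of how the new handler parameters might be used. The model paths are placeholders, and the exact import location of Qwen3VLChatHandler in this fork is an assumption; adjust both to your installation.

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Qwen3VLChatHandler  # assumed import path in this fork

chat_handler = Qwen3VLChatHandler(
    clip_model_path="./mmproj-qwen3-vl.gguf",  # placeholder path to the multimodal projector
    use_gpu=True,            # offload the vision encoder to the GPU
    image_min_tokens=1024,   # raise the token floor per image for bbox grounding tasks
    image_max_tokens=4096,   # validated to be >= image_min_tokens
)

llm = Llama(
    model_path="./qwen3-vl.gguf",  # placeholder path to the language model
    chat_handler=chat_handler,
    n_ctx=8192,                    # leave context room for the image tokens
)
```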
feat: Update llama.cpp to the 20251115 snapshot and move the ggml-related code to _ggml.py.
feat: Remove the no-longer-needed mctx_params.verbosity parameter
feat: Add mtmd_helper_log_set usage to align with llama.cpp
feat: Add a Basic workflow for cu128 Windows wheels
feat: Update Submodule vendor/llama.cpp cb623de..07b0e7a
v0.3.16-cu126-Basic-win-20251119
feat: Update Llava15ChatHandler to accept use_gpu, image_min_tokens, and image_max_tokens.
- The image_min_tokens parameter can now be passed to Qwen3VLChatHandler to support bbox grounding tasks.
- Add validation to ensure image_max_tokens is not less than image_min_tokens.
feat: Update llama.cpp to the 20251115 snapshot and move the ggml-related code to _ggml.py.
feat: Remove the no-longer-needed mctx_params.verbosity parameter
feat: Add mtmd_helper_log_set usage to align with llama.cpp
feat: Add a Basic workflow for cu128 Windows wheels
feat: Update Submodule vendor/llama.cpp cb623de..07b0e7a
v0.3.16-cu124-Basic-win-20251119
feat: Update Llava15ChatHandler to accept use_gpu, image_min_tokens, and image_max_tokens.
- The image_min_tokens parameter can now be passed to Qwen3VLChatHandler to support bbox grounding tasks.
- Add validation to ensure image_max_tokens is not less than image_min_tokens.
feat: Update llama.cpp to the 20251115 snapshot and move the ggml-related code to _ggml.py.
feat: Remove the no-longer-needed mctx_params.verbosity parameter
feat: Add mtmd_helper_log_set usage to align with llama.cpp
feat: Add a Basic workflow for cu128 Windows wheels
feat: Update Submodule vendor/llama.cpp cb623de..07b0e7a