Releases: JamePeng/llama-cpp-python
v0.3.16-cu128-AVX2-linux-20251023
feat: Add sm_87 and sm_101 to the CUDA compilation targets
feat: Update Submodule vendor/llama.cpp df1b612..dd62dcf
feat: Update llama model parameters (check_tensors, use_extra_bufts, no_host)
feat: Sync model: Granite Docling + Idefics3 preprocessing (SmolVLM)
feat: Sync server: context checkpointing for hybrid and recurrent models
feat: Sync llama: print memory breakdown on exit
feat: Synchronize enum values with upstream llama.cpp
feat: Add image index numbers to mitigate hallucination when multiple images are passed to the MiniCPM multimodal model series
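The sm_87 / sm_101 entries above refer to CUDA compute capabilities added to the prebuilt wheel (8.7 corresponds to Jetson Orin-class GPUs; 10.1 to newer Blackwell-class parts and requires a CUDA 12.8+ toolchain). For a source build you can request the same targets yourself via the standard llama-cpp-python CMake passthrough; the architecture list below is an assumption to adapt to your own GPUs, not the exact flags used for these wheels:

```shell
# Build llama-cpp-python from source with explicit CUDA architectures.
# GGML_CUDA=on enables the CUDA backend; 87;101 mirrors the targets named above.
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=87;101" \
    pip install llama-cpp-python --no-cache-dir
```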
v0.3.16-cu126-AVX2-linux-20251023
feat: Add sm_87 to the CUDA compilation targets
feat: Update Submodule vendor/llama.cpp df1b612..dd62dcf
feat: Update llama model parameters (check_tensors, use_extra_bufts, no_host)
feat: Sync model: Granite Docling + Idefics3 preprocessing (SmolVLM)
feat: Sync server: context checkpointing for hybrid and recurrent models
feat: Sync llama: print memory breakdown on exit
feat: Synchronize enum values with upstream llama.cpp
feat: Add image index numbers to mitigate hallucination when multiple images are passed to the MiniCPM multimodal model series
v0.3.16-cu124-AVX2-linux-20251023
feat: Add sm_87 to the CUDA compilation targets
feat: Update Submodule vendor/llama.cpp df1b612..dd62dcf
feat: Update llama model parameters (check_tensors, use_extra_bufts, no_host)
feat: Sync model: Granite Docling + Idefics3 preprocessing (SmolVLM)
feat: Sync server: context checkpointing for hybrid and recurrent models
feat: Sync llama: print memory breakdown on exit
feat: Synchronize enum values with upstream llama.cpp
feat: Add image index numbers to mitigate hallucination when multiple images are passed to the MiniCPM multimodal model series
v0.3.16-cu126-AVX2-win-20251022
feat: Update Submodule vendor/llama.cpp df1b612..03792ad
feat: Update llama model parameters (check_tensors, use_extra_bufts, no_host)
feat: Sync model: Granite Docling + Idefics3 preprocessing (SmolVLM)
feat: Sync server: context checkpointing for hybrid and recurrent models
feat: Sync llama: print memory breakdown on exit
feat: Synchronize enum values with upstream llama.cpp
feat: Add image index numbers to mitigate hallucination when multiple images are passed to the MiniCPM multimodal model series
v0.3.16-cu124-AVX2-win-20251022
feat: Update Submodule vendor/llama.cpp df1b612..03792ad
feat: Update llama model parameters (check_tensors, use_extra_bufts, no_host)
feat: Sync model: Granite Docling + Idefics3 preprocessing (SmolVLM)
feat: Sync server: context checkpointing for hybrid and recurrent models
feat: Sync llama: print memory breakdown on exit
feat: Synchronize enum values with upstream llama.cpp
feat: Add image index numbers to mitigate hallucination when multiple images are passed to the MiniCPM multimodal model series
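The MiniCPM image-index change noted in the releases above boils down to one idea: when a single prompt carries several images, label each one with an explicit index so the model can refer to "image 1", "image 2", and so on unambiguously instead of confusing them. The sketch below illustrates that approach by building an OpenAI-style multimodal message with a numbered text part before each image; the helper name and label format are illustrative assumptions, not the fork's exact prompt format:

```python
def build_indexed_image_message(question: str, image_urls: list[str]) -> dict:
    """Interleave a numbered text label before every image content part,
    then append the user's question at the end."""
    content = []
    for i, url in enumerate(image_urls, start=1):
        # The explicit "Image N:" marker is what disambiguates the images.
        content.append({"type": "text", "text": f"Image {i}:"})
        content.append({"type": "image_url", "image_url": {"url": url}})
    content.append({"type": "text", "text": question})
    return {"role": "user", "content": content}

msg = build_indexed_image_message(
    "Which image shows a cat?",
    ["file:///tmp/a.png", "file:///tmp/b.png"],
)
```

A message built this way can be passed to `create_chat_completion` with a multimodal chat handler; the key point is only the interleaved index markers.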