Releases: lizhenneng/llama.cpp
b6626
Improve Mobile UI for dialogs and action dropdowns (#16222)

* fix: Always show conversation item actions
* feat: Improve Alert Dialog and Dialog mobile UI
* feat: Add settings reset to default confirmation
* fix: Close Edit dialog on save
* chore: update webui build output
* webui: implement proper z-index system and scroll management
  - Add CSS variable for centralized z-index control
  - Fix dropdown positioning with Settings dialog conflicts
  - Prevent external scroll interference with proper event handling
  - Clean up hardcoded z-index values for maintainable architecture
* webui: ensured the settings dialog enforces dynamic viewport height on mobile while retaining existing desktop sizing overrides
* feat: Use `dvh` instead of computed px height for dialogs max height on mobile
* chore: update webui build output
* feat: Improve Settings fields UI
* chore: update webui build output
* chore: update webui build output

---------

Co-authored-by: Pascal <[email protected]>
b6566
model-conversion : make causal-verify-logits fail with model names c…
b5415
server : added --no-prefill-assistant flag (#13608)

* added no-prefill-assistant flag
* reworded documentation comment
* updated server README.md
b5372
vulkan: workaround FA compile failures on macos (#13517)
b5115
llama-model : add Glm4Model implementation for GLM-4-0414 (#12867)

* GLM-4-0414
* use original one
* Using with tensor map
* fix bug
* change order
* change order
* format with flake8
b5113
SYCL: Add fp16 type support to unary op kernels (#12788)

* SYCL: Add fp16 support to some elementwise OP kernels
* remove comment ggml-ci
* Use static_cast directly
* remove not needed cast from tanh
* Use static cast and remove unneeded castings
* Adjust device_support_op for unary OPs
* Use cast_data and typed_data struct to deduplicate casting code
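The last bullet above describes centralizing pointer casts in a `cast_data`/`typed_data` pair so each unary kernel no longer repeats its own casting logic. Below is a minimal plain-C++ sketch of that pattern; only the names `typed_data` and `cast_data` come from the commit message, while the struct layout, the `unary_tanh` kernel, and the float-compute/storage-cast round trip are assumptions for illustration (the real code is SYCL and would use e.g. `sycl::half` for fp16 storage):

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sketch, not the actual llama.cpp SYCL code: pair the
// already-casted source and destination pointers in one struct.
template <typename T>
struct typed_data {
    const T *src;
    T       *dst;
};

// cast_data: the single place that performs the void* -> T* casts,
// deduplicating what each kernel previously did on its own.
template <typename T>
typed_data<T> cast_data(const void *src, void *dst) {
    return { static_cast<const T *>(src), static_cast<T *>(dst) };
}

// A unary op written once against typed_data<T>; instantiating with a
// half type (in SYCL, sycl::half) is what adds fp16 support.
template <typename T>
void unary_tanh(const void *src_v, void *dst_v, size_t n) {
    typed_data<T> d = cast_data<T>(src_v, dst_v);
    for (size_t i = 0; i < n; ++i) {
        // compute in float, then cast back to the storage type T
        d.dst[i] = static_cast<T>(std::tanh(static_cast<float>(d.src[i])));
    }
}
```

The design point mirrored here is that adding a new storage type touches only the template parameter, not every kernel's casting boilerplate.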