Conversation


@jesusmb1995 commented Aug 28, 2025

Builds the changes from #1 on top of the Tether synced fork, after #4 was merged to sync with upstream and add the Tether fork changes.

git diff can be used to check that the new rebased branch includes the same changes as temp-load-from-buffer; the changes shown should be only those introduced by the rebase:

➜  llamacpp_tether git:(temp-load-from-buffer-rebased-QVAC4552) ✗ git diff --name-only temp-load-from-buffer-rebased-QVAC4552 tetherto/temp-load-from-buffer | tee
CMakeLists.txt
cmake/llama-config.cmake.in
common/CMakeLists.txt
ggml/CMakeLists.txt
ggml/cmake/ggml-config.cmake.in
ggml/src/CMakeLists.txt
ggml/src/ggml-vulkan/CMakeLists.txt
src/CMakeLists.txt
tools/mtmd/CMakeLists.txt

The first commit of this PR is based on:

commit 88d711fad2ac0a1392eb916663448cbf71d64b0a
Author: Jesús <[email protected]>
Date:   Wed Jul 16 11:07:22 2025 +0200

    [common] Pure interface for files
    
    Convert llama_file to a pure virtual class that can be overridden by multiple implementations (disk, single memory buffer, ...)

commit ab269c4c47b19117b956d65539b733c0fe136d33 (tetherto/master, tetherto/HEAD)
Merge: 4fb255655 ce648804b
Author: Yury Samarin <[email protected]>
Date:   Thu Aug 28 08:59:08 2025 +0300

    Merge pull request #4 from jpgaribotti/QVAC-4552
    
    QVAC-4552: Sync port with upstream version b5932

commit ce648804b2881100bd83bd488c8e38a715e7431c (tag: b5932.0.0, jpgaribotti/QVAC-4552)
Author: Juan Pablo Garibotti Arias <[email protected]>
Date:   Wed Aug 13 12:42:39 2025 +0200

    Export mtmd target

Convert llama_file to a pure virtual class that can be overridden by multiple implementations (disk, single memory buffer, ...).
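For illustration, such an interface could look roughly like this (a sketch only; member names mirror the usual llama_file helpers, not necessarily the PR's exact signatures):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: a pure virtual file abstraction that disk- and memory-backed
// implementations can override.
struct llama_file {
    virtual ~llama_file() = default;

    virtual size_t   tell() const = 0;                            // current offset
    virtual size_t   size() const = 0;                            // total size in bytes
    virtual void     seek(size_t offset, int whence) const = 0;   // reposition
    virtual void     read_raw(void * dst, size_t len) const = 0;  // copy out bytes
    virtual uint32_t read_u32() const = 0;                        // convenience reader
    virtual void     write_raw(const void * src, size_t len) const = 0;
};
```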
Define a new macro LLAMA_LOG_CMAKE_DEBUG that becomes a no-op when a release build is activated. This allows good tracing and debugging capabilities that will be especially useful for the async loading of multiple model shards.
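A minimal sketch of how such a macro could be wired up (assuming a debug-only compile definition named LLAMA_DEBUG and the existing LLAMA_LOG_DEBUG helper; the actual wiring in the PR may differ):

```cpp
// Sketch: expands to a real log call in debug builds and to a no-op
// in release builds, so release binaries pay no logging cost.
#ifdef LLAMA_DEBUG
#    define LLAMA_LOG_CMAKE_DEBUG(...) LLAMA_LOG_DEBUG(__VA_ARGS__)
#else
#    define LLAMA_LOG_CMAKE_DEBUG(...) ((void) 0)
#endif
```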
This change adds an additional automated test that loads from disk, to ensure the existing functionality does not break.
The gguf-split utility now generates a `.txt` file listing all tensors. This is useful both for manual inspection/debugging and for incremental tensor loading, where it is not possible to know which tensors are present in other split files (that information is critical for handling optional tensors).
Add a flag to the tool to ensure that certain tensor names are always followed by another tensor and never placed at the end of a shard. This guarantees the shard will not be released while the tensor is being processed, and avoids missing-file failures for duplicate tensors that are re-referenced a few tensors later (typically token_embd.weight / output).
Show which shards each tensor belongs to.
- Ensure a char-traits implementation for uint8_t exists that can be used with std::basic_streambuf.
- Add an implementation of std::basic_streambuf backed by a single vector. It will be used by llama.cpp and tests when loading from a single memory buffer.
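A sketch of both pieces (the type names uint8_char_traits and vector_streambuf are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <cwchar>   // std::mbstate_t
#include <ios>
#include <streambuf>
#include <vector>

// Sketch: the char-traits members std::basic_streambuf needs for uint8_t.
struct uint8_char_traits {
    using char_type  = uint8_t;
    using int_type   = int;
    using off_type   = std::streamoff;
    using pos_type   = std::streampos;
    using state_type = std::mbstate_t;

    static void assign(char_type & a, const char_type & b) { a = b; }
    static bool eq(char_type a, char_type b) { return a == b; }
    static bool lt(char_type a, char_type b) { return a < b; }
    static int  compare(const char_type * a, const char_type * b, size_t n) { return std::memcmp(a, b, n); }
    static size_t length(const char_type * s) { size_t n = 0; while (s[n]) { ++n; } return n; }
    static const char_type * find(const char_type * s, size_t n, const char_type & c) {
        return static_cast<const char_type *>(std::memchr(s, c, n));
    }
    static char_type * move(char_type * d, const char_type * s, size_t n) { return static_cast<char_type *>(std::memmove(d, s, n)); }
    static char_type * copy(char_type * d, const char_type * s, size_t n) { return static_cast<char_type *>(std::memcpy(d, s, n)); }
    static char_type * assign(char_type * s, size_t n, char_type c) { return static_cast<char_type *>(std::memset(s, c, n)); }
    static int_type  to_int_type(char_type c) { return c; }
    static char_type to_char_type(int_type i) { return static_cast<char_type>(i); }
    static bool      eq_int_type(int_type a, int_type b) { return a == b; }
    static int_type  eof() { return -1; }
    static int_type  not_eof(int_type i) { return i == eof() ? 0 : i; }
};

// Sketch: a read-only streambuf whose get area is a single vector<uint8_t>.
class vector_streambuf : public std::basic_streambuf<uint8_t, uint8_char_traits> {
public:
    explicit vector_streambuf(std::vector<uint8_t> & buf) {
        uint8_t * base = buf.data();
        setg(base, base, base + buf.size());   // expose the whole vector for reading
    }
};
```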
Override the pure virtual interface with a class that operates on a single memory buffer.
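Building on the interface sketched above, a memory-backed implementation could look roughly like this (a sketch, not the PR's exact class):

```cpp
#include <cstdio>    // SEEK_SET / SEEK_CUR / SEEK_END
#include <cstring>
#include <stdexcept>
#include <vector>

// Sketch: llama_file implementation over a single in-memory buffer.
struct llama_file_buffer : llama_file {
    std::vector<uint8_t> data;
    mutable size_t pos = 0;

    size_t tell() const override { return pos; }
    size_t size() const override { return data.size(); }

    void seek(size_t offset, int whence) const override {
        if      (whence == SEEK_SET) { pos = offset; }
        else if (whence == SEEK_CUR) { pos += offset; }
        else                         { pos = data.size() + offset; }  // SEEK_END
    }

    void read_raw(void * dst, size_t len) const override {
        if (pos + len > data.size()) {
            throw std::runtime_error("read past end of buffer");
        }
        std::memcpy(dst, data.data() + pos, len);
        pos += len;
    }

    uint32_t read_u32() const override {
        uint32_t v;
        read_raw(&v, sizeof(v));
        return v;
    }

    void write_raw(const void *, size_t) const override {
        throw std::runtime_error("memory buffer is read-only");
    }
};
```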
Auxiliary function to convert a list of C strings to a vector of C++ strings.
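Such a helper can be as small as this (illustrative name and signature):

```cpp
#include <string>
#include <vector>

// Sketch: copy a C-string array into owning C++ strings.
static std::vector<std::string> to_strings(const char ** strs, size_t count) {
    return std::vector<std::string>(strs, strs + count);
}
```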
Add a new GGUF reader implementation that can read metadata from a memory buffer.
- Add code to load a GGUF file from a variant (memory or disk), as sketched below.
- Some structs simplify loading a file and keeping track of the pointers (which now live in the same struct).
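A minimal sketch of the variant-based dispatch (the helper names are hypothetical):

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <variant>
#include <vector>

struct llama_file;  // the pure virtual interface sketched earlier

// Hypothetical factories, one per backing store:
std::unique_ptr<llama_file> open_gguf_from_disk(const std::string & path);
std::unique_ptr<llama_file> open_gguf_from_memory(std::vector<uint8_t> data);

// A GGUF source is either a path on disk or a buffer already in memory.
using gguf_source = std::variant<std::string, std::vector<uint8_t>>;

std::unique_ptr<llama_file> open_gguf(gguf_source src) {
    if (auto * path = std::get_if<std::string>(&src)) {
        return open_gguf_from_disk(*path);
    }
    return open_gguf_from_memory(std::move(std::get<std::vector<uint8_t>>(src)));
}
```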
Move the loader code that processes a file after it has been loaded into memory and populates the loader's own attributes into a reusable method.
Add a new C++ function to the main llama header to load from a single memory buffer, and propagate the changes to internal calls/constructors.
A file buffer that can be fulfilled using string keys; the extract method blocks until the file is provided.
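A sketch of how such a keyed, blocking buffer can be built from a mutex and a condition variable (the class name is illustrative; the extract semantics follow the description above):

```cpp
#include <condition_variable>
#include <cstdint>
#include <map>
#include <mutex>
#include <string>
#include <vector>

// Sketch: files are provided under string keys; extract() blocks until
// the requested key has been provided, then takes ownership of the bytes.
class file_buffer_map {
    std::mutex mtx;
    std::condition_variable cv;
    std::map<std::string, std::vector<uint8_t>> files;

public:
    void provide(const std::string & key, std::vector<uint8_t> data) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            files[key] = std::move(data);
        }
        cv.notify_all();   // wake any extract() waiting on this key
    }

    std::vector<uint8_t> extract(const std::string & key) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [&] { return files.count(key) != 0; });
        auto it = files.find(key);
        std::vector<uint8_t> data = std::move(it->second);
        files.erase(it);
        return data;
    }
};
```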
Handle the logic for incrementally loading files and tensors in model shards.
Refactor backend buffer creation (for model loading) into functions.
- The function now takes size_data instead of the member attribute.
- Sanity checks of file pointer handles

These two changes will be useful when calling `load_all_data` multiple times during incremental shard load.
Adapt the loader and model load to incrementally load files and upload tensors.
Add functions to the llama.cpp public headers to asynchronously load shards.
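A rough sketch of how such an API might be driven from application code (every name below is hypothetical; the PR's real functions live in the llama.cpp public headers, but their exact signatures are not reproduced here):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct llama_model;             // opaque, as in the real public headers
struct llama_model_load_state;  // hypothetical handle for an in-progress load

// Hypothetical entry points:
llama_model_load_state * llama_model_load_begin();
void llama_model_load_provide(llama_model_load_state * st, const char * shard_name,
                              const void * data, size_t size);
llama_model * llama_model_load_end(llama_model_load_state * st);

// Intended flow: start an async load, feed shards as they arrive
// (e.g. from the network), then wait for the fully loaded model.
llama_model * load_from_shards(const std::map<std::string, std::vector<uint8_t>> & shards) {
    llama_model_load_state * st = llama_model_load_begin();
    for (const auto & [name, bytes] : shards) {
        llama_model_load_provide(st, name.c_str(), bytes.data(), bytes.size());
    }
    return llama_model_load_end(st);
}
```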
Split out some common loading functionality. This will help with the memory-loading tests.
Add a submodule with reusable code for tests.
Adapt the embedding example to showcase how to load from memory. It can be configured through environment variables.
Adapt the simple example to showcase how to load from memory. It can also be configured with environment variables.

Qwen3, for example, can be used with the simple example.
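The environment-variable switch in the examples could be as simple as this (the variable name here is illustrative, not necessarily the one the examples read):

```cpp
#include <cstdlib>
#include <string>

// Sketch: opt into memory loading when an environment variable is set.
static bool load_from_memory_requested() {
    const char * v = std::getenv("LLAMA_EXAMPLE_LOAD_FROM_MEMORY");
    return v != nullptr && std::string(v) != "0";
}
```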
Add some automatic tests that load from memory (a single buffer or multiple async splits).
@jesusmb1995 force-pushed the temp-load-from-buffer-rebased-QVAC4552 branch from bbd1b71 to b6d441b on August 28, 2025 08:34
@olek-tether merged commit e394035 into tetherto:master on Aug 28, 2025
9 of 47 checks passed