Commit c670222

feat: Update llama.cpp to ggml-org/llama.cpp@c0159f9 (#2161)
* Update llama.cpp to c0159f9c1
* Add changelog entry for llama.cpp update
Parent: ac59e5a. Commit: c670222.

File tree

3 files changed: +18 −1 lines

CHANGELOG.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -7,6 +7,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+- feat: Update llama.cpp to ggerganov/llama.cpp@c0159f9c1f874da15e94f371d136f5920b4b5335 by @abetlen in #2161
 - fix: Handle embedding models without KV memory and test embeddings with a real GGUF embedding model by @abetlen in #2160
 - fix(ci): Shrink CUDA wheel fatbins so CUDA releases stay under GitHub's asset size limit by @abetlen in #2158
```

llama_cpp/llama_cpp.py

Lines changed: 16 additions & 0 deletions

```diff
@@ -1314,6 +1314,22 @@ def llama_model_load_from_splits(
     ...
 
 
+# // Load a model from an open FILE pointer
+# LLAMA_API struct llama_model * llama_model_load_from_file_ptr(
+#     FILE * file,
+#     struct llama_model_params params);
+@ctypes_function(
+    "llama_model_load_from_file_ptr",
+    [ctypes.c_void_p, llama_model_params],
+    llama_model_p_ctypes,
+)
+def llama_model_load_from_file_ptr(
+    file: ctypes.c_void_p, params: llama_model_params, /
+) -> Optional[llama_model_p]:
+    """Load a model from an open FILE pointer."""
+    ...
+
+
 # LLAMA_API void llama_model_save_to_file(
 #     const struct llama_model * model,
 #     const char * path_model);
```
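The new binding models the C `FILE *` argument as `ctypes.c_void_p`, since ctypes has no dedicated `FILE *` type. A minimal sketch of how a caller might obtain such a pointer from Python, assuming a POSIX libc reachable via `ctypes.CDLL(None)` (the actual `llama_model_load_from_file_ptr` call is left as a comment, since it needs a loaded libllama and a real GGUF model; the temp-file path here is a stand-in, not a working model):

```python
import ctypes
import os
import tempfile

# Resolve symbols already loaded into the process; on POSIX this
# exposes libc functions such as fopen/fclose.
libc = ctypes.CDLL(None)
libc.fopen.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libc.fopen.restype = ctypes.c_void_p   # FILE* modeled as an opaque pointer
libc.fclose.argtypes = [ctypes.c_void_p]
libc.fclose.restype = ctypes.c_int

# Stand-in for a real GGUF file; a genuine model path would go here.
tmp = tempfile.NamedTemporaryFile(suffix=".gguf", delete=False)
tmp.write(b"GGUF")
tmp.close()

fp = libc.fopen(tmp.name.encode(), b"rb")
assert fp, "fopen returned NULL"

# With libllama loaded, one would now call (sketch, not executed here):
#   params = llama_model_default_params()
#   model = llama_model_load_from_file_ptr(fp, params)
# Whether the callee closes the FILE* should be checked against
# llama.cpp's documentation; here we close it ourselves.
libc.fclose(fp)
os.unlink(tmp.name)
```

Passing `ctypes.c_void_p` keeps the binding consistent with how other opaque C handles are exposed in this module, at the cost of no type safety on the Python side: any pointer-sized value is accepted.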

vendor/llama.cpp (submodule updated to ggml-org/llama.cpp@c0159f9)
