
Commit e69c46c

Merge pull request #55 from utilityai/54-fails-to-build-not-finding-llamacpp-even-when-forced-put-into-place
added docs to remind users to populate the submodules
2 parents a99956b + 1df79ce commit e69c46c

File tree

2 files changed (+19 -0 lines)


README.md

Lines changed: 14 additions & 0 deletions
@@ -14,3 +14,17 @@ This is the home for [llama-cpp-2][crates.io]. It also contains the [llama-cpp-s
 This project was created with the explicit goal of staying as up to date as possible with llama.cpp; as a result it is dead simple, very close to raw bindings, and does not follow semver meaningfully.
 
 Check out the [docs.rs] for crate documentation or the [readme] for high level information about the project.
+
+## Hacking
+
+Ensure that when you clone this project you also clone the submodules. This can be done with the following command:
+
+```sh
+git clone --recursive https://github.com/utilityai/llama-cpp-rs
+```
+
+or if you have already cloned the project you can run:
+
+```sh
+git submodule update --init --recursive
+```
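After cloning, a quick way to sanity-check that the submodule is actually populated is to look for the same file the build script checks. This is a sketch; the `llama-cpp-sys-2/llama.cpp` path assumes you run it from the repository root, where the submodule sits inside the sys crate:

```sh
# A populated checkout contains the vendored llama.cpp sources;
# ggml.c is the file build.rs looks for before compiling.
if [ -f llama-cpp-sys-2/llama.cpp/ggml.c ]; then
  echo "submodule populated"
else
  echo "submodule missing: run 'git submodule update --init --recursive'"
fi
```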

llama-cpp-sys-2/build.rs

Lines changed: 5 additions & 0 deletions
@@ -1,11 +1,16 @@
 use std::env;
 use std::path::PathBuf;
+use std::path::Path;
 
 fn main() {
     println!("cargo:rerun-if-changed=llama.cpp");
 
     let cublas_enabled = env::var("CARGO_FEATURE_CUBLAS").is_ok();
 
+    if !Path::new("llama.cpp/ggml.c").exists() {
+        panic!("llama.cpp seems to not be populated, try running `git submodule update --init --recursive` to init.")
+    }
+
     let mut ggml = cc::Build::new();
     let mut ggml_cuda = if cublas_enabled { Some(cc::Build::new()) } else { None };
     let mut llama_cpp = cc::Build::new();
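The guard added to build.rs can be sketched as a small standalone program. Note this is an illustrative sketch, not code from the crate: `submodule_populated` is a hypothetical helper name, and the real build script simply calls `panic!` inline so Cargo aborts the build with a hint instead of a wall of confusing compile errors.

```rust
use std::path::Path;

/// Report whether the llama.cpp submodule looks populated under `root`
/// by checking for a file the build needs (the same ggml.c probe used
/// in build.rs). `submodule_populated` is a hypothetical helper name.
fn submodule_populated(root: &Path) -> bool {
    root.join("llama.cpp").join("ggml.c").exists()
}

fn main() {
    if !submodule_populated(Path::new(".")) {
        // build.rs panics here instead, which fails the build early
        // with an actionable message.
        eprintln!("llama.cpp not populated; run `git submodule update --init --recursive`");
    }
}
```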
