## `examples/llava/README-minicpmo2.6.md` (+18 −16)
Currently, this readme only supports minicpm-omni's image capabilities, and we w…

Download the [MiniCPM-o-2_6](https://huggingface.co/openbmb/MiniCPM-o-2_6) PyTorch model from Hugging Face into the "MiniCPM-o-2_6" folder.

### Build llama.cpp
Readme modification time: 20250206

If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md).

Clone llama.cpp:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```

Build llama.cpp using `CMake`:
```bash
cmake -B build
cmake --build build --config Release
```

### Usage of MiniCPM-o 2.6

Convert the PyTorch model to gguf files (you can also download our converted [gguf](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) files).
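For reference, the conversion step typically runs the surgery and image-encoder scripts shipped under `examples/llava/`, followed by the standard HF converter. This is only a sketch: it assumes the model was downloaded to `../MiniCPM-o-2_6`, and the script names and flags should be verified against your checkout.

```shell
# Sketch of the gguf conversion; paths, script names, and flags are
# assumptions to verify against your llama.cpp checkout.
python ./examples/llava/minicpmv-surgery.py -m ../MiniCPM-o-2_6
python ./examples/llava/minicpmv-convert-image-encoder-to-gguf.py \
    -m ../MiniCPM-o-2_6 \
    --minicpmv-projector ../MiniCPM-o-2_6/minicpmv.projector \
    --output-dir ../MiniCPM-o-2_6/ \
    --minicpmv_version 4
python ./convert_hf_to_gguf.py ../MiniCPM-o-2_6/model
```

This produces the projector gguf and the language-model gguf that the downloadable release bundles together.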
## `examples/llava/README-minicpmv2.5.md` (+18 −70)
Download the [MiniCPM-Llama3-V-2_5](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5) PyTorch model from Hugging Face into the "MiniCPM-Llama3-V-2_5" folder.

### Build llama.cpp
Readme modification time: 20250206

If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md).

Clone llama.cpp:
```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
```

Build llama.cpp using `CMake`:
```bash
cmake -B build
cmake --build build --config Release
```

### Usage of MiniCPM-Llama3-V 2.5

Convert the PyTorch model to gguf files (you can also download our converted [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) files).
Install [termux](https://github.com/termux/termux-app#installation) on your device and run `termux-setup-storage` to get access to your SD card (on Android 11+, run the command twice).

Finally, copy the built `llama` binaries and the model file to your device storage. Because file permissions on the Android sdcard cannot be changed, you can copy the executables to the `/data/data/com.termux/files/home/bin` path and then execute the following commands in Termux to add executable permission:

(This assumes you have pushed the built executables to the /sdcard/llama.cpp/bin path using `adb push`.)
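The copy-and-chmod step above can be sketched as a small POSIX shell function. The default paths are the ones from the text; the function itself and its name are just an illustration, not part of the repo.

```shell
# Sketch of the Termux copy-and-chmod step. Copying is necessary because the
# sdcard mount ignores permission bits, so chmod only works on the copy.
install_llama_bins() {
    src="${1:-/sdcard/llama.cpp/bin}"
    dst="${2:-/data/data/com.termux/files/home/bin}"
    mkdir -p "$dst" || return 1
    for f in "$src"/*; do
        [ -f "$f" ] || continue   # skip when the glob matches nothing
        cp "$f" "$dst/" && chmod +x "$dst/$(basename "$f")"
    done
}
```

On the device you would call it with no arguments; pass explicit source and destination directories to try it elsewhere.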
## `examples/llava/README-minicpmv2.6.md` (+18 −78)
Download the [MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6) PyTorch model from Hugging Face into the "MiniCPM-V-2_6" folder.

### Build llama.cpp
Readme modification time: 20250206

If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md).

Clone llama.cpp:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```

Build llama.cpp using `CMake`:
```bash
cmake -B build
cmake --build build --config Release
```

### Usage of MiniCPM-V 2.6

Convert the PyTorch model to gguf files (you can also download our converted [gguf](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) files).
Install [termux](https://github.com/termux/termux-app#installation) on your device and run `termux-setup-storage` to get access to your SD card (on Android 11+, run the command twice).

Finally, copy the built `llama` binaries and the model file to your device storage. Because file permissions on the Android sdcard cannot be changed, you can copy the executables to the `/data/data/com.termux/files/home/bin` path and then execute the following commands in Termux to add executable permission:

(This assumes you have pushed the built executables to the /sdcard/llama.cpp/bin path using `adb push`.)
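Once the gguf files are in place (on the device or on a desktop build), inference is a single CLI call. A sketch follows; the binary name, the quantization level, and every file path here are assumptions to adapt to your own setup.

```shell
# Hypothetical invocation; model/projector paths, quantization level, and the
# sample image are placeholders for whatever your conversion step produced.
./build/bin/llama-minicpmv-cli \
    -m ../MiniCPM-V-2_6/model/ggml-model-Q4_K_M.gguf \
    --mmproj ../MiniCPM-V-2_6/mmproj-model-f16.gguf \
    -c 4096 --temp 0.7 \
    -p "What is in the image?" \
    --image ./example.jpg
```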