`docs/multimodal/minicpmv4.0.md` (1 addition, 1 deletion):
Download [MiniCPM-V-4](https://huggingface.co/openbmb/MiniCPM-V-4) PyTorch model from huggingface to "MiniCPM-V-4" folder.

### Build llama.cpp

Readme modification time: 20250731

If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)

Download [MiniCPM-V-4_5](https://huggingface.co/openbmb/MiniCPM-V-4_5) PyTorch model from huggingface to "MiniCPM-V-4_5" folder.
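One way to do this, as a minimal sketch assuming the `huggingface_hub` command-line tool is installed (the local directory name simply mirrors the folder mentioned above; any other download method works just as well):

```bash
# Assumes the Hugging Face CLI is available: pip install -U "huggingface_hub[cli]"
huggingface-cli download openbmb/MiniCPM-V-4_5 --local-dir MiniCPM-V-4_5
```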
### Build llama.cpp

Readme modification time: 20250826

If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)

Clone llama.cpp:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```

Build llama.cpp using `CMake`:
```bash
cmake -B build
cmake --build build --config Release
```

### Usage of MiniCPM-V 4
Convert the PyTorch model to gguf files (you can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) files we provide).
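A rough sketch of the conversion steps, modeled on the MiniCPM-V conversion scripts that ship with llama.cpp: the script locations (they have lived under `examples/llava` and later `tools/mtmd/legacy-models`) and the `--minicpmv_version` value used below are assumptions, so check the scripts and their `--help` output in your checkout before running.

```bash
# Run from the llama.cpp root; assumes the PyTorch checkpoint is in ../MiniCPM-V-4_5.
# Script paths and --minicpmv_version below are assumptions -- verify against your checkout.

# 1. Split the checkpoint into the vision projector and the language model.
python ./tools/mtmd/legacy-models/minicpmv-surgery.py -m ../MiniCPM-V-4_5

# 2. Convert the image encoder / projector into an mmproj gguf file.
python ./tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py \
    -m ../MiniCPM-V-4_5 \
    --minicpmv-projector ../MiniCPM-V-4_5/minicpmv.projector \
    --output-dir ../MiniCPM-V-4_5/ \
    --minicpmv_version 6   # assumed value for this model generation

# 3. Convert the language-model half to gguf.
python ./convert_hf_to_gguf.py ../MiniCPM-V-4_5/model

# 4. Optionally quantize the F16 output (adjust the input name to what step 3 produced).
./build/bin/llama-quantize ../MiniCPM-V-4_5/model/ggml-model-f16.gguf \
    ../MiniCPM-V-4_5/model/ggml-model-Q4_K_M.gguf Q4_K_M
```

The resulting language-model gguf and mmproj gguf are then what the multimodal CLI (`llama-mtmd-cli` in recent llama.cpp builds) loads via `-m` and `--mmproj`.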