
Commit c212db6

Merge branch 'ggml-org:master' into tr/qwen3-vl
2 parents: fd55385 + df1b612

File tree

10 files changed: +301 −28 lines


common/arg.cpp

Lines changed: 0 additions & 3 deletions
```diff
@@ -3859,7 +3859,6 @@ common_params_context common_params_parser_init(common_params & params, llama_ex
     [](common_params & params) {
         params.model.hf_repo = "ggml-org/bge-small-en-v1.5-Q8_0-GGUF";
         params.model.hf_file = "bge-small-en-v1.5-q8_0.gguf";
-        params.pooling_type = LLAMA_POOLING_TYPE_NONE;
         params.embd_normalize = 2;
         params.n_ctx = 512;
         params.verbose_prompt = true;
@@ -3873,7 +3872,6 @@ common_params_context common_params_parser_init(common_params & params, llama_ex
     [](common_params & params) {
         params.model.hf_repo = "ggml-org/e5-small-v2-Q8_0-GGUF";
         params.model.hf_file = "e5-small-v2-q8_0.gguf";
-        params.pooling_type = LLAMA_POOLING_TYPE_NONE;
         params.embd_normalize = 2;
         params.n_ctx = 512;
         params.verbose_prompt = true;
@@ -3887,7 +3885,6 @@ common_params_context common_params_parser_init(common_params & params, llama_ex
     [](common_params & params) {
         params.model.hf_repo = "ggml-org/gte-small-Q8_0-GGUF";
         params.model.hf_file = "gte-small-q8_0.gguf";
-        params.pooling_type = LLAMA_POOLING_TYPE_NONE;
         params.embd_normalize = 2;
         params.n_ctx = 512;
         params.verbose_prompt = true;
```

tools/rpc/README.md

Lines changed: 41 additions & 22 deletions
```diff
@@ -4,7 +4,7 @@
 > This example and the RPC backend are currently in a proof-of-concept development stage. As such, the functionality is fragile and
 > insecure. **Never run the RPC server on an open network or in a sensitive environment!**
 
-The `rpc-server` allows running `ggml` backend on a remote host.
+The `rpc-server` allows exposing `ggml` devices on a remote host.
 The RPC backend communicates with one or several instances of `rpc-server` and offloads computations to them.
 This can be used for distributed LLM inference with `llama.cpp` in the following way:
 
```
````diff
@@ -14,28 +14,34 @@ flowchart TD
     rpcb<-->|TCP|srvb
     rpcb<-.->|TCP|srvn
     subgraph hostn[Host N]
-        srvn[rpc-server]<-.->backend3["Backend (CUDA,Metal,etc.)"]
+        srvn[rpc-server]<-.->dev4["CUDA0"]
+        srvn[rpc-server]<-.->dev5["CPU"]
     end
     subgraph hostb[Host B]
-        srvb[rpc-server]<-->backend2["Backend (CUDA,Metal,etc.)"]
+        srvb[rpc-server]<-->dev3["Metal"]
     end
     subgraph hosta[Host A]
-        srva[rpc-server]<-->backend["Backend (CUDA,Metal,etc.)"]
+        srva[rpc-server]<-->dev["CUDA0"]
+        srva[rpc-server]<-->dev2["CUDA1"]
     end
     subgraph host[Main Host]
-        local["Backend (CUDA,Metal,etc.)"]<-->ggml[llama-cli]
+        local["Local devices"]<-->ggml[llama-cli]
         ggml[llama-cli]<-->rpcb[RPC backend]
     end
     style hostn stroke:#66,stroke-width:2px,stroke-dasharray: 5 5
+    classDef devcls fill:#5B9BD5
+    class local,dev,dev2,dev3,dev4,dev5 devcls
 ```
 
-Each host can run a different backend, e.g. one with CUDA and another with Metal.
-You can also run multiple `rpc-server` instances on the same host, each with a different backend.
+By default, `rpc-server` exposes all available accelerator devices on the host.
+If there are no accelerators, it exposes a single `CPU` device.
 
 ## Usage
 
-On each host, build the corresponding backend with `cmake` and add `-DGGML_RPC=ON` to the build options.
-For example, to build the CUDA backend with RPC support:
+### Remote hosts
+
+On each remote host, build the backends for each accelerator by adding `-DGGML_RPC=ON` to the build options.
+For example, to build the `rpc-server` with support for CUDA accelerators:
 
 ```bash
 mkdir build-rpc-cuda
````
````diff
@@ -44,33 +50,38 @@ cmake .. -DGGML_CUDA=ON -DGGML_RPC=ON
 cmake --build . --config Release
 ```
 
-Then, start the `rpc-server` with the backend:
+When started, the `rpc-server` will detect and expose all available `CUDA` devices:
 
 ```bash
-$ bin/rpc-server -p 50052
-create_backend: using CUDA backend
-ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
-ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
+$ bin/rpc-server
+ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
+ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
 ggml_cuda_init: found 1 CUDA devices:
-  Device 0: NVIDIA T1200 Laptop GPU, compute capability 7.5, VMM: yes
-Starting RPC server on 0.0.0.0:50052
+  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
+Starting RPC server v3.0.0
+  endpoint    : 127.0.0.1:50052
+  local cache : n/a
+Devices:
+  CUDA0: NVIDIA GeForce RTX 5090 (32109 MiB, 31588 MiB free)
 ```
 
-When using the CUDA backend, you can specify the device with the `CUDA_VISIBLE_DEVICES` environment variable, e.g.:
+You can control the set of exposed CUDA devices with the `CUDA_VISIBLE_DEVICES` environment variable or the `--device` command line option. The following two commands have the same effect:
 ```bash
 $ CUDA_VISIBLE_DEVICES=0 bin/rpc-server -p 50052
+$ bin/rpc-server --device CUDA0 -p 50052
 ```
-This way you can run multiple `rpc-server` instances on the same host, each with a different CUDA device.
 
+### Main host
 
-On the main host build `llama.cpp` for the local backend and add `-DGGML_RPC=ON` to the build options.
-Finally, when running `llama-cli`, use the `--rpc` option to specify the host and port of each `rpc-server`:
+On the main host build `llama.cpp` with the backends for the local devices and add `-DGGML_RPC=ON` to the build options.
+Finally, when running `llama-cli` or `llama-server`, use the `--rpc` option to specify the host and port of each `rpc-server`:
 
 ```bash
-$ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 192.168.88.10:50052,192.168.88.11:50052 -ngl 99
+$ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF -ngl 99 --rpc 192.168.88.10:50052,192.168.88.11:50052
 ```
 
-This way you can offload model layers to both local and remote devices.
+By default, llama.cpp distributes model weights and the KV cache across all available devices -- both local and remote -- in proportion to each device's available memory.
+You can override this behavior with the `--tensor-split` option and set custom proportions when splitting tensor data across devices.
 
 ### Local cache
 
````
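Putting the `--device` and `--tensor-split` options from this hunk together, here is a hypothetical sketch of serving each GPU of a dual-GPU remote host from its own `rpc-server` instance and weighting the split from the main host; hostnames, ports, and the 2:1 proportions are illustrative, not taken from the diff:

```bash
# on the remote host: one rpc-server per CUDA device, each on its own port
$ bin/rpc-server --device CUDA0 -p 50052 &
$ bin/rpc-server --device CUDA1 -p 50053 &

# on the main host: weight the split across the participating devices (in device order)
$ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF -ngl 99 \
    --rpc 192.168.88.10:50052,192.168.88.10:50053 \
    --tensor-split 2,1
```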

````diff
@@ -83,3 +94,11 @@ $ bin/rpc-server -c
 ```
 
 By default, the cache is stored in the `$HOME/.cache/llama.cpp/rpc` directory and can be controlled via the `LLAMA_CACHE` environment variable.
+
+### Troubleshooting
+
+Use the `GGML_RPC_DEBUG` environment variable to enable debug messages from `rpc-server`:
+```bash
+$ GGML_RPC_DEBUG=1 bin/rpc-server
+```
+
````
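The cache and debug options documented above can be combined; a small sketch, where the cache directory is just an illustrative `LLAMA_CACHE` override:

```bash
# enable the local cache (-c) in a custom location and turn on RPC debug output
$ LLAMA_CACHE=/tmp/llama-rpc-cache GGML_RPC_DEBUG=1 bin/rpc-server -c
```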

tools/server/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -393,7 +393,7 @@ node index.js
 
 ### GET `/health`: Returns health check result
 
-This endpoint is public (no API key check).
+This endpoint is public (no API key check). `/v1/health` also works.
 
 **Response format**
 
```
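As a quick check of the new alias, both requests below should return the same health payload; the host and port assume a default local `llama-server`:

```bash
# neither request needs an Authorization header
$ curl http://localhost:8080/health
$ curl http://localhost:8080/v1/health
```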

tools/server/public/index.html.gz

Binary file not shown (1.14 KB).

tools/server/server.cpp

Lines changed: 2 additions & 0 deletions
```diff
@@ -4184,6 +4184,7 @@ int main(int argc, char ** argv) {
     auto middleware_validate_api_key = [&params, &res_error](const httplib::Request & req, httplib::Response & res) {
         static const std::unordered_set<std::string> public_endpoints = {
             "/health",
+            "/v1/health",
             "/models",
             "/v1/models",
             "/api/tags"
@@ -5232,6 +5233,7 @@ int main(int argc, char ** argv) {
 
     // register API routes
     svr->Get (params.api_prefix + "/health", handle_health); // public endpoint (no API key check)
+    svr->Get (params.api_prefix + "/v1/health", handle_health); // public endpoint (no API key check)
     svr->Get (params.api_prefix + "/metrics", handle_metrics);
     svr->Get (params.api_prefix + "/props", handle_props);
     svr->Post(params.api_prefix + "/props", handle_props_change);
```
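With both the route registration and the allow-list entry in place, `/v1/health` bypasses the API-key middleware even when a key is configured. A hedged sketch of that behavior; the model file, port, and key value are placeholders:

```bash
# start llama-server with an API key; non-public routes now require it
$ llama-server -m model.gguf --port 8080 --api-key secret &

# public: succeeds with no credentials
$ curl http://localhost:8080/v1/health

# protected: requires the key
$ curl http://localhost:8080/props -H "Authorization: Bearer secret"
```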

tools/server/webui/src/lib/components/app/chat/ChatSidebar/ChatSidebarActions.svelte

Lines changed: 31 additions & 1 deletion
```diff
@@ -1,8 +1,9 @@
 <script lang="ts">
-    import { Search, SquarePen, X } from '@lucide/svelte';
+    import { Search, SquarePen, X, Download, Upload } from '@lucide/svelte';
     import { KeyboardShortcutInfo } from '$lib/components/app';
     import { Button } from '$lib/components/ui/button';
     import { Input } from '$lib/components/ui/input';
+    import { exportAllConversations, importConversations } from '$lib/stores/chat.svelte';
 
     interface Props {
         handleMobileSidebarItemClick: () => void;
@@ -77,5 +78,34 @@
 
         <KeyboardShortcutInfo keys={['cmd', 'k']} />
     </Button>
+
+    <Button
+        class="w-full justify-start text-sm"
+        onclick={() => {
+            importConversations().catch((err) => {
+                console.error('Import failed:', err);
+                // Optional: show toast or dialog
+            });
+        }}
+        variant="ghost"
+    >
+        <div class="flex items-center gap-2">
+            <Upload class="h-4 w-4" />
+            Import conversations
+        </div>
+    </Button>
+
+    <Button
+        class="w-full justify-start text-sm"
+        onclick={() => {
+            exportAllConversations();
+        }}
+        variant="ghost"
+    >
+        <div class="flex items-center gap-2">
+            <Download class="h-4 w-4" />
+            Export all conversations
+        </div>
+    </Button>
 {/if}
 </div>
```

tools/server/webui/src/lib/components/app/chat/ChatSidebar/ChatSidebarConversationItem.svelte

Lines changed: 11 additions & 1 deletion
```diff
@@ -1,6 +1,7 @@
 <script lang="ts">
-    import { Trash2, Pencil, MoreHorizontal } from '@lucide/svelte';
+    import { Trash2, Pencil, MoreHorizontal, Download } from '@lucide/svelte';
     import { ActionDropdown } from '$lib/components/app';
+    import { downloadConversation } from '$lib/stores/chat.svelte';
     import { onMount } from 'svelte';
 
     interface Props {
@@ -101,6 +102,15 @@
         onclick: handleEdit,
         shortcut: ['shift', 'cmd', 'e']
     },
+    {
+        icon: Download,
+        label: 'Export',
+        onclick: (e) => {
+            e.stopPropagation();
+            downloadConversation(conversation.id);
+        },
+        shortcut: ['shift', 'cmd', 's']
+    },
     {
         icon: Trash2,
         label: 'Delete',
```
