Add docker protocol support for llama-server model loading (#15790) #901
release.yml (on: push)

Matrix jobs: ubuntu-22-cpu, windows-cpu, windows-cuda, windows-hip, windows

| Job | Duration |
|---|---|
| macOS-arm64 | 2m 46s |
| macOS-x64 | 2m 32s |
| ubuntu-22-vulkan | 4m 42s |
| windows-sycl | 8m 26s |
| ios-xcode-build | 16m 31s |
| release | 0s |
Annotations (1 error and 1 warning)

- **Error** (macOS-x64): Process completed with exit code 2.
- **Warning** (windows (vulkan, x64, -DGGML_VULKAN=ON, ggml-vulkan)): Cache not found for keys: `ccache-windows-latest-cmake-vulkan-x64-`
Artifacts

Produced during runtime

| Name | Size | Digest |
|---|---|---|
| cudart-llama-bin-win-cuda-12.4-x64.zip | 372 MB | sha256:18306526aeeba0b88428d74f96adece60c3fadaf1cbf29fb7f53df668a775050 |
| llama-b6457-xcframework | 84.3 MB | sha256:5bddcb0e63350cedf31481a336ecd6d53fb7fde80e4466b96c80f378b5984e67 |
| llama-bin-macos-arm64.zip | 11.2 MB | sha256:f9f0fd0ad20f228ea1c59b9602640d19fe56a50166b339a98799eff56bf77eb2 |
| llama-bin-ubuntu-vulkan-x64.zip | 25.4 MB | sha256:25a0767877e32fdaf1710de53822a43619aab0dcf3484db0ee5ed9320c3988ab |
| llama-bin-ubuntu-x64.zip | 13.1 MB | sha256:a5aef76faef307186a84b5d30ae1a00f6668318f0240073ac0f7aaa166a7a1cd |
| llama-bin-win-cpu-arm64.zip | 11.4 MB | sha256:55794070027d4848fecc5acb6408706ff5422e807e244b1b415d35a51580c8e6 |
| llama-bin-win-cpu-x64.zip | 14.3 MB | sha256:d7dbeff07a97e321d8bf510c921ca83dc89b0a02bfa38603036c30fb29759750 |
| llama-bin-win-cuda-12.4-x64.zip | 131 MB | sha256:869a2e70283df161693a17afa9776797d7d9f7e7c021161eba5209135d183e14 |
| llama-bin-win-hip-radeon-x64.zip | 264 MB | sha256:a30cabb1a555b75086e142cd753b199e2aa535afb1e2b02d18f47802bb29b7b0 |
| llama-bin-win-opencl-adreno-arm64.zip | 110 KB | sha256:e4825dbfa5c94cc9e91017b911abfd3dc13330dfef87ff7255fd5353d3d8c93c |
| llama-bin-win-sycl-x64.zip | 80 MB | sha256:2782334eef242405721d2652861e894a2fb4138e544f49570b12e65566bc959c |
| llama-bin-win-vulkan-x64.zip | 11.4 MB | sha256:27f377393ba07189c439d65330a6177e0ed6292912476b58efb8c01d77629936 |
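After downloading one of the artifacts above, its integrity can be checked against the published digest with `sha256sum -c`. A minimal sketch, using a stand-in file named `artifact.bin` (for a real check, substitute the zip name and the `sha256:` value from the table):

```shell
# Stand-in for a downloaded release artifact.
printf 'demo' > artifact.bin

# In practice, `expected` would be the digest copied from the table
# (without the "sha256:" prefix). Here we compute it from the stand-in
# file so the sketch is self-contained.
expected=$(sha256sum artifact.bin | cut -d' ' -f1)

# sha256sum -c reads "<hash>  <filename>" lines and reports OK/FAILED.
echo "$expected  artifact.bin" | sha256sum -c -
```

The same check works for any row in the table, e.g. pasting the digest for `llama-bin-macos-arm64.zip` in place of `$expected` (on macOS, `shasum -a 256 -c` is the equivalent command).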