
Commit 900e4f2

Update links, remove ccache and curl

1 parent 1b18583 commit 900e4f2

File tree

3 files changed: +10 −10 lines

.github/workflows/build-oneapi.yml

Lines changed: 3 additions & 3 deletions
@@ -30,7 +30,7 @@ jobs:
       - name: Checkout llama.cpp
         uses: actions/checkout@v4
         with:
-          repository: ggerganov/llama.cpp
+          repository: ggml-org/llama.cpp
           path: repo

       - name: Get number of CPU cores
@@ -44,12 +44,12 @@ jobs:
           wget -q -O - https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | sudo gpg --dearmor --output /etc/apt/trusted.gpg.d/oneapi.gpg
           echo 'deb [trusted=yes] https://apt.repos.intel.com/oneapi all main' | sudo tee /etc/apt/sources.list.d/oneapi.list >/dev/null
           sudo apt-get update
-          sudo apt-get install -y intel-oneapi-compiler-dpcpp-cpp cmake git ccache
+          sudo apt-get install -y intel-oneapi-compiler-dpcpp-cpp cmake git

       - name: CMake
         run: |
           . /opt/intel/oneapi/setvars.sh
-          cmake repo -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_FLAGS="-Ofast -xCORE-AVX2" -DCMAKE_CXX_FLAGS="-Ofast -xCORE-AVX2" -DCMAKE_EXE_LINKER_FLAGS="-static" -DGGML_NATIVE=OFF -DGGML_STATIC=ON -DGGML_RPC=ON -DGGML_OPENMP=OFF -DLLAMA_BUILD_SERVER=ON -DBUILD_SHARED_LIBS=OFF
+          cmake repo -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_FLAGS="-Ofast -xCORE-AVX2" -DCMAKE_CXX_FLAGS="-Ofast -xCORE-AVX2" -DCMAKE_EXE_LINKER_FLAGS="-static" -DGGML_NATIVE=OFF -DGGML_STATIC=ON -DGGML_RPC=ON -DGGML_CCACHE=OFF -DGGML_OPENMP=OFF -DLLAMA_BUILD_SERVER=ON -DLLAMA_CURL=OFF -DBUILD_SHARED_LIBS=OFF
           cmake --build build --config Release --target llama-server --target rpc-server -j ${{ steps.cpu-cores.outputs.count }}
           strip build/bin/*
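Net effect: ccache is no longer installed, and the configure line makes that explicit with -DGGML_CCACHE=OFF while also dropping the libcurl dependency via -DLLAMA_CURL=OFF. For reference, a sketch of reproducing this build outside CI (assumptions: an Ubuntu host with the Intel oneAPI apt repository configured as in the workflow above, and the llama.cpp sources checked out under ./repo):

# Sketch only: mirrors the workflow steps above on a local machine.
# Assumes the oneAPI apt repo is already set up and llama.cpp is in ./repo.
sudo apt-get install -y intel-oneapi-compiler-dpcpp-cpp cmake git
. /opt/intel/oneapi/setvars.sh
cmake repo -B build -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
    -DCMAKE_C_FLAGS="-Ofast -xCORE-AVX2" -DCMAKE_CXX_FLAGS="-Ofast -xCORE-AVX2" \
    -DCMAKE_EXE_LINKER_FLAGS="-static" \
    -DGGML_NATIVE=OFF -DGGML_STATIC=ON -DGGML_RPC=ON -DGGML_CCACHE=OFF \
    -DGGML_OPENMP=OFF -DLLAMA_BUILD_SERVER=ON -DLLAMA_CURL=OFF \
    -DBUILD_SHARED_LIBS=OFF
# "$(nproc)" stands in for the CI step that counts CPU cores.
cmake --build build --config Release --target llama-server --target rpc-server -j "$(nproc)"
strip build/bin/*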

.github/workflows/build.yml

Lines changed: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ jobs:
       - name: Checkout llama.cpp
         uses: actions/checkout@v4
         with:
-          repository: ggerganov/llama.cpp
+          repository: ggml-org/llama.cpp
           path: repo

       - name: Get number of CPU cores
@@ -54,11 +54,11 @@ jobs:
       - name: Setup Alpine Linux
         uses: jirutka/setup-alpine@v1
         with:
-          packages: build-base cmake ccache git openssl-dev openssl-libs-static
+          packages: build-base cmake git openssl-dev openssl-libs-static

       - name: CMake
         run: |
-          cmake repo -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXE_LINKER_FLAGS="-static" -DGGML_NATIVE=OFF -DGGML_STATIC=ON -DGGML_RPC=ON -DGGML_OPENMP=OFF -DLLAMA_BUILD_SERVER=ON -DBUILD_SHARED_LIBS=OFF ${{ matrix.ssl_defines }} ${{ matrix.avx_defines }}
+          cmake repo -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXE_LINKER_FLAGS="-static" -DGGML_NATIVE=OFF -DGGML_STATIC=ON -DGGML_RPC=ON -DGGML_CCACHE=OFF -DGGML_OPENMP=OFF -DLLAMA_BUILD_SERVER=ON -DLLAMA_CURL=OFF -DBUILD_SHARED_LIBS=OFF ${{ matrix.ssl_defines }} ${{ matrix.avx_defines }}
           cmake --build build --config Release --target llama-server --target rpc-server -j ${{ steps.cpu-cores.outputs.count }}
           strip build/bin/*
         shell: alpine.sh {0}
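The Alpine build gets the same treatment: the ccache package is dropped and ccache/libcurl are disabled at configure time, with the SSL and AVX variants still injected through the matrix variables. Since the point of these images is fully static binaries, a quick post-build sanity check (a sketch; assumes the binaries land in build/bin/ as in the workflow):

# Sketch only: confirm the linker really produced static executables.
file build/bin/llama-server   # should report "statically linked"
ldd build/bin/llama-server    # should fail or print "not a dynamic executable" (wording differs between musl and glibc)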

README.md

Lines changed: 4 additions & 4 deletions
@@ -1,12 +1,12 @@
 # llama.cpp-static
 Your daily, minimal build of llama.cpp. [Also available on Docker Hub.](https://hub.docker.com/r/ezforever/llama.cpp-static)

-Source code: <https://github.com/ggerganov/llama.cpp>
+Source code: <https://github.com/ggml-org/llama.cpp>
 Built from: <https://github.com/EZForever/llama.cpp-static>

 ## Usage

-Please refer to [llama.cpp docker guide](https://github.com/ggerganov/llama.cpp/blob/master/README.md#docker) and [server README](https://github.com/ggerganov/llama.cpp/tree/master/examples/server).
+Please refer to [llama.cpp docker guide](https://github.com/ggml-org/llama.cpp/blob/master/docs/docker.md) and [server README](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md).

 ## Tag format

@@ -25,7 +25,7 @@ Server images are tagged in the format of `server-<ssl>-<avx>`.
 - `avx`: (Only) AVX instruction set is enabled. Might be useful if you are using some old CPUs that don't support AVX2.
 - `avx2`: AVX2 instruction set is enabled. This build should support most modern/recent CPUs with reasonable performance.
 - `avx512`: AVX512 base instruction set is enabled. Currently only some high-end or server-grade CPUs support this instruction set, so check your hardware specs before using this build.
-- `oneapi`: Experimental build with the Intel oneAPI compiler, inspired by ggerganov/llama.cpp#5067. Offers a ~30% speed increase (~20tok/s vs ~15tok/s) in prompt processing on my machine compared to `avx2` builds. Not updated daily.
+- `oneapi`: Experimental build with the Intel oneAPI compiler, inspired by ggml-org/llama.cpp#5067. Offers a ~30% speed increase (~20tok/s vs ~15tok/s) in prompt processing on my machine compared to `avx2` builds. Not updated daily.

-RPC server images are tagged in the format of `rpc-server-<ssl>-<avx>`. Refer to [rpc README](https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc) for detailed information.
+RPC server images are tagged in the format of `rpc-server-<ssl>-<avx>`. Refer to [rpc README](https://github.com/ggml-org/llama.cpp/blob/master/tools/rpc/README.md) for detailed information.
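For context on the relinked README: `server-<ssl>-<avx>` is a tag pattern, not a literal tag, so concrete `<ssl>` and `<avx>` values have to be substituted when pulling. A usage sketch (assumptions: the image entrypoint is llama-server, the concrete tag names are those listed on Docker Hub, and the model path is a placeholder):

# Sketch only: substitute a real tag from Docker Hub for server-<ssl>-<avx>,
# and a real GGUF model path for the placeholder below.
docker run -v /path/to/models:/models -p 8080:8080 \
    ezforever/llama.cpp-static:server-<ssl>-<avx> \
    -m /models/model.gguf --host 0.0.0.0 --port 8080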
