Commit 16801fb

ci : add instructions for adding self-hosted runners
1 parent dd3d5c6 commit 16801fb

2 files changed: +45 -35 lines changed


ci/README-MUSA.md

Lines changed: 35 additions & 0 deletions
````diff
@@ -0,0 +1,35 @@
+## Running MUSA CI in a Docker Container
+
+Assuming `$PWD` is the root of the `llama.cpp` repository, follow these steps to set up and run MUSA CI in a Docker container:
+
+### 1. Create a local directory to store cached models, configuration files and venv:
+
+```bash
+mkdir -p $HOME/llama.cpp/ci-cache
+```
+
+### 2. Create a local directory to store CI run results:
+
+```bash
+mkdir -p $HOME/llama.cpp/ci-results
+```
+
+### 3. Start a Docker container and run the CI:
+
+```bash
+docker run --privileged -it \
+    -v $HOME/llama.cpp/ci-cache:/ci-cache \
+    -v $HOME/llama.cpp/ci-results:/ci-results \
+    -v $PWD:/ws -w /ws \
+    mthreads/musa:rc4.2.0-devel-ubuntu22.04-amd64
+```
+
+Inside the container, execute the following commands:
+
+```bash
+apt update -y && apt install -y bc cmake ccache git python3.10-venv time unzip wget
+git config --global --add safe.directory /ws
+GG_BUILD_MUSA=1 bash ./ci/run.sh /ci-results /ci-cache
+```
+
+This setup ensures that the CI runs within an isolated Docker environment while maintaining cached files and results across runs.
````
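
The three steps above could be collapsed into a single convenience wrapper. The following is a hypothetical sketch, not part of the commit: it assumes the same host paths and container image as the steps above, and the script name is illustrative.

```bash
#!/usr/bin/env bash
# run-musa-ci.sh - hypothetical wrapper around steps 1-3 above.
# Invoke from the root of the llama.cpp repository.
set -euo pipefail

CACHE_DIR=$HOME/llama.cpp/ci-cache      # cached models, config files and venv
RESULTS_DIR=$HOME/llama.cpp/ci-results  # CI run results

mkdir -p "$CACHE_DIR" "$RESULTS_DIR"

# Same invocation as step 3, but with the in-container setup and the CI
# entry point passed as one command instead of being typed interactively.
docker run --privileged -it \
    -v "$CACHE_DIR":/ci-cache \
    -v "$RESULTS_DIR":/ci-results \
    -v "$PWD":/ws -w /ws \
    mthreads/musa:rc4.2.0-devel-ubuntu22.04-amd64 \
    bash -c 'apt update -y && \
             apt install -y bc cmake ccache git python3.10-venv time unzip wget && \
             git config --global --add safe.directory /ws && \
             GG_BUILD_MUSA=1 bash ./ci/run.sh /ci-results /ci-cache'
```

Because both directories live on the host, models cached in `/ci-cache` and results written to `/ci-results` persist across container runs.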

ci/README.md

Lines changed: 10 additions & 35 deletions
````diff
@@ -1,8 +1,10 @@
 # CI
 
-This CI implements heavy workflows that are running on self-hosted runners with various hardware configurations.
+This CI implements heavy-duty workflows that run on self-hosted runners. Typically the purpose of these workflows is to
+cover hardware configurations that are not available from GitHub-hosted runners and/or require more computational
+resources than are normally available.
 
-It is a good practice, before publishing changes to execute the full CI locally on your machine:
+It is good practice, before publishing changes, to execute the full CI locally on your machine. For example:
 
 ```bash
 mkdir tmp
@@ -19,40 +21,13 @@ GG_BUILD_SYCL=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
 
 # with MUSA support
 GG_BUILD_MUSA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
-```
-
-## Running MUSA CI in a Docker Container
-
-Assuming `$PWD` is the root of the `llama.cpp` repository, follow these steps to set up and run MUSA CI in a Docker container:
-
-### 1. Create a local directory to store cached models, configuration files and venv:
-
-```bash
-mkdir -p $HOME/llama.cpp/ci-cache
-```
-
-### 2. Create a local directory to store CI run results:
 
-```bash
-mkdir -p $HOME/llama.cpp/ci-results
-```
-
-### 3. Start a Docker container and run the CI:
-
-```bash
-docker run --privileged -it \
-    -v $HOME/llama.cpp/ci-cache:/ci-cache \
-    -v $HOME/llama.cpp/ci-results:/ci-results \
-    -v $PWD:/ws -w /ws \
-    mthreads/musa:rc4.2.0-devel-ubuntu22.04-amd64
+# etc.
 ```
 
-Inside the container, execute the following commands:
-
-```bash
-apt update -y && apt install -y bc cmake ccache git python3.10-venv time unzip wget
-git config --global --add safe.directory /ws
-GG_BUILD_MUSA=1 bash ./ci/run.sh /ci-results /ci-cache
-```
+# Adding self-hosted runners
 
-This setup ensures that the CI runs within an isolated Docker environment while maintaining cached files and results across runs.
+- Add a self-hosted `ggml-ci` workflow to `.github/workflows/build.yml` with an appropriate label
+- Request a runner token from `ggml-org` (for example, via a comment in the PR or email)
+- Set up a machine using the received token ([docs](https://docs.github.com/en/actions/how-tos/manage-runners/self-hosted-runners/add-runners))
+- Optionally update [ci/run.sh](https://github.com/ggml-org/llama.cpp/blob/master/ci/run.sh) to build and run on the target platform by gating the implementation with a `GG_BUILD_...` environment variable
````
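
To make the last two bullets concrete, two hedged sketches follow. First, registering the machine uses GitHub's standard self-hosted runner flow from the docs linked above; `<TOKEN>` stays as received from `ggml-org`, and the label is a placeholder that should match the one used in `build.yml`:

```bash
# Run inside the unpacked actions-runner directory on the new machine
# (download and version per the GitHub docs linked above).
./config.sh --url https://github.com/ggml-org/llama.cpp \
            --token <TOKEN> \
            --labels my-new-hardware
./run.sh   # start listening for jobs
```

Second, a hypothetical sketch of the optional `ci/run.sh` gate, modeled on the script's existing `GG_BUILD_*` checks and assuming its `CMAKE_EXTRA` accumulation pattern; `GG_BUILD_MYBACKEND` and `-DGGML_MYBACKEND=ON` are placeholder names, not real flags:

```bash
# Placeholder gate in ci/run.sh: enable the new backend's CMake option
# only when the runner exports the matching GG_BUILD_* variable.
if [ ! -z ${GG_BUILD_MYBACKEND} ]; then
    CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_MYBACKEND=ON"
fi
```

The self-hosted workflow would then invoke the CI the same way as the local examples above, e.g. `GG_BUILD_MYBACKEND=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt`.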
