
Commit b5c9141

Merge pull request #111 from nextcloud/docs/readme
docs: update README.md
2 parents 10123bb + 0c4907c commit b5c9141

File tree: 1 file changed (+61 −29 lines)

README.md: 61 additions & 29 deletions
<!--
 - SPDX-FileCopyrightText: 2024 Nextcloud GmbH and Nextcloud contributors
 - SPDX-License-Identifier: AGPL-3.0-or-later
-->

# Nextcloud Local Large Language Model

[![REUSE status](https://api.reuse.software/badge/github.com/nextcloud/llm2)](https://api.reuse.software/info/github.com/nextcloud/llm2)

![](https://raw.githubusercontent.com/nextcloud/llm2/main/img/Logo.png)

An on-premises text processing backend for the [Nextcloud Assistant](https://github.com/nextcloud/assistant) or any app that uses the [text processing functionality](https://docs.nextcloud.com/server/latest/admin_manual/ai/overview.html#tp-consumer-apps).

This app uses [llama.cpp](https://github.com/abetlen/llama-cpp-python) under the hood and is thus compatible with any open-source model in GGUF format.
## Installation

See [the Nextcloud admin documentation](https://docs.nextcloud.com/server/latest/admin_manual/ai/app_llm2.html) for installation instructions and system requirements.

## Development installation

0. (Optional) [Install Nvidia drivers and CUDA on your host system](https://gist.github.com/denguir/b21aa66ae7fb1089655dd9de8351a202).

1. Create and activate a Python virtual environment:

   ```sh
   python3 -m venv ./venv && . ./venv/bin/activate
   ```
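   To confirm the environment is active, you can check that `sys.prefix` now points inside the venv. A minimal, self-contained sketch (it recreates the venv so the snippet can run on its own; `--without-pip` just speeds creation up):

   ```sh
   # Create (or reuse) the venv and activate it.
   python3 -m venv --without-pip ./venv && . ./venv/bin/activate
   # Inside a venv, sys.prefix differs from the base interpreter's prefix.
   python3 -c 'import sys; print("venv active:", sys.prefix != sys.base_prefix)'
   ```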

2. Install dependencies:

   ```sh
   poetry install
   ```
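   If `poetry` is not on your `PATH` yet, a quick check before running the step above (the suggested install command is one common option, not the only one):

   ```sh
   # Report the Poetry version, or hint at how to get it.
   if command -v poetry >/dev/null 2>&1; then
     poetry --version
   else
     echo "poetry not found - install it first, e.g.: python3 -m pip install --user poetry"
   fi
   ```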

3. (Optional) Enable hardware acceleration if your system supports it (check the [`llama.cpp` documentation](https://llama-cpp-python.readthedocs.io/en/latest/) for your accelerator).
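   For example, CUDA builds of the `llama-cpp-python` bindings are typically enabled through `CMAKE_ARGS` at install time. A hedged sketch (the exact flag depends on your `llama.cpp` version; older releases used `-DLLAMA_CUBLAS=on` instead):

   ```sh
   # Assumption: a CUDA-capable GPU and toolkit are present; this only
   # prepares and prints the reinstall command rather than running it.
   CMAKE_ARGS="-DGGML_CUDA=on"
   export CMAKE_ARGS
   echo "Rebuild with: CMAKE_ARGS=$CMAKE_ARGS pip install --force-reinstall --no-cache-dir llama-cpp-python"
   ```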

4. (Optional) Download any additional desired models into the `models` directory. Examples:

   ```sh
   wget -nc -P models https://download.nextcloud.com/server/apps/llm/llama-2-7b-chat-ggml/llama-2-7b-chat.Q4_K_M.gguf
   wget -nc -P models https://download.nextcloud.com/server/apps/llm/leo-hessianai-13B-chat-bilingual-GGUF/leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf
   wget -nc -P models https://huggingface.co/Nextcloud-AI/llm_neuralbeagle_14_7b_gguf/resolve/main/neuralbeagle14-7b.Q4_K_M.gguf
   ```
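   A downloaded file can be sanity-checked cheaply: GGUF files start with the four ASCII magic bytes `GGUF`. A self-contained sketch (the dummy file below only exists so the snippet runs on its own; point it at your real downloads instead):

   ```sh
   # Stand-in for a real download; replace with your actual model files.
   mkdir -p models
   printf 'GGUF' > models/example.gguf
   # Check the magic bytes of every .gguf file in the models directory.
   for f in models/*.gguf; do
     if [ "$(head -c 4 "$f")" = "GGUF" ]; then
       echo "$f: magic bytes OK"
     else
       echo "$f: not a GGUF file?"
     fi
   done
   ```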

5. Run the app:

   ```sh
   PYTHONUNBUFFERED=1 APP_HOST=0.0.0.0 APP_ID=llm2 APP_PORT=9081 APP_SECRET=12345 APP_VERSION=<APP_VERSION> NEXTCLOUD_URL=http://nextcloud.local python3 lib/main.py
   ```
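   If startup fails with a bind error, the port may already be taken. A quick check for the port used above (9081):

   ```sh
   # connect_ex returns 0 only if something is already listening on the port.
   python3 -c 'import socket; s=socket.socket(); err=s.connect_ex(("127.0.0.1",9081)); s.close(); print("port 9081 is", "in use" if err==0 else "free")'
   ```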

6. Register the app with the `manual_install` AppAPI deploy daemon (see the AppAPI admin settings in Nextcloud).

   With the [Nextcloud Docker dev environment](https://github.com/juliusknorr/nextcloud-docker-dev), you can just run:

   ```sh
   make register
   ```

   Example if Nextcloud is installed on bare metal instead:

   ```sh
   sudo -u www-data php /var/www/nextcloud/occ app_api:app:unregister llm2 --force
   sudo -u www-data php /var/www/nextcloud/occ app_api:app:register llm2 manual_install --json-info "{\"id\":\"llm2\",\"name\":\"Local large language model\",\"daemon_config_name\":\"manual_install\",\"version\":\"<APP_VERSION>\",\"secret\":\"12345\",\"port\":9081}" --wait-finish
   ```
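The escaped `--json-info` string above is easy to get wrong; it can be built more readably first and passed as a variable. A sketch (`<APP_VERSION>` stays a placeholder, as above):

```sh
# Build the registration payload once, then reuse it.
JSON_INFO='{"id":"llm2","name":"Local large language model","daemon_config_name":"manual_install","version":"<APP_VERSION>","secret":"12345","port":9081}'
echo "$JSON_INFO"
# Then: sudo -u www-data php /var/www/nextcloud/occ app_api:app:register llm2 manual_install --json-info "$JSON_INFO" --wait-finish
```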

## Development installation using Docker

> [!NOTE]
> Currently, running the Docker image requires that your host system has CUDA/NVIDIA drivers installed and is equipped with a GPU capable of performing the required tasks.

0. [Install Nvidia drivers and CUDA on your host system](https://gist.github.com/denguir/b21aa66ae7fb1089655dd9de8351a202) and [install the NVIDIA Docker toolkit](https://stackoverflow.com/questions/25185405/using-gpu-from-a-docker-container).

1. Build the Docker image:

   ```sh
   docker build --no-cache -f Dockerfile -t llm2:latest .
   ```

2. Run the Docker image:

   ```sh
   sudo docker run -ti -v /var/run/docker.sock:/var/run/docker.sock -e APP_ID=llm2 -e APP_HOST=0.0.0.0 -e APP_PORT=9081 -e APP_SECRET=12345 -e APP_VERSION=<APP_VERSION> -e NEXTCLOUD_URL='<YOUR_NEXTCLOUD_URL_REACHABLE_FROM_INSIDE_DOCKER>' -e CUDA_VISIBLE_DEVICES=0 -p 9081:9081 --gpus all llm2:latest
   ```
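   The long `-e` flag list can alternatively be kept in an env file and passed with Docker's `--env-file` option. A sketch (the `llm2.env` filename is arbitrary; `<APP_VERSION>` and the Nextcloud URL stay placeholders):

   ```sh
   # Collect the app's environment variables in one file (one VAR=value per line).
   printf '%s\n' \
     'APP_ID=llm2' \
     'APP_HOST=0.0.0.0' \
     'APP_PORT=9081' \
     'APP_SECRET=12345' \
     'APP_VERSION=<APP_VERSION>' \
     'NEXTCLOUD_URL=<YOUR_NEXTCLOUD_URL_REACHABLE_FROM_INSIDE_DOCKER>' \
     'CUDA_VISIBLE_DEVICES=0' > llm2.env
   # Then: sudo docker run -ti -v /var/run/docker.sock:/var/run/docker.sock --env-file llm2.env -p 9081:9081 --gpus all llm2:latest
   ```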

3. Register the app.

   With the [Nextcloud Docker dev environment](https://github.com/juliusknorr/nextcloud-docker-dev), you can just run:

   ```sh
   make register
   ```

   Example if Nextcloud is installed on bare metal instead:

   ```sh
   sudo -u www-data php /var/www/nextcloud/occ app_api:app:unregister llm2 --force
   sudo -u www-data php /var/www/nextcloud/occ app_api:app:register llm2 manual_install --json-info "{\"id\":\"llm2\",\"name\":\"Local large language model\",\"daemon_config_name\":\"manual_install\",\"version\":\"<APP_VERSION>\",\"secret\":\"12345\",\"port\":9081}" --wait-finish
   ```
