Running a Dria Compute Node is pretty straightforward! It comes with a cross-platform launcher, and is itself a cross-platform executable. By using platform-specific builds instead of Docker we ensure:
- Best performance from LLMs
- Best networking for the p2p network

You can either follow the guide here for all platforms, or follow a much more user-friendly guide at <https://dria.co/guide> for macOS in particular.
## Requirements
### Software
Depending on the AI models of your choice, you may have to install software:

- **OpenAI models**: you don't have to do anything!
- **Ollama models**: you have to install Ollama. You can check whether you have it by printing its version:
```sh
# prints Ollama version
ollama -v
```
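If you don't have Ollama yet, see their [download page](https://ollama.com/download) for your platform. As a rough sketch (the download page is the authoritative source), the usual installs are:

```sh
# macOS, via Homebrew
brew install ollama

# Linux, via the official install script
curl -fsSL https://ollama.com/install.sh | sh
```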
### Hardware
**To learn about hardware specifications such as required CPU and RAM, please refer to [node specifications](./NODE_SPECS.md).**

Download the appropriate ZIP file for your system using the commands below:

1. Check your architecture:

   - Open System Information:
     - Press <kbd>⊞ Win + R</kbd> to open the Run dialog.
     - Type `msinfo32` and press <kbd>Enter</kbd>.
   - Look for the line labeled "Processor" or "CPU":
     - If it includes "x64" or refers to Intel or AMD, it is likely x86 (amd64).
     - If it mentions ARM, then it's an ARM processor.

2. Download the ZIP file using a web browser or in PowerShell:

With our launcher, setting up the environment variables happens on the fly! The CLI application will ask you to enter the required environment variables if you don't have them.

This way, you won't have to copy and create the environment variables manually; the CLI does it for you. You can move directly on to the [Usage](#usage) section.
> If you would like to do this part manually, you can continue reading this section.
#### Create `.env` File
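The full walkthrough for this file is not included in this excerpt. As a minimal sketch, a filled-in `.env` could look like the following; only `OPENAI_API_KEY`, `OLLAMA_AUTO_PULL`, and `JINA_API_KEY` are mentioned elsewhere in this guide, and `DKN_WALLET_SECRET_KEY` is an assumed variable name for the node's wallet key, so check the repository for the exact names.

```sh
# a sketch of a filled-in .env; DKN_WALLET_SECRET_KEY is an assumed name,
# the other variables appear elsewhere in this guide
DKN_WALLET_SECRET_KEY=<your-wallet-secret-key>
OPENAI_API_KEY=<YOUR_KEY>
OLLAMA_AUTO_PULL=true
JINA_API_KEY=<key-here>
```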
#### For Ollama
First you have to install [Ollama](#requirements), if you haven't already! The compute node is set to download any missing model automatically at the start by default. This is enabled via `OLLAMA_AUTO_PULL=true` in `.env`.

If you would like to disable this feature, set `OLLAMA_AUTO_PULL=false` and then continue reading this section; otherwise, you can skip to [optional services](#optional-services).

First, you must **pull a small embedding model that is used internally**.
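
The exact model name is given in the part of the guide omitted here; the pull itself is a single command (placeholder shown):

```sh
# pull the internally-used embedding model; replace the placeholder
# with the exact model name from the guide
ollama pull <embedding-model-name>
```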
## Usage
**With all setup steps above completed, we are ready to start a node!** Either double-click the downloaded `dkn-compute-launcher` app (`dkn-compute-launcher.exe` on Windows) from your file explorer, or run it from a terminal (`cmd` or PowerShell on Windows).

See the available commands with:
```sh
# macos or linux
./dkn-compute-launcher --help
```

```sh
# windows
.\dkn-compute-launcher.exe --help
```
Then simply run the CLI app, and it will ask you to enter the required inputs:
```sh
# macos or linux
./dkn-compute-launcher
```

```sh
# windows
.\dkn-compute-launcher.exe
```
You will see logs of the compute node on the same terminal!

You can stop the node as usual by pressing <kbd>Control + C</kbd>, or kill it from the terminal.
### Choosing Models
You will be asked to provide your choice of models within the CLI. You can also pass them from the command line using `-m` flags:
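
The original example command is not shown in this excerpt; as a sketch, using model names from the lists below:

```sh
# macos or linux — run with one Ollama model and one OpenAI model
./dkn-compute-launcher -m=llama3.1:latest -m=gpt-4o-mini

# windows
.\dkn-compute-launcher.exe -m=llama3.1:latest -m=gpt-4o-mini
```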
[Available models](https://github.com/andthattoo/ollama-workflows/blob/main/src/program/models.rs) are given below:
#### Ollama Models
- `finalend/hermes-3-llama-3.1:8b-q8_0`
- `phi3:14b-medium-4k-instruct-q4_1`
- `phi3:14b-medium-128k-instruct-q4_1`
- `phi3.5:3.8b`
- `phi3.5:3.8b-mini-instruct-fp16`
- `llama3.1:latest`
- `llama3.1:8b-instruct-q8_0`
- `gemma2:9b-instruct-q8_0`
#### OpenAI Models
- `gpt-3.5-turbo`
- `gpt-4-turbo`
- `gpt-4o`
- `gpt-4o-mini`
### Additional Static Nodes
You can add additional relay nodes & bootstrap nodes from the environment, using the `DKN_RELAY_NODES` and `DKN_BOOTSTRAP_NODES` variables respectively. Simply write the `Multiaddr` strings of the static nodes as comma-separated values, and the compute node will pick them up at the start.
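
For example, in `.env` (the multiaddresses below are placeholders, not real nodes):

```sh
# comma-separated Multiaddr values; these addresses and peer IDs are placeholders
DKN_RELAY_NODES=/ip4/203.0.113.10/tcp/4001/p2p/<peer-id-1>,/ip4/203.0.113.11/tcp/4001/p2p/<peer-id-2>
DKN_BOOTSTRAP_NODES=/ip4/203.0.113.12/tcp/4001/p2p/<peer-id-3>
```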