Commit e63e00a

docfixes and added welcome [skip ci]
1 parent fcc881d commit e63e00a

File tree: 2 files changed (+66, −115 lines)

docs/NODE_GUIDE.md

Lines changed: 56 additions & 114 deletions
````diff
@@ -1,21 +1,26 @@
 ## Running the Compute Node
 
-Running a Dria Compute Node is pretty straightforward! You can either follow the guide here for all platforms, or follow a much-more user-friendly guide at <https://dria.co/guide> for MacOS in particular.
+Running a Dria Compute Node is pretty straightforward! It comes with a cross-platform launcher, and is itself a cross-platform executable. By using platform-specific builds instead of Docker we ensure:
+
+- Best performance from LLMs
+- Best networking for the p2p network
+
+You can either follow the guide here for all platforms, or follow a much more user-friendly guide at <https://dria.co/guide> for macOS in particular.
 
 ## Requirements
 
 ### Software
 
-You only **Docker** to run the node! You can check if you have it by printing its version:
+Depending on the AI models of your choice, you may have to install additional software:
+
+- **OpenAI models**: you don't have to do anything!
+- **Ollama models**: you have to install Ollama
 
 ```sh
-docker -v
+# prints Ollama version
+ollama -v
 ```
 
-> [!CAUTION]
->
-> In **Windows** machines, Docker Desktop is requried to be running with **WSL2**. You can check the Docker Desktop Windows installation guide from [here](https://docs.docker.com/desktop/install/windows-install/)
-
 ### Hardware
 
 **To learn about hardware specifications such as required CPU and RAM, please refer to [node specifications](./NODE_SPECS.md).**
````
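The new guide text leaves the Ollama install method to the reader; as a minimal sketch (Homebrew on macOS, the official install script on Linux — see <https://ollama.com/download> for other platforms):

```sh
# macOS, assuming Homebrew is installed
brew install ollama

# Linux, using Ollama's official install script
curl -fsSL https://ollama.com/install.sh | sh

# verify the installation by printing the version
ollama -v
```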
````diff
@@ -107,35 +112,37 @@ Download the appropriate ZIP file for your system using the commands below or fr
 1. Check your architecture:
 
    - Open System Information:
-     - Press `Win + R` to open the Run dialog.
-     - Type `msinfo32` and press Enter.
+     - Press <kbd>⊞ Win + R</kbd> to open the Run dialog.
+     - Type `msinfo32` and press <kbd>Enter</kbd>.
    - Look for the line labeled "Processor" or "CPU":
      - If it includes "x64" or refers to Intel or AMD, it is likely x86 (amd64).
      - If it mentions ARM, then it's an ARM processor.
 
 2. Download the ZIP file using a web browser or in PowerShell:
 
-   ```cmd
+   ```sh
    # for x64, use amd64
    Invoke-WebRequest -Uri "https://github.com/firstbatchxyz/dkn-compute-launcher/releases/latest/download/dkn-compute-launcher-windows-amd64.zip" -OutFile "dkn-compute-node.zip"
    ```
 
-   ```cmd
+   ```sh
    # for ARM, use arm64
    Invoke-WebRequest -Uri "https://github.com/firstbatchxyz/dkn-compute-launcher/releases/latest/download/dkn-compute-launcher-windows-arm64.zip" -OutFile "dkn-compute-node.zip"
    ```
 
 3. Unzip the downloaded file using File Explorer or in PowerShell:
-   ```cmd
+   ```sh
    Expand-Archive -Path "dkn-compute-node.zip" -DestinationPath "dkn-compute-node"
    cd dkn-compute-node
    ```
 
 ### 2. Prepare Environment Variables
 
-With our launcher, setting up the environment variables happen on the fly by just running the `dkn-compute-launcher` CLI application directly, it'll ask you to enter the required environment variables if you don't have them! This way, you won't have to manually do the copying and creating environment variables yourself, and instead let the CLI do it for you.
+With our launcher, setting up the environment variables happens on the fly! The CLI application will ask you to enter the required environment variables if you don't have them.
+
+This way, you won't have to copy and create environment variables manually, and can instead let the CLI do it for you. You can move directly on to the [Usage](#usage) section.
 
-If you prefer this method, you can move directly on to the [Usage](#usage) section. If you would like to do this part manually, you can continue reading this section.
+> If you would like to do this part manually, you can continue reading this section.
 
 #### Create `.env` File
 
````
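For the manual route, a minimal `.env` could look like the sketch below, assembled only from variables that appear in this guide — the values are placeholders, and the actual file likely contains more settings:

```sh
# .env — placeholder values only

# API keys, only needed for the providers you actually use
OPENAI_API_KEY=<YOUR_KEY>
JINA_API_KEY=<key-here>

# let the node pull missing Ollama models at startup
OLLAMA_AUTO_PULL=true

# optional extra static nodes, comma-separated Multiaddr strings
DKN_RELAY_NODES=
DKN_BOOTSTRAP_NODES=
```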
````diff
@@ -191,11 +198,9 @@ OPENAI_API_KEY=<YOUR_KEY>
 
 #### For Ollama
 
-First you have to install Ollama, if you haven't already! See their [download page](https://ollama.com/download) and follow their instructions there. The models that we want to use have to be pulled to Ollama before we can use them.
+First you have to install [Ollama](#requirements), if you haven't already! The compute node is set to download any missing model automatically at the start by default. This is enabled via `OLLAMA_AUTO_PULL=true` in `.env`.
 
-> [!TIP]
->
-> The compute node is set to download any missing model automatically at the start by default. This is enabled via the `OLLAMA_AUTO_PULL=true` in `.env`. If you would like to disable this feature, set `OLLAMA_AUTO_PULL=false` and then continue reading this section, otherwise you can skip to [optional services](#optional-services).
+If you would like to disable this feature, set `OLLAMA_AUTO_PULL=false` and then continue reading this section; otherwise you can skip to [optional services](#optional-services).
 
 First, you must **first pull a small embedding model that is used internally**.
 
````
````diff
@@ -221,56 +226,16 @@ JINA_API_KEY=<key-here>
 
 ## Usage
 
-With all setup steps above completed, we are ready to start a node!
-
-### 1. Choose Model(s)
-
-Based on the resources of your machine, you must decide which models that you will be running locally. For example, you can use OpenAI with their models, not running anything locally at all; or you can use Ollama with several models loaded to disk, and only one loaded to memory during its respective task. Available models (see [here](https://github.com/andthattoo/ollama-workflows/blob/main/src/program/atomics.rs#L269) for latest) are:
-
-#### Ollama Models
-
-- `finalend/hermes-3-llama-3.1:8b-q8_0`
-- `phi3:14b-medium-4k-instruct-q4_1`
-- `phi3:14b-medium-128k-instruct-q4_1`
-- `phi3.5:3.8b`
-- `phi3.5:3.8b-mini-instruct-fp16`
-- `llama3.1:latest`
-- `llama3.1:8b-instruct-q8_0`
-- `gemma2:9b-instruct-q8_0`
-
-#### OpenAI Models
-
-- `gpt-3.5-turbo`
-- `gpt-4-turbo`
-- `gpt-4o`
-- `gpt-4o-mini`
-
-> [!TIP]
->
-> If you are using Ollama, make sure you have pulled the required models, as specified in the [section above](#4-setup-ollama-for-ollama-users)!
-
-### 2. Start Docker
-
-Our node will be running within a Docker container, so we should make sure that Docker is running before the next step. You can launch Docker via its [desktop application](https://www.docker.com/products/docker-desktop/), or a command such as:
-
-```sh
-sudo systemctl start docker
-```
-
-> [!NOTE]
->
-> You don't need to do this step if Docker is already running in the background.
-
-### 3. Run Node
-
-It's time to run our compute node. We have a launcher cli app that makes this much easier: you can either run it by double-clicking the `dkn-compute-launcher` app (`dkn-compute-launcher.exe` on Windows) from your file explorer, or use it from terminal (or cmd/powershell in Windows).
+**With all setup steps above completed, we are ready to start a node!** Either double-click the downloaded `dkn-compute-launcher` app (`dkn-compute-launcher.exe` on Windows) from your file explorer, or run it from the terminal (`cmd`/PowerShell on Windows).
 
 See the available commands with:
 
 ```sh
 # macos or linux
 ./dkn-compute-launcher --help
+```
 
+```sh
 # windows
 .\dkn-compute-launcher.exe --help
 ```
````
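On macOS and Linux, the extracted binary may lack the execute permission depending on how it was unzipped — a generic fix, not specific to this launcher:

```sh
# mark the launcher binary as executable, then run it
chmod +x dkn-compute-launcher
./dkn-compute-launcher --help
```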
````diff
@@ -280,81 +245,58 @@ Then simply run the cli app, it will ask you to enter required inputs:
 ```sh
 # macos or linux
 ./dkn-compute-launcher
-
-# windows
-.\dkn-compute-launcher.exe
 ```
 
-Or you can directly pass the running models using `-m` flags
-
 ```sh
-# macos or linux
-./dkn-compute-launcher -m=llama3.1:latest -m=gpt-3.5-turbo
-
 # windows
-.\dkn-compute-launcher.exe -m=llama3.1:latest -m=gpt-3.5-turbo
-```
-
-Launcher app will run the containers in the background. You can check their logs either via the terminal or from [Docker Desktop](https://www.docker.com/products/docker-desktop/).
-
-#### Running in Debug Mode
-
-To print DEBUG-level logs for the compute node, you can add `--dev` argument to the launcher app. For example:
-
-```sh
-./dkn-compute-launcher -m=gpt-4o-mini --dev
+.\dkn-compute-launcher.exe
 ```
 
-Running in debug mode will also allow you to see behind the scenes of Ollama Workflows, i.e. you can see the reasoning of the LLM as it executes the task.
-
-> Similarly, you can run in trace mode with `--trace` to see trace logs, which cover low-level logs from the p2p client.
+You will see logs of the compute node on the same terminal!
 
-### 4. Looking at Logs
+You can stop the node as usual by pressing <kbd>Control + C</kbd>, or kill it from the terminal.
 
-To see your logs, you can go to [Docker Desktop](https://www.docker.com/products/docker-desktop/) and see the running containers and find `dkn-compute-node`. There, open the containers within the compose (click on `>` to the left) and click on any of the container to see its logs.
+### Choosing Models
 
-Alternatively, you can use `docker compose logs` such as below:
+You will be asked to provide your choice of models within the CLI. You can also pass them from the command line using `-m` flags:
 
 ```sh
-docker compose logs -f compute # compute node logs
-docker compose logs -f ollama # ollama logs
+# macos or linux
+./dkn-compute-launcher -m=llama3.1:latest -m=gpt-3.5-turbo
 ```
 
-The `-f` option is so that you can track the logs from terminal. If you prefer to simply check the latest logs, you can use a command such as:
-
 ```sh
-# logs from last 1 hour
-docker compose logs --since=1h compute
-
-# logs from last 30 minutes
-docker compose logs --since=30m compute
+# windows
+.\dkn-compute-launcher.exe -m=llama3.1:latest -m=gpt-3.5-turbo
 ```
 
-### 5. Stopping the Node
-
-When you start your node with `dkn-compute-launcher`, it will wait for you in the same terminal to do CTRL+C before stopping. Once you do that, the containers will be stopped and removed. You can also kill the containers manually, doing CTRL+C afterwards will do nothing in such a case.
-
-> [!NOTE]
->
-> Sometimes it may not immediately exit whilst executing a task, if you REALLY need to quite the process you can kill it manually.
-
-### Using Ollama
+[Available models](https://github.com/andthattoo/ollama-workflows/blob/main/src/program/models.rs) are given below:
 
-> If you don't have Ollama installed, you can ignore this section.
+#### Ollama Models
 
-If you have Ollama installed already (e.g. via `brew install ollama`) then the launcher script app always use it. Even if the Ollama server is not running, the launcher app will initiate it with `ollama serve` and terminate it when the node is being stopped.
+- `finalend/hermes-3-llama-3.1:8b-q8_0`
+- `phi3:14b-medium-4k-instruct-q4_1`
+- `phi3:14b-medium-128k-instruct-q4_1`
+- `phi3.5:3.8b`
+- `phi3.5:3.8b-mini-instruct-fp16`
+- `llama3.1:latest`
+- `llama3.1:8b-instruct-q8_0`
+- `gemma2:9b-instruct-q8_0`
 
-If you would like to explicitly use Docker Ollama instead, you can do this by passing the `--docker-ollama` option.
+#### OpenAI Models
 
-```sh
-# Run with local ollama
-./dkn-compute-launcher -m=phi3 --docker-ollama
-```
+- `gpt-3.5-turbo`
+- `gpt-4-turbo`
+- `gpt-4o`
+- `gpt-4o-mini`
 
-> [!TIP]
->
-> There are three Docker Compose Ollama options: `ollama-cpu`, `ollama-cuda`, and `ollama-rocm`. The launcher app will decide which option to use based on the host machine's GPU specifications.
+The launcher app will run the node, and you can check its logs via the terminal.
 
 ### Additional Static Nodes
 
 You can add additional relay nodes & bootstrap nodes from environment, using the `DKN_RELAY_NODES` and `DKN_BOOTSTRAP_NODES` variables respectively. Simply write the `Multiaddr` string of the static nodes as comma-separated values, and the compute node will pick them up at the start.
+
+```sh
+# dummy example
+DKN_BOOTSTRAP_NODES=/ip4/44.206.245.139/tcp/4001/p2p/16Uiu2HAm4q3LZU2TeeejKK4fff6KZdddq8Kcccyae4bbbF7uqaaa
+```
````
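Since these variables accept comma-separated values, multiple static nodes would be written as in this sketch (placeholder addresses modeled on the dummy example above):

```sh
# dummy example with two comma-separated Multiaddr entries
DKN_RELAY_NODES=/ip4/10.0.0.1/tcp/4001/p2p/16Uiu2HAm4q3LZU2TeeejKK4fff6KZdddq8Kcccyae4bbbF7uqaaa,/ip4/10.0.0.2/tcp/4001/p2p/16Uiu2HAm4q3LZU2TeeejKK4fff6KZdddq8Kcccyae4bbbF7uqaaa
```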

src/main.rs

Lines changed: 10 additions & 1 deletion
````diff
@@ -10,8 +10,17 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
     env_logger::builder()
         .format_timestamp(Some(env_logger::TimestampPrecision::Millis))
         .init();
+
     log::info!(
-        "Initializing Dria Compute Node (version {})",
+        r#"
+
+██████╗ ██████╗ ██╗ █████╗
+██╔══██╗██╔══██╗██║██╔══██╗   Dria Compute Node
+██║  ██║██████╔╝██║███████║   v{}
+██║  ██║██╔══██╗██║██╔══██║   https://dria.co
+██████╔╝██║  ██║██║██║  ██║
+╚═════╝ ╚═╝  ╚═╝╚═╝╚═╝  ╚═╝
+"#,
         dkn_compute::DRIA_COMPUTE_NODE_VERSION
     );
 
````
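Since the node initializes `env_logger`, log verbosity should follow the standard `RUST_LOG` convention — a sketch, assuming the launcher forwards environment variables to the compute node process:

```sh
# env_logger convention: raise the node's log level to debug
# (assumption: the launcher passes its environment through to the node)
RUST_LOG=debug ./dkn-compute-launcher
```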
