
Commit d4b3afb

prep for new launcher, add new models
1 parent 256d78a commit d4b3afb

File tree

7 files changed: +49 -757 lines changed


Cargo.lock

Lines changed: 2 additions & 2 deletions
Some generated files are not rendered by default.

Cargo.toml

Lines changed: 2 additions & 2 deletions
````diff
@@ -1,6 +1,6 @@
 [package]
 name = "dkn-compute"
-version = "0.1.9"
+version = "0.2.0"
 edition = "2021"
 license = "Apache-2.0"
 readme = "README.md"
@@ -45,7 +45,7 @@ sha3 = "0.10.8"
 fastbloom-rs = "0.5.9"
 
 # workflows
-ollama-workflows = { git = "https://github.com/andthattoo/ollama-workflows", rev = "91f3086" }
+ollama-workflows = { git = "https://github.com/andthattoo/ollama-workflows", rev = "aaa887e" }
 
 # peer-to-peer
 libp2p = { git = "https://github.com/anilaltuner/rust-libp2p.git", rev = "7ce9f9e", features = [
````
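The small Cargo.lock change above (2 additions, 2 deletions) follows from re-pinning `ollama-workflows`: after editing the `rev` in Cargo.toml, the lockfile is typically refreshed with a targeted update. A minimal sketch using standard Cargo commands (not recorded in this commit):

```sh
# refresh only the re-pinned git dependency in Cargo.lock
cargo update -p ollama-workflows

# confirm the crate still builds against the new revision
cargo build --release
```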

docs/NODE_GUIDE.md

Lines changed: 33 additions & 33 deletions
````diff
@@ -6,24 +6,16 @@ Running a Dria Compute Node is pretty straightforward.
 
 ### Software
 
-You need the following applications to run compute node:
+You only need **Docker** to run the node! You can check if you have it by printing its version:
 
-- **Git**: We will use `git` to clone the repository from GitHub, and pull latest changes for updates later.
-- **Docker**: Our services will make use of Docker so that the node can run on any machine.
+```sh
+docker -v
+```
 
 > [!CAUTION]
 >
 > On **Windows** machines, Docker Desktop is required to be running with **WSL2**. You can check the Docker Desktop Windows installation guide from [here](https://docs.docker.com/desktop/install/windows-install/)
 
-> [!TIP]
->
-> You can check if you have these via:
->
-> ```sh
-> which git
-> which docker
-> ```
-
 ### Hardware
 
 **To learn about hardware specifications such as required CPU and RAM, please refer to [node specifications](./NODE_SPECS.md).**
````
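Note that the `docker -v` check added above only verifies that the Docker client is installed; it does not confirm that the daemon is running. A quick generic check, using standard Docker commands (not part of this guide):

```sh
# prints the client version even when the daemon is down
docker -v

# talks to the daemon; fails if Docker Desktop / dockerd is not running
docker info --format '{{.ServerVersion}}'
```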
````diff
@@ -38,9 +30,9 @@ In general, if you are using Ollama you will need the memory to run large models
 
 To be able to run a node, we need to make a few simple preparations. Follow the steps below one by one.
 
-### 1. Download DKN-Compute-Launcher
+### 1. Download the [Launcher](https://github.com/firstbatchxyz/dkn-compute-launcher)
 
-We have a [dkn-launcher](https://github.com/firstbatchxyz/dkn-compute-launcher) cli app for easily setting up the environment and running the compute node. We will install that first.
+We have a [cross-platform node launcher](https://github.com/firstbatchxyz/dkn-compute-launcher) to easily set up the environment and run the compute node. We will install that first.
 
 Download the appropriate ZIP file for your system using the commands below or from your [browser](https://github.com/firstbatchxyz/dkn-compute-launcher/releases/tag/v0.0.1). Make sure to replace the URL with the correct version for your operating system and architecture.
 
````
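The actual asset URLs live on the release page linked above. As an illustration only — the asset name below is hypothetical, so substitute the real file name for your OS and architecture:

```sh
# hypothetical Linux amd64 asset name; check the release page for the real one
curl -LO https://github.com/firstbatchxyz/dkn-compute-launcher/releases/download/v0.0.1/dkn-compute-launcher-linux-amd64.zip
unzip dkn-compute-launcher-linux-amd64.zip
```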
````diff
@@ -123,15 +115,19 @@ Download the appropriate ZIP file for your system using the commands below or fr
 
 ### 2. Prepare Environment Variables
 
-> [!TIP]
->
-> Speed-running the node execution:
->
-> Optionally, you can also handle the environment variables on the fly by just running the `dkn-compute-launcher` cli-app directly, since it'll ask you to enter the required environment variables.
->
-> If you prefer this you can move on to the [Usage](#usage) section
+With our launcher, setting up the environment variables happens on the fly: just run the `dkn-compute-launcher` CLI application directly, and it will ask you to enter any required environment variables that are missing! This way, you won't have to copy and create the environment variables yourself; the CLI does it for you.
+
+If you prefer this method, you can move directly on to the [Usage](#usage) section. If you would like to do this part manually, you can continue reading this section.
+
+#### Create `.env` File
 
-Dria Compute Node makes use of several environment variables. We will fill out the missing parts witin `.env` file in a moment.
+Dria Compute Node makes use of several environment variables. Let's create an `.env` file from the given example first:
+
+```sh
+cp .env.example .env
+```
+
+We will fill out the missing parts within the `.env` file in a moment.
 
 > [!NOTE]
 >
````
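A minimal sketch of that speed-run path, assuming the extracted launcher binary is named `dkn-compute-launcher` (the exact name may differ per platform):

```sh
# run the launcher directly; it prompts for any missing environment variables
./dkn-compute-launcher
```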
````diff
@@ -153,15 +149,15 @@ Dria Compute Node makes use of several environment variables. We will fill out t
 
 ### 3. Prepare Ethereum Wallet
 
-Dria makes use of the same Ethereum wallet, that is the recipient of your hard-earned rewards! Place your private key at `DKN_WALLET_SECRET_KEY` in `.env` without the 0x prefix. It should look something like:
+Dria makes use of the same Ethereum wallet, which is the recipient of your hard-earned rewards! Place your private key at `DKN_WALLET_SECRET_KEY` in `.env` without the `0x` prefix. It should look something like:
 
 ```sh
 DKN_WALLET_SECRET_KEY=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
 ```
 
 > [!CAUTION]
 >
-> Always make sure your private key is within the .gitignore'd `.env` file, nowhere else! To be even safer, you can use a throwaway wallet, you can always transfer your rewards to a main wallet afterwards.
+> Always make sure your private key is within the .gitignore'd `.env` file, nowhere else! To be even safer, you can use a throw-away wallet; you can always transfer your claimed rewards to a main wallet afterwards.
 
 ### 4. Setup LLM Provider
 
````

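The guide does not say how to create a throw-away wallet. One generic way (an assumption, not from this commit) is to generate 32 random bytes as hex, which is a valid secp256k1 private key in virtually all cases:

```sh
# 32 random bytes, hex-encoded, with no 0x prefix
openssl rand -hex 32
```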
````diff
@@ -177,22 +173,26 @@ OPENAI_API_KEY=<YOUR_KEY>
 
 #### For Ollama
 
-Of course, first you have to install Ollama; see their [download page](https://ollama.com/download). Then, you must **first pull a small embedding model that is used internally**.
+First, install Ollama if you haven't already! See their [download page](https://ollama.com/download) and follow the instructions there. The models that we want to use have to be pulled to Ollama before we can use them.
+
+> [!TIP]
+>
+> By default, the compute node will download any missing model automatically at startup. This is enabled via `OLLAMA_AUTO_PULL=true` in `.env`. If you would like to disable this feature, set `OLLAMA_AUTO_PULL=false` and then continue reading this section; otherwise, you can skip to [optional services](#optional-services).
+
+You must **first pull a small embedding model that is used internally**:
 
 ```sh
 ollama pull hellord/mxbai-embed-large-v1:f16
 ```
 
-For the models that you choose (see list of models just below [here](#1-choose-models)) you can download them with same command. Note that if your model size is large, pulling them may take a while.
+For the models that you choose (see the list of models just below, [here](#1-choose-models)), you can download them with the same command. Note that if your model is large, pulling it may take a while. For example:
 
 ```sh
-# example for phi3:3.8b
-ollama pull phi3:3.8b
+# example
+ollama pull llama3.1:latest
 ```
 
 > [!TIP]
->
-> Alternatively, you can set `OLLAMA_AUTO_PULL=true` in the `.env` so that the compute node will always download the missing models for you.
 
 #### Optional Services
 
````
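To confirm that the pulls above succeeded, Ollama's standard listing command shows what is available locally:

```sh
# lists locally available models and their sizes
ollama list
```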
````diff
@@ -216,11 +216,11 @@ Based on the resources of your machine, you must decide which models that you wi
 - `adrienbrault/nous-hermes2theta-llama3-8b:q8_0`
 - `phi3:14b-medium-4k-instruct-q4_1`
 - `phi3:14b-medium-128k-instruct-q4_1`
-- `phi3:3.8b`
-- `llama3.1:latest`
-- `llama3.1:8b-instruct-q8_0`
 - `phi3.5:3.8b`
 - `phi3.5:3.8b-mini-instruct-fp16`
+- `llama3.1:latest`
+- `llama3.1:8b-instruct-q8_0`
+- `gemma2:9b-instruct-q8_0`
 
 #### OpenAI Models
 
````

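The chosen models are then configured through the node's environment file. A hypothetical sketch, assuming a comma-separated `DKN_MODELS` variable (verify the exact variable name in `.env.example`):

```sh
# hypothetical selection of two Ollama models from the list above
DKN_MODELS=phi3.5:3.8b,llama3.1:8b-instruct-q8_0
```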