Tashi Multi-Node Deployment

Automated deployment of multiple Tashi DePIN nodes on Fluence Cloud VMs using Ansible and Docker.

Quick Start

1. Set Up the Environment

Install Python dependencies:

# Create and activate a virtual environment, then install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Configure the environment file:

cp .env.example .env
# Edit .env with your Fluence credentials

Required environment variables:

FLUENCE_API_KEY=your_fluence_api_key_here
FLUENCE_SSH_KEY_NAME=your_ssh_key_name
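
These values are expected to be picked up by the make targets. If you also want them in your current shell for ad-hoc commands, a standard shell pattern is:

# Export every variable defined in .env into the current shell (optional)
set -a; source .env; set +a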

2. Create and Configure VMs

Option A: Create VMs automatically (recommended)

# Create VM with default configuration (tashi-node-vm)
make create-vm

# Create custom VM with additional storage and configuration
VM_NAME=tashi-vm-1 ADDITIONAL_STORAGE=500 CONFIG=cpu-8-ram-16gb-storage-25gb make create-vm

# List all active VMs
make list-vms

# Delete VMs (interactive)
make delete-vm

Option B: Use existing VMs (manual inventory setup)

Create an inventory file for each existing VM:

# inventory/inventory_tashi.ini (default)
[ubuntu_vm]
<vm-public-ip> ansible_user=ubuntu ansible_ssh_private_key_file=<full-path-to-ssh-priv-key> ansible_python_interpreter=/usr/bin/python3

# inventory/vm-custom-vm.ini
[ubuntu_vm]
149.5.176.21 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_ed25519 ansible_python_interpreter=/usr/bin/python3
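
Before the first playbook run, confirm Ansible can reach each VM using its built-in ping module (assuming the dependencies from requirements.txt are installed and the virtual environment is active):

# Verify SSH connectivity and the remote Python interpreter
ansible -i inventory/inventory_tashi.ini ubuntu_vm -m ping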

3. Set Up VMs (First Time Only)

# Set up each VM (run once per VM)
make setup-vm                                         # Set up default VM
INVENTORY_FILE=inventory/vm-tashi-node-vm.ini make setup-vm  # Set up created VM
INVENTORY_FILE=inventory/vm-custom-vm.ini make setup-vm      # Set up custom VM
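
Afterwards, a quick way to confirm the setup succeeded is to check that Docker is available on the VM (a minimal sanity check; the exact provisioning steps are defined by the Ansible playbook):

# Confirm Docker was installed by the setup playbook
ssh -i ~/.ssh/id_ed25519 ubuntu@<vm-ip> "docker --version"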

4. Deploy Nodes

Choose your deployment strategy:

Option A: Multiple nodes on one VM

# 3 nodes on default VM
make deploy-nodes

# 5 nodes on created VM
NODES=5 INVENTORY_FILE=inventory/vm-tashi-node-vm.ini make deploy-nodes

Option B: Distributed nodes (1 node per VM)

# 1 node on each of 3 different VMs
NODES=1 INVENTORY_FILE=inventory/vm-tashi-vm-1.ini make deploy-nodes
NODES=1 INVENTORY_FILE=inventory/vm-tashi-vm-2.ini make deploy-nodes  
NODES=1 INVENTORY_FILE=inventory/vm-tashi-vm-3.ini make deploy-nodes
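
With more VMs, the same per-VM commands can be scripted. A minimal bash loop, assuming inventory files follow the vm-<name>.ini naming used above:

# Deploy one node to each VM in sequence
for i in 1 2 3; do
  NODES=1 INVENTORY_FILE=inventory/vm-tashi-vm-$i.ini make deploy-nodes
done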

Option C: Mixed deployment

# 2 nodes on VM1, 3 nodes on VM2
NODES=2 INVENTORY_FILE=inventory/vm-tashi-vm-1.ini make deploy-nodes
NODES=3 INVENTORY_FILE=inventory/vm-tashi-vm-2.ini make deploy-nodes

5. Manage Nodes

# Check detailed container status on remote VM
make containers-status                                               # Default VM
INVENTORY_FILE=inventory/vm-tashi-node-vm.ini make containers-status # Created VM
INVENTORY_FILE=inventory/vm-custom-vm.ini make containers-status     # Custom VM

# Clean up containers on the remote VM
make cleanup-nodes                                         # Interactive cleanup (all containers)
CONTAINERS="all" make cleanup-nodes                        # Clean up all without prompting
CONTAINERS="tashi-depin-worker-1,tashi-depin-worker-3" make cleanup-nodes  # Specific containers
INVENTORY_FILE=inventory/vm-tashi-node-vm.ini CONTAINERS="all" make cleanup-nodes  # Clean up created VM

Configuration Options

Make Command Parameters

All deployment and management parameters can be specified as make variables:

VM Management:

# VM creation (default: tashi-node-vm)
VM_NAME=custom-vm make create-vm

# VM configuration options
VM_NAME=high-perf ADDITIONAL_STORAGE=1000 CONFIG=cpu-16-ram-32gb-storage-25gb make create-vm

# Valid CONFIG options:
# - cpu-2-ram-4gb-storage-25gb (default)
# - cpu-4-ram-8gb-storage-25gb  
# - cpu-8-ram-16gb-storage-25gb
# - cpu-16-ram-32gb-storage-25gb

Node Deployment:

# Number of nodes (default: 3)
NODES=5 make deploy-nodes

# Solana wallet file path (default: ./wallet.json)
SOLANA_WALLET_PATH=./my-wallet.json make deploy-nodes

# Ansible inventory file (default: inventory/inventory_tashi.ini)
INVENTORY_FILE=inventory/vm-tashi-node-vm.ini make deploy-nodes

# Sudo password for remote operations (optional)
BECOME_PASS=mypassword make deploy-nodes

# Container cleanup selection
CONTAINERS="all" make cleanup-nodes
CONTAINERS="tashi-depin-worker-1,tashi-depin-worker-3" make cleanup-nodes

# Combined example
NODES=3 SOLANA_WALLET_PATH=./wallet.json INVENTORY_FILE=inventory/vm-tashi-node-vm.ini make deploy-nodes

Node Configuration

Each deployed node gets the following (see the docker run sketch after this list):

  • Unique container name: tashi-depin-worker-1, tashi-depin-worker-2, etc.
  • Incremental ports: external 39065, 39066, 39067, ... → internal 39065 (one external port per node)
  • HTTP endpoints: 127.0.0.1:9000, 9001, 9002, ... → internal 9000 (bound to the VM's localhost)
  • Separate auth volumes: tashi-depin-worker-auth-1, auth-2, etc.
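
For intuition, the settings above correspond roughly to a docker run like the one below. This is a hypothetical sketch for node 1: the actual image name, container mount point, and flags are defined by the deployment playbook, not here.

# Illustrative only: node 1 with its unique name, port mappings, and auth volume
docker run -d \
  --name tashi-depin-worker-1 \
  -p 39065:39065 \
  -p 127.0.0.1:9000:9000 \
  -v tashi-depin-worker-auth-1:/auth \
  <tashi-depin-worker-image>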

Prerequisites

Required Files

  • Solana wallet file: JSON format with private key
  • SSH private key: For VM access
  • Ansible inventory: VM connection details
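
With the defaults above, the working tree looks roughly like this (the SSH private key can live anywhere, as long as the inventory file points to it):

.env                            # FLUENCE_API_KEY, FLUENCE_SSH_KEY_NAME
wallet.json                     # Solana keypair (default SOLANA_WALLET_PATH)
inventory/inventory_tashi.ini   # default Ansible inventory
inventory/vm-<name>.ini         # one file per created VM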

Wallet Management

Generate and manage Solana wallets for node authentication:

# Generate new wallet
solana-keygen new --outfile wallet.json

# Check wallet balance
solana balance --keypair wallet.json

# Deploy with specific wallet
SOLANA_WALLET_PATH=./production-wallet.json make deploy-nodes

Troubleshooting

# Check container status on remote VM
make containers-status
INVENTORY_FILE=inventory/vm-tashi-node-vm.ini make containers-status

# Check all containers (on remote VM)
ssh -i ~/.ssh/id_ed25519 ubuntu@<vm-ip> "docker ps --filter 'name=tashi-depin-worker'"

# Check specific node logs (on remote VM)
ssh -i ~/.ssh/id_ed25519 ubuntu@<vm-ip> "docker logs tashi-depin-worker-1 2>&1 | grep 'successfully bonded'"

# Check auth volumes (on remote VM)
ssh -i ~/.ssh/id_ed25519 ubuntu@<vm-ip> "docker volume ls | grep tashi"

# Test node connectivity (from local machine to remote VM)
curl http://<vm-ip>:39065/health  # Node 1 external endpoint
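
To probe several nodes at once, loop over the external ports (this assumes the /health endpoint shown above and the default incremental port scheme):

# Check each node's external health endpoint from the local machine
for port in 39065 39066 39067; do
  curl -fsS "http://<vm-ip>:$port/health" && echo " node on port $port is up"
done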
