Last modified: 29-October-2025
- Introduction
- Prerequisites
- Deployment with Minikube
- Verifying the Deployment
- Production Considerations
- Other Resources
This guide provides instructions for deploying Teranode in a Kubernetes environment. While this guide shows the steps to deploy on a single server cluster using Minikube, these configurations can be adapted for production use with appropriate modifications.
For detailed hardware specifications including per-service resource requirements, see the System Requirements document.
Kubernetes deployments require:
- Sufficient cluster resources for Teranode pods
- External dependencies (Aerospike, PostgreSQL, Kafka) deployed separately or as managed services
- ReadWriteMany (RWX) storage for shared blob storage
Before you begin, ensure you have the following tools installed and configured:
Additionally, ensure you have a storage provider capable of providing ReadWriteMany (RWX) storage. As an example, this guide includes setting up an NFS server via Docker for this purpose.
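To illustrate what the RWX requirement looks like in practice, a shared blob-storage claim requests the ReadWriteMany access mode. The claim name, StorageClass, and size below are hypothetical examples, not values shipped with this guide:

```yaml
# Hypothetical PVC illustrating the ReadWriteMany requirement for shared blob storage.
# The claim name, storageClassName, and size are examples only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: teranode-shared-blob    # example name
  namespace: teranode-operator
spec:
  accessModes:
    - ReadWriteMany             # RWX: mountable read-write by multiple nodes
  storageClassName: nfs         # assumes an NFS-backed StorageClass
  resources:
    requests:
      storage: 100Gi            # illustrative size
```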
cd $YOUR_WORKING_DIR
git clone git@github.com:bsv-blockchain/teranode.git
cd teranode

Minikube creates a local Kubernetes cluster on your machine. For running Teranode, we recommend the following process:
Start minikube with recommended resources and verify its status:
# Start minikube with recommended resources
minikube start --cpus=4 --memory=8192 --disk-size=20gb
# Verify minikube status
minikube status

Teranode requires several backing services. While these services should be deployed separately in production, for local development we'll deploy them within the same cluster.
Create a namespace for the deployment:
kubectl create namespace teranode-operator

Deploy all dependencies in the teranode-operator namespace:
kubectl apply -f deploy/kubernetes/aerospike/ -n teranode-operator
kubectl apply -f deploy/kubernetes/postgres/ -n teranode-operator
kubectl apply -f deploy/kubernetes/kafka/ -n teranode-operator

To learn more, please refer to the Third Party Reference Documentation.
For this example, we will create a local folder and expose it to Minikube via a Docker-based NFS server.
docker volume create nfs-volume
docker run -d \
--name nfs-server \
-e NFS_EXPORT_0='/minikube-storage *(rw,no_subtree_check,fsid=0,no_root_squash)' \
-v nfs-volume:/minikube-storage \
--cap-add SYS_ADMIN \
-p 2049:2049 \
erichough/nfs-server
# connect the nfs-server to the minikube network
docker network connect minikube nfs-server
# create the PersistentVolume
kubectl apply -f deploy/kubernetes/nfs/

Pull and load the required Teranode images into Minikube:
You can find the latest available versions published on GitHub Container Registry:
- https://github.com/bsv-blockchain/teranode/pkgs/container/teranode
- https://github.com/bsv-blockchain/teranode-operator/pkgs/container/teranode-operator
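As a quick sanity check before pulling, the registry prefix and version tags compose into fully qualified image references. This is pure string composition and can be run anywhere; the versions shown are the ones used as examples below:

```shell
# Compose fully qualified image references from the registry prefix and version tags.
OPERATOR_VERSION=v0.2.6
TERANODE_VERSION=v0.13.1
GHCR_REGISTRY=ghcr.io/bsv-blockchain

OPERATOR_IMAGE="$GHCR_REGISTRY/teranode-operator:$OPERATOR_VERSION"
TERANODE_IMAGE="$GHCR_REGISTRY/teranode:$TERANODE_VERSION"

# These are the references passed to docker pull and minikube image load.
echo "$OPERATOR_IMAGE"
echo "$TERANODE_IMAGE"
```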
# Set image versions (use the latest versions published on GHCR, linked above)
export OPERATOR_VERSION=v0.2.6
export TERANODE_VERSION=v0.13.1
export GHCR_REGISTRY=ghcr.io/bsv-blockchain

# Load Teranode Operator
docker pull $GHCR_REGISTRY/teranode-operator:$OPERATOR_VERSION
minikube image load $GHCR_REGISTRY/teranode-operator:$OPERATOR_VERSION
# Load Teranode Public
docker pull $GHCR_REGISTRY/teranode:$TERANODE_VERSION
minikube image load $GHCR_REGISTRY/teranode:$TERANODE_VERSION

The Teranode Operator manages the lifecycle of Teranode instances:
# Install CRDs first
kubectl apply --server-side -f https://raw.githubusercontent.com/bsv-blockchain/teranode-operator/$OPERATOR_VERSION/deploy/crds.yaml
# Install operator
helm upgrade --install teranode-operator oci://ghcr.io/bsv-blockchain/helm/teranode-operator \
-n teranode-operator \
-f deploy/kubernetes/teranode/teranode-operator.yaml

Apply the Teranode configuration and custom resources:
kubectl apply -f deploy/kubernetes/teranode/teranode-configmap.yaml -n teranode-operator
kubectl apply -f deploy/kubernetes/teranode/teranode-cr.yaml -n teranode-operator

Network Configuration:
By default, this configuration deploys Teranode to connect to the teratestnet network. To connect to a different network:
- Edit deploy/kubernetes/teranode/teranode-configmap.yaml and change the network setting:
    - For BSV testnet: network: "testnet"
    - For BSV mainnet: network: "mainnet"
- For testnet or mainnet, you must enable the legacy service in deploy/kubernetes/teranode/teranode-cr.yaml:

      legacy:
        enabled: true
        spec:
          deploymentOverrides:
            imagePullPolicy: Never
            replicas: 1
            resources:
              requests:
                cpu: 100m
                memory: 256Mi
- Apply the updated configuration:

      kubectl apply -f deploy/kubernetes/teranode/teranode-configmap.yaml -n teranode-operator
      kubectl apply -f deploy/kubernetes/teranode/teranode-cr.yaml -n teranode-operator
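For orientation, the network switch amounts to a one-line change in the ConfigMap's data. The excerpt below is a sketch: the ConfigMap name and surrounding keys are assumptions, and everything except the network key is omitted:

```yaml
# Sketch of the relevant portion of teranode-configmap.yaml (other keys omitted).
apiVersion: v1
kind: ConfigMap
metadata:
  name: teranode-config         # name is an assumption
  namespace: teranode-operator
data:
  network: "testnet"            # or "mainnet"; the default targets teratestnet
```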
A fresh Teranode starts up in IDLE state by default. To start syncing from the network, you can run:
kubectl exec -it $(kubectl get pods -n teranode-operator -l app=blockchain -o jsonpath='{.items[0].metadata.name}') -n teranode-operator -- teranode-cli setfsmstate -fsmstate running

To learn more about the syncing process, please refer to the Teranode Sync Guide.
To verify your deployment:
# Check all pods are running
kubectl get pods -n teranode-operator | grep -E 'aerospike|postgres|kafka|teranode-operator'
# Check Teranode services are ready
kubectl wait --for=condition=ready pod -l app=blockchain -n teranode-operator --timeout=300s
# View Teranode logs
kubectl logs -n teranode-operator -l app=blockchain -f

For production deployments, consider:
- Deploying dependencies (Aerospike, PostgreSQL, Kafka) in separate clusters or using managed services
- Implementing proper security measures (network policies, RBAC, etc.)
- Setting up monitoring and alerting
- Configuring appropriate resource requests and limits
- Setting up proper backup and disaster recovery procedures
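As one concrete example of the security measures listed above, a default-deny ingress NetworkPolicy scoped to the namespace is a common starting point. This is a generic Kubernetes sketch, not a policy shipped with Teranode; traffic that should be allowed (for example between Teranode services and Kafka, PostgreSQL, or Aerospike) must then be permitted with additional policies:

```yaml
# Generic default-deny ingress policy for the teranode-operator namespace.
# With no ingress rules listed, all inbound traffic to selected pods is denied;
# allowed flows must be whitelisted with further NetworkPolicy objects.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: teranode-operator
spec:
  podSelector: {}      # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
```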
An example CR for a mainnet deployment is available in deploy/kubernetes/teranode/teranode-cr-mainnet.yaml.
If you need to reset your Teranode deployment, see the How to Reset Teranode guide for complete instructions on cleaning up Aerospike, PostgreSQL, and persistent volumes.