This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Teranode is a horizontally scalable BSV Blockchain node implementation using a microservices architecture. It achieves over 1 million transactions per second through distributed processing across multiple machines.
```bash
# Build main teranode binary with dashboard
make build

# Build without dashboard
make build-teranode

# Build for CI with race detection
make build-teranode-ci

# Build teranode CLI tool
make build-teranode-cli

# Build specific components
make build-chainintegrity
make build-tx-blaster
```

```bash
# Run all tests except integration tests
make test

# Run long-running tests
make longtest

# Run sequential tests
make sequentialtest

# Run smoke tests (basic functionality)
make smoketest

# Run all tests
make testall

# Run a single test
go test -v -race -tags "testtxmetacache" -run TestNameHere ./path/to/package
```
Test retry support (for flaky tests): failed tests are automatically retried to handle timing/race issues. Default: 3 retries for all test targets.

```bash
# Unit tests with retry (gotestsum native --rerun-fails)
make test TEST_RETRY_COUNT=3
make longtest TEST_RETRY_COUNT=5

# E2E tests with retry (custom retry wrapper with timeout extension)
make smoketest TEST_RETRY_COUNT=3
make sequentialtest TEST_RETRY_COUNT=5 TEST_RETRY_DELAY=3

# Disable retries (set to 0 or 1)
make test TEST_RETRY_COUNT=0
make smoketest TEST_RETRY_COUNT=1

# Database-specific sequential tests with retry
make sequentialtest-aerospike TEST_RETRY_COUNT=5
make sequentialtest-postgres TEST_RETRY_COUNT=3
make sequentialtest-sqlite TEST_RETRY_COUNT=3
```

Flaky test reports (JSON format):
- Unit tests: /tmp/teranode-test-results/unit-test-flaky.json
- Long tests: /tmp/teranode-test-results/longtest-flaky.json
- Sequential tests: console output with a flaky-test summary

See docs/testing/test-retry-mechanism.md for full documentation.

```bash
# Check only changed files vs main branch
make lint

# Check only unstaged/untracked changes
make lint-new

# Check all files
make lint-full
```
```bash
# Fix gci linting for Go files
gci write --skip-generated -s standard -s default <filename>
```

```bash
# Run teranode and dashboard in development mode
make dev

# Run only teranode
make dev-teranode

# Run only dashboard
make dev-dashboard
```

Teranode consists of multiple specialized services communicating via gRPC and Kafka:
Core Services:
- Asset Server (`services/asset/`): HTTP/WebSocket interface to blockchain data stores
- Propagation (`services/propagation/`): Receives and forwards transactions (gRPC/UDP/HTTP)
- Validator (`services/validator/`): Validates transactions against consensus rules
- Block Validation (`services/blockvalidation/`): Validates complete blocks
- Block Assembly (`services/blockassembly/`): Assembles new blocks from validated transactions
- Blockchain (`services/blockchain/`): Manages blockchain state and the FSM
- Subtree Validation (`services/subtreevalidation/`): Validates Merkle subtrees

Overlay Services:
- P2P (`services/p2p/`): Peer-to-peer network communication
- RPC (`services/rpc/`): Bitcoin-compatible JSON-RPC interface
- Legacy (`services/legacy/`): Backward compatibility with existing Bitcoin nodes

Data Stores:
- UTXO Store (`stores/utxo/`): Manages unspent transaction outputs (Aerospike-backed)
- Blob Store (`stores/blob/`): Stores transactions and subtrees (S3/filesystem)
- Blockchain Store (`stores/blockchain/`): Block headers and chain state (PostgreSQL)
Communication:
- gRPC: Service-to-service synchronous communication
- Kafka: Asynchronous event streaming between services
- HTTP/WebSocket: External client interfaces
- UDP/IPv6 Multicast: High-performance transaction propagation
Key Concepts:
- Horizontal Scaling: Services can be deployed across multiple machines
- Event-Driven: Kafka topics for decoupled communication
- UTXO Model: Bitcoin's unspent transaction output tracking
- Merkle Trees: Binary hash trees for transaction inclusion proofs
- Two-Phase Commit: Distributed transaction consistency
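As a rough illustration of the two-phase commit idea above, here is a minimal, generic sketch of the protocol. The `participant` type and the store names in it are hypothetical and do not reflect Teranode's actual coordination code:

```go
package main

import (
	"errors"
	"fmt"
)

// participant is a hypothetical resource that can vote on and apply a change.
type participant struct {
	name      string
	canCommit bool
	committed bool
}

// prepare is the phase-1 vote: return nil to vote commit, an error to vote abort.
func (p *participant) prepare() error {
	if !p.canCommit {
		return errors.New(p.name + " voted abort")
	}
	return nil
}

func (p *participant) commit()   { p.committed = true }
func (p *participant) rollback() { p.committed = false }

// twoPhaseCommit runs the classic protocol: phase 1 asks every participant
// to prepare (vote); phase 2 commits everywhere only if all votes were yes,
// otherwise it rolls everyone back.
func twoPhaseCommit(parts []*participant) error {
	for _, p := range parts { // phase 1: prepare
		if err := p.prepare(); err != nil {
			for _, q := range parts { // any abort vote rolls back all
				q.rollback()
			}
			return err
		}
	}
	for _, p := range parts { // phase 2: commit
		p.commit()
	}
	return nil
}

func main() {
	parts := []*participant{
		{name: "utxo-store", canCommit: true}, // hypothetical participant names
		{name: "blob-store", canCommit: true},
	}
	if err := twoPhaseCommit(parts); err != nil {
		fmt.Println("aborted:", err)
		return
	}
	fmt.Println("committed")
}
```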
- `settings.conf`: Default settings and environment-specific overrides
- `settings_local.conf`: Local development overrides (not committed)
- Environment contexts: `dev`, `test`, `docker`, `operator`
Services use standardized ports with optional prefixes for multi-node setups:
- Asset Server: 8090
- RPC: 9292
- P2P: 9905
- Blockchain gRPC: 8087
- Validator gRPC: 8081
Claude will automatically use specialized agents in .claude/agents/ when appropriate:
- bitcoin-expert: Bitcoin protocol, consensus rules, cryptography (automatically consulted for Bitcoin-specific tasks)
- test-writer-fixer: Automatically runs tests after code changes
- api-tester: API load testing and contract validation
- backend-architect: System design and architecture decisions
These agents work together - for example, when implementing a new Bitcoin feature:
- bitcoin-expert provides protocol guidance
- backend-architect designs the implementation
- test-writer-fixer ensures tests pass
- performance-benchmarker validates performance
The project includes a Bitcoin expert agent (.claude/agents/bitcoin-expert.md) that should be consulted for:
- Protocol Questions: Bitcoin consensus rules, script validation, transaction structure
- Cryptography: ECDSA signatures, hash functions, Merkle trees
- BSV-Specific Features: Restored opcodes, unbounded script sizes, large block handling
- Implementation Guidance: When porting bitcoin-sv functionality to teranode
Usage: Reference .claude/agents/bitcoin-expert.md when working on:
- Transaction validation logic
- Script interpreter implementation
- Consensus rule enforcement
- UTXO management
- Block validation
- Any Bitcoin protocol-specific features
- ECDSA signatures on secp256k1 curve
- Bitcoin Script (stack-based, Forth-like)
- UTXO transaction model
- Merkle tree block structure
- Proof-of-Work consensus
- BSV's restored opcodes (OP_SUBSTR, OP_LEFT, OP_RIGHT, etc.)
- Unbounded transaction and script sizes
- Unit Tests: Package-level tests with mocks
- Integration Tests: Multi-service interaction tests
- Consensus Tests (`test/consensus/`): Bitcoin script validation
- E2E Tests (`test/e2e/`): Full system tests with containers
- Sequential Tests: Order-dependent test scenarios

Build tags:
- `testtxmetacache`: Small cache for testing
- `largetxmetacache`: Production cache size
- `aerospike`: Tests requiring Aerospike
All code must follow the standards defined in docs/references/codingConventions.md.
This includes:
- Naming conventions (packages, variables, functions, interfaces, types, files)
- Error handling patterns
- Concurrency best practices
- Testing standards (testify/require, table-driven tests)
- Code formatting and linting rules
- Commenting guidelines
Key highlights:
- File names: `snake_case.go` format (enforced by CI)
- Variables: CamelCase for exported, camelCase for internal
- Functions: VerbNoun pattern (CalculateTotal, ReadFile)
- Getters: No "Get" prefix; use `Name()`, not `GetName()`
- Interfaces: Single-method interfaces end in "-er" (Reader, Writer)
- Tests: Use `require` from testify; avoid `t.Parallel()` unless testing concurrency
- Error handling: Always check `if err != nil`; use `errors.New`, not `fmt.Errorf`
- Comments: Explain "why", not "what"
- Always run linting before commits: `make lint`
- Use `gci` for import formatting in Go files
- Follow existing patterns for new services (check similar services first)
- Protobuf files generate Go code via `make gen`
- Dashboard is a Svelte application in `ui/dashboard/`
- Use TestContainers for integration tests requiring external services
- Don't use a mock blockchain client/store; use a real one backed by the `sqlitememory` store
- Don't use mock Kafka; use `in_memory_kafka.go`
- Log messages must always be on a single line - never use multi-line log statements
When creating or updating service interfaces and clients, follow this pattern to avoid exposing protobuf/gRPC types:
Interface Layer (`Interface.go`):
- Define interfaces using native Go types and existing domain types (e.g., `*PeerInfo`, `[]string`, `bool`, `error`)
- Do NOT expose protobuf types (e.g., `*p2p_api.GetPeersResponse`) in interface signatures
- Use simple, idiomatic Go return types: `error` for success/fail, `bool` for yes/no, `[]string` for lists
- Prefer existing domain structs over creating new minimal types

Client Layer (`Client.go`):
- Keep the protobuf/gRPC import for internal use (e.g., `import "github.com/bsv-blockchain/teranode/services/p2p/p2p_api"`)
- Maintain an internal gRPC client field (e.g., `client p2p_api.PeerServiceClient`)
- Public methods match the interface signatures (native types)
- Convert between native types and protobuf types internally using helper functions
Benefits:
- Cleaner API boundaries between services
- Reduces coupling to gRPC implementation details
- Makes interfaces more testable (no protobuf dependencies needed for mocks)
- Uses idiomatic Go types that are easier to work with
Example:

```go
// Interface.go - Clean, no protobuf types
type ClientI interface {
	GetPeers(ctx context.Context) ([]*PeerInfo, error)
	BanPeer(ctx context.Context, peerID string, duration int64, reason string) error
	IsBanned(ctx context.Context, peerID string) (bool, error)
	ListBanned(ctx context.Context) ([]string, error)
}

// Client.go - Internal conversion
type Client struct {
	client p2p_api.PeerServiceClient // gRPC client
}

func (c *Client) GetPeers(ctx context.Context) ([]*PeerInfo, error) {
	resp, err := c.client.GetPeers(ctx, &emptypb.Empty{})
	if err != nil {
		return nil, err
	}
	// Convert p2p_api types to native PeerInfo
	return convertFromAPIResponse(resp), nil
}
```

All developers work in forked repositories with the upstream remote pointing to the original repo.
```bash
# Always sync with upstream first
git fetch upstream
git reset --hard upstream/main

# If conflicts occur: STOP and ask user for resolution guidance
# After resolving (or if no conflicts):
git push origin <current-branch>

# Display push result message including PR creation links
```

Important: Never auto-resolve merge conflicts. Always show conflicting files and wait for user approval on the resolution strategy.
```bash
# Always branch from synced main
git checkout main
git fetch upstream
git reset --hard upstream/main
git checkout -b <new-branch-name>
```

- Push work: sync upstream → resolve conflicts (with user approval) → push to fork → show PR link
- New branch: switch to main → sync upstream → create branch
- Sync with upstream: `git checkout main && git fetch upstream && git reset --hard upstream/main`
This project is indexed by GitNexus as teranode (27349 symbols, 94911 relationships, 300 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
If any GitNexus tool warns that the index is stale, run `npx gitnexus analyze` in the terminal first.

- MUST run impact analysis before editing any symbol. Before modifying a function, class, or method, run `gitnexus_impact({target: "symbolName", direction: "upstream"})` and report the blast radius (direct callers, affected processes, risk level) to the user.
- MUST run `gitnexus_detect_changes()` before committing to verify your changes only affect expected symbols and execution flows.
- MUST warn the user if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.
- When exploring unfamiliar code, use `gitnexus_query({query: "concept"})` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.
- When you need full context on a specific symbol (callers, callees, which execution flows it participates in), use `gitnexus_context({name: "symbolName"})`.
- `gitnexus_query({query: "<error or symptom>"})`: find execution flows related to the issue
- `gitnexus_context({name: "<suspect function>"})`: see all callers, callees, and process participation
- READ `gitnexus://repo/teranode/process/{processName}`: trace the full execution flow step by step
- For regressions: `gitnexus_detect_changes({scope: "compare", base_ref: "main"})` shows what your branch changed
- Renaming: MUST use `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` first. Review the preview (graph edits are safe; text_search edits need manual review), then run with `dry_run: false`.
- Extracting/Splitting: MUST run `gitnexus_context({name: "target"})` to see all incoming/outgoing refs, then `gitnexus_impact({target: "target", direction: "upstream"})` to find all external callers before moving code.
- After any refactor: run `gitnexus_detect_changes({scope: "all"})` to verify only expected files changed.
- NEVER edit a function, class, or method without first running `gitnexus_impact` on it.
- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.
- NEVER rename symbols with find-and-replace; use `gitnexus_rename`, which understands the call graph.
- NEVER commit changes without running `gitnexus_detect_changes()` to check the affected scope.
| Tool | When to use | Command |
|---|---|---|
| `query` | Find code by concept | `gitnexus_query({query: "auth validation"})` |
| `context` | 360-degree view of one symbol | `gitnexus_context({name: "validateUser"})` |
| `impact` | Blast radius before editing | `gitnexus_impact({target: "X", direction: "upstream"})` |
| `detect_changes` | Pre-commit scope check | `gitnexus_detect_changes({scope: "staged"})` |
| `rename` | Safe multi-file rename | `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` |
| `cypher` | Custom graph queries | `gitnexus_cypher({query: "MATCH ..."})` |
| Depth | Meaning | Action |
|---|---|---|
| d=1 | WILL BREAK — direct callers/importers | MUST update these |
| d=2 | LIKELY AFFECTED — indirect deps | Should test |
| d=3 | MAY NEED TESTING — transitive | Test if critical path |
| Resource | Use for |
|---|---|
| `gitnexus://repo/teranode/context` | Codebase overview, check index freshness |
| `gitnexus://repo/teranode/clusters` | All functional areas |
| `gitnexus://repo/teranode/processes` | All execution flows |
| `gitnexus://repo/teranode/process/{name}` | Step-by-step execution trace |
Before completing any code modification task, verify:
- `gitnexus_impact` was run for all modified symbols
- No HIGH/CRITICAL risk warnings were ignored
- `gitnexus_detect_changes()` confirms changes match the expected scope
- All d=1 (WILL BREAK) dependents were updated
- Re-index: `npx gitnexus analyze`
- Check freshness: `npx gitnexus status`
- Generate docs: `npx gitnexus wiki`