Ferrix is a high-performance Rust trading system for Pump.fun to PumpSwap migration flow on Solana.
It is not just a "bot binary". It is a full stack made of:
- a live event-driven execution engine (`ferrix`),
- a telemetry and simulation toolchain (`telemetry_tool`, `walk_forward_split`, `backtest_logic`),
- an always-on autotuning engine (`run_deep_evolution.sh` + `optimize_params`) that continuously improves `best_strategy_params.json` while production trading stays online.
The design goal is simple: fast detection, strict safety, resilient execution, and continuous model adaptation.
At a high level, Ferrix runs two loops:
- Live trading loop:
  - Detect migration events in real time.
  - Analyze token safety and tradability.
  - Enter and manage positions through actor-based state management.
  - Persist trades and portfolio state to SQLite.
- Optimization loop:
  - Transform raw telemetry into deterministic ground-truth simulation data.
  - Perform walk-forward + holdout evaluation at massive parallel scale.
  - Promote new champion params only when blended out-of-sample quality improves.
  - Signal production to restart only when safe.
This separation of concerns is the core strength of the repo.
Core runtime:
- `src/main.rs` - process bootstrap, logging, metrics endpoint, DB init, app entry.
- `src/app/mod.rs` - command routing and full runtime pipeline wiring.
- `src/migration_watcher.rs` - migration event ingestion and pre-warm logic.
- `src/migrated_checker.rs` - concurrent honeypot + contract safety checks.
- `src/actors/position_manager/` - actor-owned position state, entry observers, sell dispatch.
- `src/price/` and `src/laserstream_client/` - stream clients and price/vault state plumbing.
- `src/services/trading.rs` - trade signal processor, buy/sell execution flow.
- `src/persistence/` - SQLite schema + persistence tasks + caches.
- `src/metrics.rs` - Prometheus metrics catalog.
Research/backtesting/tuning stack:
- `src/bin/telemetry_tool.rs` - converts raw telemetry into clean ground-truth lifecycle data.
- `src/bin/walk_forward_split.rs` - train/WF/holdout split generation.
- `src/bin/optimize_params.rs` - high-throughput evolutionary optimizer.
- `src/bin/backtest_logic.rs` - CLI wrapper around the shared simulation engine.
- `src/sim.rs` - hot-path simulator used by the optimizer and the backtest CLI.
- `run_deep_evolution.sh` - continuous autotuning orchestrator.
- `auto_restart.sh` - safe rollout manager for champion updates.
src/main.rs does the production boot sequence:
- sets up the Prometheus recorder and serves `/metrics` (default `127.0.0.1:9000`),
- initializes dual JSON logging (stdout + rolling file logs),
- loads strategy params from `best_strategy_params.json`,
- loads `known_wallets.json` and starts the wallet-file watcher,
- initializes `trades.db`, then calls `app::run(...)`.
CLI subcommands are defined in src/config.rs:
- `token --auto-trade` - token creation watcher mode.
- `migration [--dummy-event] [--checks-off]` - migration watcher mode.
- `autobuysell [--checks-off]` - analysis-driven auto execution mode.
- `test-analysis --token-list-path <file>` - batch risk analysis over a token list.
The main production path in src/app/mod.rs is:
MigrationWatcherManager
- Subscribes to LaserStream transaction feed.
- Extracts migration events.
- Emits telemetry migration events.
- Pre-warms vault state cache for execution realism.
MigratedCheckerManager
- Runs analysis tasks per event in independent async tasks.
- Uses `tokio::join!` to run:
  - the honeypot check,
  - the contract safety check.
- For passing events, emits verified trade signal.
PositionManagerActor
- Owns canonical in-memory position state (single-writer actor model).
- Manages observer states for entry gating.
- Evaluates sell conditions from unified tick snapshots.
- Dispatches `SellOrder` commands.
`TradeSignalProcessor` (`src/services/trading.rs`)
- Receives verified events.
- Performs final circuit-breaker gate.
- Executes buys through `PumpSwapTradingClient`.
- Sends `PositionCommand` updates back to the actor.
SellOrderProcessor
- Executes sell transactions.
- Handles slippage retry paths and creator-cache refresh retries.
- Confirms and computes realized outputs.
- Sends `SellConfirmed` or `SellFailed` to the actor.
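The single-writer pattern behind `PositionManagerActor` can be sketched in a few lines. This is an illustrative sketch only: the command variants and channel wiring below are assumptions (the real actor defines its own `PositionCommand` set and runs on Tokio rather than an OS thread), but the core idea is the same, as only the actor loop ever mutates position state, so no locks are needed around it.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Hypothetical command set; the real PositionCommand in src/actors/ differs.
enum PositionCommand {
    Open { mint: String, size_sol: f64 },
    SellConfirmed { mint: String },
}

// Single-writer actor: only this loop touches the position map.
fn run_position_actor(rx: mpsc::Receiver<PositionCommand>) -> usize {
    let mut positions: HashMap<String, f64> = HashMap::new();
    for cmd in rx {
        match cmd {
            PositionCommand::Open { mint, size_sol } => {
                positions.insert(mint, size_sol);
            }
            PositionCommand::SellConfirmed { mint } => {
                positions.remove(&mint);
            }
        }
    }
    positions.len() // open positions remaining once all senders hang up
}

fn demo() -> usize {
    let (tx, rx) = mpsc::channel();
    let actor = thread::spawn(move || run_position_actor(rx));
    tx.send(PositionCommand::Open { mint: "A".into(), size_sol: 0.5 }).unwrap();
    tx.send(PositionCommand::Open { mint: "B".into(), size_sol: 0.5 }).unwrap();
    tx.send(PositionCommand::SellConfirmed { mint: "A".into() }).unwrap();
    drop(tx);
    actor.join().unwrap()
}
```

Because all mutation is serialized through one channel, `TradeSignalProcessor` and `SellOrderProcessor` can run concurrently without racing on position state.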
src/actors/position_manager/feature_engine.rs builds derived microstructure features (RVR, volume windows, momentum, drawdown, red streaks, whale dump flags, creator/exit-liquidity signals), then emits TickSnapshot for decision logic.
Sell logic in src/actors/position_manager/logic.rs is phase-aware (Chaos/Discovery/Trending/Mature) with adaptive rules for:
- shock-drop exits,
- drawdown caps by run-up tier,
- momentum reversal,
- churn/dud exits,
- adaptive trailing,
- creator/exit-liquidity preemptive exits,
- panic flow exits.
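One of the rules above, drawdown caps by run-up tier, can be expressed as a pure function of phase and run-up. The phase names come from the source; the thresholds and tier cutoffs below are invented for illustration and do not reflect the tuned values in `best_strategy_params.json` or `logic.rs`.

```rust
#[derive(Clone, Copy)]
enum Phase { Chaos, Discovery, Trending, Mature }

// Illustrative drawdown cap: later phases tolerate less give-back,
// and larger run-ups tighten the cap further.
fn drawdown_cap(phase: Phase, run_up_pct: f64) -> f64 {
    let base = match phase {
        Phase::Chaos => 0.35,
        Phase::Discovery => 0.30,
        Phase::Trending => 0.22,
        Phase::Mature => 0.15,
    };
    if run_up_pct > 100.0 {
        base * 0.5
    } else if run_up_pct > 50.0 {
        base * 0.75
    } else {
        base
    }
}

// Exit when drawdown from peak exceeds the phase/tier-adjusted cap.
fn should_exit(phase: Phase, run_up_pct: f64, drawdown_pct: f64) -> bool {
    drawdown_pct >= drawdown_cap(phase, run_up_pct)
}
```

Keeping each rule a pure function of the `TickSnapshot` is what lets the same logic run unchanged inside the live actor and the `src/sim.rs` backtest hot path.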
trades.db is initialized and maintained via src/persistence/.
Major tables include:
- `trades` - open/closed trade lifecycle records.
- `portfolio_summary` - aggregate portfolio counters and PnL rollups.
- `circuit_breaker_events` - risk-trigger history.
- `sniper_cache` and `creator_reputation_cache` - analysis acceleration.
- `test_analysis` - batch analysis output.
- `address_lookup_tables` - ALT cache.
- metadata and transaction cache tables for data-fetch acceleration.
On restart, open positions are loaded and re-hydrated into actor state.
Ferrix continuously updates:
- Prometheus metrics via `src/metrics.rs`,
- the runtime status file `/dev/shm/ferrix_status.json` via actor-scheduled metrics: `open_positions`, `observers`, `active_telemetry`.
That status file is consumed by auto_restart.sh so model updates only roll out during safe windows.
This is the input pipeline for autotuning.
src/bin/telemetry_tool.rs:
- parses the raw telemetry event stream (`migration`, `flow`, `tick`),
- groups by mint,
- repairs missing timestamps/age fields for legacy entries,
- reconstructs 1s flow stats and net-flow signals,
- reconstructs pool lifecycle and writes deterministic lifecycle entries to `tel_out/ground_truth_analysis.jsonl`.
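The group-by-mint step can be sketched as follows; the `(mint, slot)` tuple here is a stand-in for the real parsed telemetry events, which carry full event payloads.

```rust
use std::collections::BTreeMap;

// Group telemetry records by mint and sort each lifecycle chronologically,
// so downstream reconstruction is deterministic regardless of input order.
fn group_by_mint(events: &[(&str, u64)]) -> BTreeMap<String, Vec<u64>> {
    let mut by_mint: BTreeMap<String, Vec<u64>> = BTreeMap::new();
    for (mint, slot) in events {
        by_mint.entry((*mint).to_string()).or_default().push(*slot);
    }
    for slots in by_mint.values_mut() {
        slots.sort_unstable();
    }
    by_mint
}
```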
src/bin/walk_forward_split.rs:
- sorts lifecycle records chronologically by slot,
- creates `permanent_holdout.jsonl` from the newest 5 percent of records,
- creates `train_set.jsonl` from the earlier 95 percent,
- generates walk-forward rounds: `wf_round_<n>_train.jsonl`, `wf_round_<n>_test.jsonl`.
Default deep-evo settings use sliding windows with embargo to reduce leakage.
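A minimal sketch of this split policy follows: newest 5 percent to a permanent holdout, then sliding walk-forward windows separated from their training data by an embargo gap. The exact window indexing is an assumption; the real logic lives in `src/bin/walk_forward_split.rs`.

```rust
// Split chronologically sorted records: last 5% become the permanent holdout.
fn split_holdout<T: Clone>(sorted: &[T]) -> (Vec<T>, Vec<T>) {
    let holdout_len = (sorted.len() as f64 * 0.05).ceil() as usize;
    let cut = sorted.len() - holdout_len;
    (sorted[..cut].to_vec(), sorted[cut..].to_vec())
}

// One sliding WF round over a train set of length n: test on a trailing
// window, train on everything before it minus the embargo gap.
// Round 0 is the most recent window; indexing is an illustrative assumption.
fn wf_round(n: usize, test_window: usize, embargo: usize, round: usize)
    -> Option<(std::ops::Range<usize>, std::ops::Range<usize>)>
{
    let test_end = n.checked_sub(round * test_window)?;
    let test_start = test_end.checked_sub(test_window)?;
    let train_end = test_start.checked_sub(embargo)?;
    Some((0..train_end, test_start..test_end))
}
```

The embargo gap matters because adjacent trades share market state; without it, information from the test window leaks backward into training.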
This script is the always-on orchestration layer for model evolution.
It is currently labeled in-script as Data-Driven Nanocap Evolution v9.3.
- Champion file: `best_strategy_params.json`.
- Exploration strategy: normal/explore/chaos modes based on stagnation.
- Data refresh per generation: telemetry reprocessing and WF split regeneration.
- Dynamic throughput policy: auto-calculated minimum entry rate from live migration cadence.
- Promotion policy: blended out-of-sample scoring and holdout gates.
- Production rollout signal: touches `/dev/shm/ferrix_pending_update` when a champion is accepted.
run_deep_evolution.sh expects binaries in repository root:
- `./optimize_params`
- `./telemetry_tool`
- `./walk_forward_split`
Build and place them first:
```sh
cargo build --release --bin optimize_params --bin telemetry_tool --bin walk_forward_split --bin backtest_logic
cp target/release/optimize_params .
cp target/release/telemetry_tool .
cp target/release/walk_forward_split .
cp target/release/backtest_logic .
chmod +x optimize_params telemetry_tool walk_forward_split backtest_logic
```

The script also sources `.env_PROD` when present (for values such as `SIM_BUY_AMOUNT_SOL`, slippage thresholds, and other execution-realism knobs).
Each generation does:
- Choose exploration regime
  - `NORMAL`, `EXPLORE`, or `CHAOS` from stagnation counters.
  - During the first `FORCED_CHAOS_GENS` generations it forces wide exploration.
- Rotate sweep stage groups
  - Chooses one 3-stage `multi-sweep` sequence (entry/trail/dd/phase/churn/etc.).
- Refresh datasets
  - `./telemetry_tool telemetry.jsonl`
  - `./walk_forward_split --min-train-trades 10 --test-window-trades 450 --embargo-trades 40 --n-rounds 5 --sliding`
- Compute dynamic entry-rate target
  - Tail-reads recent migrations from `telemetry.jsonl` (in-script Python).
  - Derives migrations/day via the median interval.
  - Sets the target entry rate to satisfy `TARGET_TRADES_PER_DAY` (default 45/day), with clamps and smoothing.
- Run optimizer
  - Calls `optimize_params` with:
    - a large random sample budget (1.5M to 3.0M),
    - `--use-cma` (CMA-ES sampling),
    - a realistic execution model (latency, slippage, fees),
    - WF rounds and the holdout blend.
- Parse and evaluate results
  - Reads optimizer output for the current champion CV/blended score.
  - Detects a new champion via the `NEW CHAMPION CONFIRMED` marker.
  - Tracks the best-ever blended score and degradation flags.
- Fallback search when stuck
- On prolonged stagnation, runs a "hail mary" exploration pass with wider jitter focused on entry/trailing rules.
- Validate and promote
  - Runs minimal sanity validation on new params.
  - On pass:
    - keeps the champion,
    - appends a journal diff,
    - writes the update flag `/dev/shm/ferrix_pending_update`.
  - On fail:
    - restores the backup champion.
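The dynamic entry-rate step above reduces to simple arithmetic: median migration interval → migrations/day → the entry fraction needed to hit the daily trade target. The real computation is in-script Python with smoothing; this Rust sketch keeps only the core math, and the clamp bounds are illustrative assumptions.

```rust
// Median of recent inter-migration intervals, in seconds.
fn median_interval_secs(mut intervals: Vec<f64>) -> f64 {
    intervals.sort_by(|a, b| a.partial_cmp(b).unwrap());
    intervals[intervals.len() / 2]
}

// Fraction of migrations we must enter to hit the daily trade target,
// clamped to illustrative sane bounds.
fn target_entry_rate(intervals: Vec<f64>, target_trades_per_day: f64) -> f64 {
    let migrations_per_day = 86_400.0 / median_interval_secs(intervals);
    (target_trades_per_day / migrations_per_day).clamp(0.005, 0.5)
}
```

For example, a 60-second median interval implies 1,440 migrations/day, so a 45-trade daily target needs an entry rate of roughly 3.1 percent.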
optimize_params combines:
- a walk-forward CV score,
- a holdout score,
- a blended score (default 40 percent CV + 60 percent holdout via `HOLDOUT_WEIGHT=0.60` in the script).
Promotion is not based on raw in-sample score only. A candidate must:
- pass hard safety/tail-risk gates,
- beat champion beyond minimum improvement threshold,
- pass holdout degradation constraints,
- beat champion on blended score.
This is why the pipeline is robust to overfitting drift.
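With the stated `HOLDOUT_WEIGHT=0.60`, the blend and a promotion gate look roughly like this. The margin and degradation thresholds are free parameters here, and the exact gate logic inside `optimize_params` may differ; this is a sketch of the policy, not the implementation.

```rust
// blended = (1 - w) * cv + w * holdout, with w = HOLDOUT_WEIGHT = 0.60.
fn blended(cv: f64, holdout: f64, holdout_weight: f64) -> f64 {
    (1.0 - holdout_weight) * cv + holdout_weight * holdout
}

// Candidate must beat the champion's blended score by a minimum margin
// AND must not degrade holdout quality beyond the allowed amount.
fn promote(cand_cv: f64, cand_holdout: f64,
           champ_blended: f64, champ_holdout: f64,
           min_improvement: f64, max_holdout_degradation: f64) -> bool {
    let b = blended(cand_cv, cand_holdout, 0.60);
    b > champ_blended + min_improvement
        && cand_holdout >= champ_holdout - max_holdout_degradation
}
```

Weighting the holdout above the CV score means a candidate cannot win by overfitting the walk-forward folds alone.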
run_deep_evolution.sh never directly restarts the live process.
It only drops a signal flag.
auto_restart.sh independently:
- watches for the update flag,
- polls `/dev/shm/ferrix_status.json`,
- waits until `open_positions + observers + active_telemetry == 0` (or a stale-status timeout),
- performs the service restart,
- clears the flag.
This is a strong production-safe rollout model: optimization and execution are decoupled but coordinated.
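The safe-window test that `auto_restart.sh` performs against the status file can be sketched as follows. The field names follow `/dev/shm/ferrix_status.json` as described above; the toy substring parser below is an assumption to keep the sketch dependency-free (the real script is shell, and a real implementation would use a JSON parser).

```rust
// Extract an unsigned integer counter from a flat JSON object by key.
// Toy scan, not a real JSON parser.
fn counter(json: &str, key: &str) -> Option<u64> {
    let needle = format!("\"{}\":", key);
    let start = json.find(&needle)? + needle.len();
    let rest = json[start..].trim_start();
    let end = rest.find(|c: char| !c.is_ascii_digit()).unwrap_or(rest.len());
    rest[..end].parse().ok()
}

// Restart is safe only when all activity counters read exactly zero.
fn safe_to_restart(status_json: &str) -> bool {
    ["open_positions", "observers", "active_telemetry"]
        .iter()
        .all(|k| counter(status_json, k) == Some(0))
}
```

A missing or malformed counter yields `None` and therefore "not safe", which is the right failure mode for a rollout gate.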
The speed does not come from one trick. It comes from stack-wide design choices:
- In-memory data cache for folds
  - `optimize_params` loads all WF/holdout datasets once into RAM.
  - No per-candidate disk reads inside the evaluation loop.
- Zero-I/O hot path
  - The candidate evaluation loop is pure compute (`par_iter` over param sets).
  - Writes happen only after winner selection.
- Fast data decode path
  - prefers `.rkyv` binary datasets if present,
  - otherwise uses `simd-json` with a serde fallback.
- Highly parallel execution
  - Rayon parallel candidate scoring.
  - The script drives high thread counts (`RAYON_NUM_THREADS`, default 90).
- Hot simulation engine (`src/sim.rs`)
  - precomputed phase thresholds,
  - enum-based exit reasons (no per-trade string allocation in the hot path),
  - a tight run loop tuned for repeated invocation.
- Memory and I/O locality
  - `tel_out` is linked into `/dev/shm` for RAM-disk speed in tuning runs.
- Search efficiency
  - a sensitivity cache guides jitter width,
  - CMA-ES + bounded wide exploration reduces wasted search.
- Compiler/runtime tuning
  - the release profile uses LTO, a single codegen unit, a stripped binary, and overflow checks off,
  - optional CPU-specific and PGO build scripts are provided (`build_release.sh`, `pgo_build.sh`).
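The zero-I/O hot path has a simple shape: datasets live in RAM, and candidate scoring is a pure map followed by a single reduction. In `optimize_params` the map is a Rayon `par_iter()`; this sketch uses a sequential iterator and a toy objective so it stays std-only, but the structure is the same.

```rust
// Toy parameter set; the real candidate struct is far richer.
#[derive(Clone, Copy)]
struct Params { entry_threshold: f64 }

// Pure scoring function: no allocation, no I/O. Toy objective here is
// just the count of ticks above the entry threshold.
fn score(p: Params, dataset: &[f64]) -> f64 {
    dataset.iter().filter(|&&x| x > p.entry_threshold).count() as f64
}

// Evaluate all candidates and keep the best (index, score).
// In the real optimizer, .iter() below is Rayon's .par_iter().
fn best_candidate(cands: &[Params], dataset: &[f64]) -> (usize, f64) {
    cands.iter()
        .map(|&p| score(p, dataset))
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .unwrap()
}
```

Because `score` takes only borrowed, read-only data, swapping the iterator for `par_iter()` parallelizes it without locks, which is what makes the high thread counts pay off.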
In this repo, "backtests/sec" in evolution context is most meaningfully interpreted as fold-level simulation evaluations per second (candidate-fold evaluations), not full end-to-end generation cycles.
With script defaults at full exploration budget:
- up to 3,000,000 candidates,
- 5 WF rounds,
- 3 sweep stages per optimizer call,
- around 45,000,000 fold evaluations per generation.
On tuned high-core hardware, this architecture is engineered for the 70,000+ fold-backtests/sec class.
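The arithmetic behind the per-generation figure is simply candidates × WF rounds × sweep stages; dividing by the throughput class gives rough wall-clock per generation (45,000,000 / 70,000 ≈ 640 seconds of pure evaluation). A trivial check:

```rust
// Fold evaluations per generation = candidates x WF rounds x sweep stages.
fn fold_evals(candidates: u64, wf_rounds: u64, stages: u64) -> u64 {
    candidates * wf_rounds * stages
}

// Whole seconds of pure evaluation at a given fold-backtests/sec rate.
fn eval_seconds(evals: u64, evals_per_sec: u64) -> u64 {
    evals / evals_per_sec
}
```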
```sh
cargo build --release
cargo fmt -- --check
cargo clippy --features backtest -- -D warnings
cargo test --features backtest
```

Migration mode:

```sh
RUST_LOG=info cargo run --release --bin ferrix -- migration
```

Migration mode with checks disabled:

```sh
RUST_LOG=info cargo run --release --bin ferrix -- migration --checks-off
```

Autobuysell mode:

```sh
RUST_LOG=info cargo run --release --bin ferrix -- autobuysell
```

Batch test-analysis mode:

```sh
RUST_LOG=info cargo run --release --bin ferrix -- test-analysis --token-list-path ./test_tokens.txt
```

Terminal 1 (optimizer):

```sh
./run_deep_evolution.sh
```

Terminal 2 (safe rollout manager):

```sh
./auto_restart.sh
```

Runtime artifacts:
- `trades.db` - live and analysis persistence.
- `telemetry.jsonl` - continuous event/tick/flow telemetry stream.
- `logs/ferrix.log*` - rolling JSON logs.
Optimization artifacts:
- `best_strategy_params.json` - current champion params consumed by the runtime.
- `deep_evo_archives/<timestamp>_DATADRIVEN/` - archived winners.
- `sensitivity_cache.json` - optimizer sensitivity cache.
- `/tmp/ferrix_*` files - blended/CV state and dynamic rate state.
- `/dev/shm/ferrix_pending_update` - rollout signal.
- `/dev/shm/ferrix_status.json` - live safety status for the restart manager.
Ferrix is already built with the right production primitives:
- actor-owned trading state to avoid race-condition chaos,
- explicit event channels and clear stage boundaries,
- out-of-sample champion promotion (WF + holdout blend),
- non-blocking model rollout safety guards,
- persistent state and cache layers for fast restart and analysis,
- observability-first design (Prometheus + telemetry + structured logs),
- dedicated optimization stack that can keep adapting strategy DNA continuously.
This is exactly the foundation you want before scaling to larger capital, broader strategy families, or additional execution venues.
- Keep secrets in `.env`, `.env_PROD`, or credential files under controlled paths.
- Do not commit wallet keypairs, API keys, or private credentials.
- Use dedicated funded wallets for live trading.
- Monitor circuit-breaker status and degradation flags (`/dev/shm/ferrix_cv_degraded`) during production runs.