
CLAUDE.md - AI Assistant Guide for SOAR Project

This document provides essential guidance for AI assistants working on the SOAR (Soaring Observation And Records) project.

GitHub Copilot Resources

For GitHub Copilot users, see these additional resources:

  • .github/copilot-instructions.md - Project-specific coding patterns and conventions optimized for Copilot
  • .github/copilot-setup-steps.yml - Complete development environment setup guide
  • .github/COPILOT-RECOMMENDATIONS.md - Advanced tips for maximizing Copilot effectiveness

These files complement this document and provide Copilot-optimized guidance.

Project Overview

SOAR is a comprehensive aircraft tracking and club management system built with:

  • Backend: Rust with Axum web framework, PostgreSQL with PostGIS
  • Frontend: SvelteKit with TypeScript, Tailwind CSS, Skeleton UI components
  • Real-time: NATS messaging for live aircraft position updates
  • Data Sources: APRS-IS integration, FAA aircraft registry, airport databases

Critical Development Rules

DOCUMENTATION PRIORITY

  • KEEP DOCUMENTATION UP TO DATE - Documentation (including README.md, CLAUDE.md, and other docs) must be updated when features change
  • When renaming services, commands, or changing architecture, update all relevant documentation
  • Documentation changes should be part of the same PR that changes the implementation
  • Outdated documentation is a bug - treat it with the same priority as code bugs

NO BYPASSING QUALITY CONTROLS

  • NEVER commit directly to main - The main branch is protected. ALWAYS create a feature/topic branch first
  • NEVER use git commit --no-verify - All commits must pass pre-commit hooks
  • NEVER push to main - Pushing to feature branches is okay, but never push directly to main
  • NEVER skip CI checks - Local development must match GitHub Actions pipeline
  • ASK BEFORE removing large amounts of working code - Get confirmation before major deletions
  • AVOID duplicate code - Check for existing implementations before writing new code
  • Pre-commit hooks run: cargo fmt, cargo clippy, cargo test, bun run lint, bun run check, bun run test

COMMIT AND DATABASE RULES

  • NEVER add Co-Authored-By lines - Do not include Claude Code attribution in commits
  • AVOID raw SQL in Diesel - Only use raw SQL if absolutely necessary, and ask first before using it
  • Always prefer Diesel's query builder and type-safe methods over raw SQL
  • CREATE INDEX CONCURRENTLY requires a metadata.toml - Diesel migrations run in transactions by default, which don't support CONCURRENTLY. To use CONCURRENTLY, add a metadata.toml file to the migration directory with run_in_transaction = false. Example:
    # migrations/2026-01-30-123456-0000_add_index/metadata.toml
    run_in_transaction = false
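The scaffolding step can be done from the shell. The timestamped directory name below is an example; `diesel migration generate` produces the real one:

```shell
# Create the migration directory (normally done by `diesel migration generate`)
mkdir -p "migrations/2026-01-30-123456-0000_add_index"

# Opt this migration out of the wrapping transaction so CONCURRENTLY works
cat > "migrations/2026-01-30-123456-0000_add_index/metadata.toml" <<'EOF'
run_in_transaction = false
EOF
```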

SERVER ACCESS AND DEPLOYMENT

  • You are running on the staging server. The staging server is named "supervillain". You can always run commands that do not modify anything. Ask before running commands that modify something.
  • You have access to the production server by running "ssh glider.flights". The user you are running as already has "sudo" access. Ask before connecting or using sudo unless I give you permission in advance.
  • NEVER attempt to deploy or restart services without explicit instructions - Only build/check code, do not deploy
  • Use cargo check for validation - Do not run deployment scripts or restart systemd services unless instructed
  • Ask before any service modifications - This includes systemctl restart, deployment scripts, or copying binaries to production locations
  • CRITICAL: Updating running binaries - You CANNOT copy over a running binary (will fail with "Text file busy"). Always delete the binary first, then copy:
    # WRONG: This will fail if service is running
    sudo cp target/debug/soar /usr/local/bin/soar-staging
    
    # CORRECT: Delete first, then copy
    sudo rm -f /usr/local/bin/soar-staging
    sudo cp target/debug/soar /usr/local/bin/soar-staging

CONFIGURATION FILE SYNCHRONIZATION (CRITICAL)

When modifying configuration files on the local system (e.g., /etc/tempo/config.yml, /etc/prometheus/prometheus.yml), you MUST also update the corresponding config file in the infrastructure/ directory in this repository. Both copies must be kept identical.

Config files and their repo locations:

  • /etc/tempo/config.yml → infrastructure/tempo-config.yml
  • /etc/loki/config.yml → infrastructure/loki-config.yml
  • /etc/prometheus/prometheus.yml → infrastructure/prometheus.yml
  • /etc/alloy/config.alloy → infrastructure/alloy-config.alloy.template (template - processed by soar-deploy with git commit for profiling source links)
  • /etc/pyroscope/config.yml → infrastructure/pyroscope-config.yml
  • /etc/netdata/netdata.conf → infrastructure/netdata-config.conf
  • /etc/grafana/provisioning/datasources/soar-postgres.yaml → infrastructure/grafana-provisioning/datasources/soar-postgres.yaml.template (template - processed by soar-deploy)
  • /etc/soar/ingest.toml → infrastructure/ingest.toml

Process for config changes:

  1. Edit the config file in infrastructure/ first
  2. For manual deployment: Copy to the system location: sudo cp infrastructure/<file> /etc/<service>/<file>
  3. Restart the service if needed: sudo systemctl restart <service>
  4. Commit the infrastructure/ change to git

  • Auto-deployed by soar-deploy: tempo-config.yml, loki-config.yml, pyroscope-config.yml, alloy-config.alloy, prometheus.yml, grafana-provisioning/ (including datasource templates)
  • Managed by scripts/setup-pgdog: /etc/pgdog/ (PgDog connection pooler config, generated from DATABASE_URL, not synced from repo)
  • Manual deployment required: netdata-config.conf

This ensures config changes are tracked in version control and can be reproduced across environments.

DATABASE SAFETY RULES (CRITICAL)

  • Development Database: soar_dev - This is where you work
  • Staging Database: soar_staging - This should be queried before the production database; its schema will be more up-to-date and it should contain approximately the same data. It is read-only for development purposes.
  • Production Database: soar - This is read-only for development purposes
  • NEVER run UPDATE, INSERT, or DELETE on production database (soar) - Only run these via Diesel migrations
  • ONLY DDL queries (CREATE, ALTER, DROP) via migrations - Never run DDL queries manually on production
  • SELECT queries are allowed on both databases - For investigation and analysis
  • All data modifications must go through migrations - This ensures they're tracked and reproducible
  • Deleting data before adding constraints - You can include DELETE statements in the same migration, before the constraint creation. Statements run in order within the migration's transaction, so the DELETE completes before the new constraint is validated.

METRICS AND MONITORING

CRITICAL - Grafana Dashboard Synchronization:

  • ANY code change that adds, removes, or renames a metric MUST include corresponding Grafana dashboard updates in the same commit/PR. This includes removing code that emitted metrics — search infrastructure/dashboards/ for the metric name and remove/update any panels that reference it.
  • Verify dashboard queries after changes - After updating code, run grep -r "old_metric_name" infrastructure/dashboards/ to find all references and update them
  • Run python3 infrastructure/dashboards/build.py --verify after any dashboard changes to ensure all dashboards build correctly
  • Dashboard locations (generated from infrastructure/dashboards/):
    • grafana-dashboard-run.json - Main run command metrics (core, routing, flights)
    • grafana-dashboard-run-geocoding.json - Pelias geocoding service
    • grafana-dashboard-run-elevation.json - Elevation processing and AGL
    • grafana-dashboard-ingest.json - Data ingestion (ingest command) - OGN/APRS and ADS-B
    • grafana-dashboard-web.json - Web server (web command)
    • grafana-dashboard-nats.json - NATS metrics
    • grafana-dashboard-analytics.json - Analytics API and cache performance
    • grafana-dashboard-coverage.json - Coverage API metrics
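The grep sweep described above can be scripted as a one-liner; `old_metric_name` is a placeholder for whatever metric you renamed or removed, and the `-s` flag just keeps the command quiet if run outside the repo root:

```shell
# List every dashboard file that still references the old metric name;
# the fallback echo fires when there is nothing left to update.
grep -rns "old_metric_name" infrastructure/dashboards/ \
  || echo "no stale references to old_metric_name"
```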

Metric Standards:

  • Naming convention - Use dot notation (e.g., aprs.aircraft.device_upsert_ms)
  • Document metric changes - Note metric name changes in PR description for ops team awareness
  • Remove obsolete dashboard queries - If a metric is removed from code, remove it from panel files in dashboards/panels/ and rebuild

Recent Metric Changes:

  • aprs.aircraft.aircraft_lookup_ms → aprs.aircraft.aircraft_upsert_ms (2025-01-07, PR #312)
    • Updated in code and Grafana dashboard (2025-01-12)
  • REMOVED: aprs.elevation.dropped and nats_publisher.dropped_fixes (2025-01-12)
    • These metrics were removed from dashboard as messages can no longer be dropped

Grafana Alerting:

  • Alert Configuration - Managed via infrastructure as code in infrastructure/grafana-provisioning/alerting/
  • Email Notifications - Alerts sent via SMTP (credentials from /etc/soar/env or /etc/soar/env-staging)
  • Template Files - Use .template suffix for files with credential placeholders (e.g., contact-points.yml.template)
  • Deployment - soar-deploy script automatically processes templates and installs configs
  • Documentation - See infrastructure/GRAFANA-ALERTING.md for complete guide
  • NEVER commit credentials - Template files use placeholders, actual values extracted during deployment
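As a rough illustration of the template mechanism: a `.template` file carries a placeholder where the credential goes, and the deploy step substitutes the real value. The `{{SMTP_PASSWORD}}` placeholder syntax and the sed one-liner below are assumptions for this sketch, not the actual soar-deploy logic:

```shell
# A template file with a credential placeholder (illustrative syntax)
printf 'password: {{SMTP_PASSWORD}}\n' > /tmp/contact-points.yml.template

# At deploy time, substitute the real value extracted from the env file
SMTP_PASSWORD='example-secret'
sed "s/{{SMTP_PASSWORD}}/${SMTP_PASSWORD}/" /tmp/contact-points.yml.template
```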

Dashboard Builder:

Dashboards are built from modular panel files using infrastructure/dashboards/build.py:

# Build all dashboards
python3 infrastructure/dashboards/build.py

# Build specific dashboard
python3 infrastructure/dashboards/build.py run-geocoding

# Extract panels from existing dashboards (one-time setup)
python3 infrastructure/dashboards/build.py --extract

# Verify all dashboards build correctly
python3 infrastructure/dashboards/build.py --verify

Structure:

  • dashboards/panels/{dashboard}/ - Individual panel JSON files
  • dashboards/definitions/{dashboard}.json - Dashboard definitions (panel order, rows, metadata)
  • dashboards/common/ - Shared configs (annotations, templating variables)

Editing dashboards:

  1. Edit individual panel files in dashboards/panels/{dashboard}/
  2. Edit panel order/layout in dashboards/definitions/{dashboard}.json
  3. Run python3 infrastructure/dashboards/build.py to verify your changes build correctly
  4. Commit only the panel/definition source files (built grafana-dashboard-*.json files are generated during deployment)

Frontend Development Standards

Initial Setup (REQUIRED)

Before working with the frontend code, you MUST have Node.js 24+ and Bun installed, and the project dependencies installed:

  1. Install Bun:

    curl -fsSL https://bun.sh/install | bash
    
    # Verify version
    bun --version  # Should be v1.x.x
  2. Install Dependencies:

    cd web
    bun install
  3. Verify Setup:

    # Check formatting and linting pass
    bun run lint
    
    # Check TypeScript compilation
    bun run check

CRITICAL: The web/ directory requires node_modules to be installed before you can run any bun commands like bun run lint, bun run format, or bun run check. If you get "command not found" errors for prettier or oxlint, you need to run bun install first.

Static Site Generation (CRITICAL)

  • NO Server-Side Rendering (SSR) ANYWHERE - The frontend MUST be compiled statically
  • Use export const ssr = false; in +page.ts files to disable SSR for specific pages
  • The compiled static site is embedded in the Rust binary for deployment
  • All pages must work as a pure client-side Single Page Application (SPA)
  • Authentication and route protection must be handled client-side

Svelte 5 Syntax (REQUIRED)

<!-- ✅ CORRECT: Use Svelte 5 event handlers -->
<button onclick={handleClick}>Click me</button>
<input oninput={handleInput} onkeydown={handleKeydown} />

<!-- ❌ WRONG: Don't use Svelte 4 syntax -->
<button on:click={handleClick}>Click me</button>
<input on:input={handleInput} on:keydown={handleKeydown} />

Icons (REQUIRED)

<!-- ✅ CORRECT: Use @lucide/svelte exclusively -->
import { Search, User, Settings, ChevronDown } from '@lucide/svelte';

<!-- ❌ WRONG: Don't use other icon libraries -->

Component Libraries

  • Skeleton UI: Use @skeletonlabs/skeleton-svelte components (Svelte 5 compatible)
  • Tailwind CSS: Use utility-first CSS approach
  • TypeScript: Full type safety required

Skeleton UI Class Names (CRITICAL)

Our version of Skeleton UI uses preset- classes, NOT variant- classes. The variant-* prefix is from an older version of Skeleton and does not exist in our codebase. Always use preset- equivalents (e.g., preset-tonal-surface-500, preset-outlined, preset-filled-primary-500).
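A quick way to catch stragglers is a repo-wide sweep, run from the repo root (the `-s` flag and fallback echo just keep the command quiet when nothing matches):

```shell
# Flag any Svelte component still using the obsolete variant-* prefix
grep -rns "variant-" web/src --include='*.svelte' \
  || echo "no variant- classes found"
```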

Dropdown / Popover Background Styling (CRITICAL)

For dropdown or popover backgrounds, always use explicit Tailwind background classes with both light and dark mode:

<!-- CORRECT: Explicit background classes -->
<div class="bg-surface-50 dark:bg-surface-800 border border-surface-300 dark:border-surface-600 shadow-lg">

<!-- WRONG: variant-* classes don't exist in our Skeleton version, resulting in no background -->
<div class="variant-filled-surface border border-surface-400 shadow-lg">

For Skeleton UI <Combobox.Content> or any [data-popover-content] elements, you must also add CSS overrides:

:global(.my-wrapper [data-popover-content]) {
    background-color: var(--color-surface-50);
    color: var(--color-surface-900);
}
:global(.dark .my-wrapper [data-popover-content]) {
    background-color: var(--color-surface-800);
    color: var(--color-surface-50);
}

See ClubSelector.svelte and AirportSelector.svelte for working examples.

TypeScript Type Generation (CRITICAL)

All data types returned from the Rust backend to the TypeScript frontend MUST be generated using ts-rs.

This ensures type safety across the API boundary and prevents drift between backend and frontend types.

How it works:

  1. Add #[derive(TS)] and #[ts(export, export_to = "../web/src/lib/types/generated/")] to Rust structs
  2. Add export call in src/ts_export.rs
  3. Run cargo test ts_export to generate .ts files in web/src/lib/types/generated/
  4. Import and re-export from web/src/lib/types/index.ts

Example Rust struct:

use ts_rs::TS;

#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export, export_to = "../web/src/lib/types/generated/")]
#[serde(rename_all = "camelCase")]
pub struct MyApiResponse {
    pub id: Uuid,
    pub name: String,
    pub created_at: DateTime<Utc>,
}

Then in src/ts_export.rs:

use crate::my_module::MyApiResponse;

#[test]
fn export_types() {
    MyApiResponse::export().expect("Failed to export MyApiResponse type");
}

NEVER manually write TypeScript interfaces for API response types - always generate them from Rust.

Backend Development Standards

Rust Code Quality (REQUIRED)

  • ALWAYS run cargo fmt after editing Rust files to ensure consistent formatting
  • Pre-commit hooks automatically run cargo fmt - but format manually for immediate feedback
  • Use cargo clippy to catch common issues and improve code quality
  • All Rust code must pass formatting, clippy, and tests before commit

Rust Build Commands (IMPORTANT)

  • NEVER run cargo build --release unless explicitly instructed - release builds are slow and unnecessary for development
  • Use cargo check for quick validation - this checks that code compiles without building binaries
  • Use cargo build (debug mode) only when you need to run the binary locally for testing
  • Pre-commit hooks run cargo clippy automatically - this is more thorough than cargo check and runs on every commit
  • NEVER run cargo test manually during development - pre-commit hooks will run tests automatically when you commit
  • Release builds are only needed for deployment, which is handled by CI/CD

Rust Patterns

// ✅ Use anyhow::Result for error handling
use anyhow::Result;

// ✅ Use tracing for logging
use tracing::{info, warn, error, debug};

// ✅ Proper async function signatures
pub async fn handler(State(state): State<AppState>) -> impl IntoResponse {
    // Handler implementation
}

Database Patterns

// ✅ Use Diesel ORM patterns
use diesel::prelude::*;

// ✅ PostGIS integration
use postgis_diesel::geography::Geography;

Technology-Specific Documentation

Svelte 5 + Skeleton UI

Reference the official Skeleton UI documentation for Svelte 5 components.

Project Architecture

Database Layer (PostgreSQL + PostGIS)

-- ✅ Spatial data patterns
CREATE TABLE airports (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    location GEOGRAPHY(POINT, 4326) NOT NULL,
    elevation_ft INTEGER
);

-- ✅ Indexes for spatial queries
CREATE INDEX CONCURRENTLY idx_airports_location ON airports USING GIST (location);

API Layer (Rust + Axum)

// ✅ Route structure
#[derive(Clone)]
pub struct AppState {
    pub pool: PgPool,
    pub nats_client: Arc<async_nats::Client>,
}

// ✅ Handler patterns
pub async fn get_aircraft(
    State(state): State<AppState>,
    Query(params): Query<DeviceSearchParams>,
) -> Result<impl IntoResponse, ApiError> {
    // Implementation
}

Frontend Layer (SvelteKit + TypeScript)

<!-- ✅ Component structure -->
<script lang="ts">
    import { Search, Filter } from '@lucide/svelte';
    import { Segment } from '@skeletonlabs/skeleton-svelte';

    let searchQuery = '';

    function handleSearch() {
        // Implementation using onclick, not on:click
    }
</script>

<button onclick={handleSearch} class="btn preset-filled-primary-500">
    <Search class="h-4 w-4" />
    Search
</button>

Real-time Features (NATS)

// ✅ NATS message patterns
#[derive(Serialize, Deserialize)]
pub struct LiveFix {
    pub aircraft_id: String,
    pub latitude: f64,
    pub longitude: f64,
    pub timestamp: String,
}

Analytics Layer

SOAR includes a comprehensive analytics system for tracking flight statistics and performance metrics.

Database Tables:

  • flight_analytics_daily - Daily flight aggregations with automatic triggers
  • flight_analytics_hourly - Hourly flight statistics for recent trends
  • Materialized automatically via PostgreSQL triggers on flight insert/update

Backend Components:

  • src/analytics.rs - Core data models for analytics responses
  • src/analytics_repo.rs - Database queries using Diesel ORM
  • src/analytics_cache.rs - 60-second TTL cache using Moka
  • src/actions/analytics.rs - REST API endpoints

API Endpoints (/data/analytics/...):

  • /flights/daily - Daily flight counts and statistics
  • /flights/hourly - Hourly flight trends
  • /flights/duration-distribution - Flight duration buckets
  • /devices/outliers - Devices with anomalous flight patterns (z-score > threshold)
  • /devices/top - Top devices by flight count
  • /clubs/daily - Club-level analytics
  • /airports/activity - Airport usage statistics
  • /summary - Dashboard summary with key metrics

Caching Strategy:

  • All queries cached for 60 seconds to reduce database load
  • Cache hit/miss metrics tracked in analytics.cache.hit and analytics.cache.miss
  • Query latency tracked per endpoint (e.g., analytics.query.daily_flights_ms)

Metrics & Monitoring:

// Analytics API metrics
metrics::counter!("analytics.api.daily_flights.requests").increment(1);
metrics::counter!("analytics.api.errors").increment(1);
metrics::histogram!("analytics.query.daily_flights_ms").record(duration_ms);
metrics::counter!("analytics.cache.hit").increment(1);

// Background task updates these gauges every 60 seconds
metrics::gauge!("analytics.flights.today").set(summary.flights_today as f64);
metrics::gauge!("analytics.flights.last_7d").set(summary.flights_7d as f64);
metrics::gauge!("analytics.aircraft.active_7d").set(summary.active_aircraft_7d as f64);

Grafana Dashboard:

  • Location: infrastructure/grafana-dashboard-analytics.json
  • Tracks: API request rates, cache hit rates, query latency percentiles, error rates
  • Background task runs every 60 seconds to update summary metrics

Adding New Analytics:

  1. Add database query to analytics_repo.rs
  2. Add caching method to analytics_cache.rs with metrics
  3. Add API handler to actions/analytics.rs with request/error metrics
  4. Register route in web.rs
  5. Add metrics to metrics.rs::initialize_analytics_metrics()
  6. Update Grafana dashboard with new metric queries

Code Quality Standards

Pre-commit Hooks (REQUIRED)

All changes must pass these checks locally:

  1. Rust Quality:

    • cargo fmt --check (formatting)
    • cargo clippy --all-targets --all-features -- -D warnings (linting)
    • cargo test --verbose (unit tests)
    • cargo audit (security audit)
  2. Frontend Quality:

    • bun run format (Prettier - auto-fix formatting)
    • bun run lint (oxlint + Prettier check)
    • bun run check (TypeScript validation)
    • bun run test (Playwright E2E tests)
    • bun run build (build verification)

    Note: If formatting issues are found by bun run lint, run bun run format to auto-fix them.

  3. File Quality:

    • No trailing whitespace
    • Proper file endings
    • Valid YAML/JSON/TOML syntax
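A minimal sketch of the trailing-whitespace check, using a throwaway file rather than the real staged files the hooks scan:

```shell
# Create a sample file whose second line ends in a space, then flag it
printf 'clean line\ndirty line \n' > /tmp/sample.txt
grep -n ' $' /tmp/sample.txt && echo "trailing whitespace found"
```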

MCP Server Setup (Optional - For Claude Code Database Access)

Claude Code can connect directly to the PostgreSQL database using the pgEdge Postgres MCP Server, enabling natural language database queries and schema introspection.

Installation:

  1. Clone and build the MCP server:

    cd /tmp
    git clone https://github.com/pgEdge/pgedge-postgres-mcp.git
    cd pgedge-postgres-mcp
    go build -v -o bin/pgedge-postgres-mcp ./cmd/pgedge-pg-mcp-svr
    cp bin/pgedge-postgres-mcp ~/.local/bin/
  2. Configure for your project:

    # Copy the example configuration
    cp .mcp.json.example .mcp.json
    
    # Edit .mcp.json with your settings
    # Update the "command" path to where you installed the binary
    # Update "PGUSER" to your PostgreSQL username
  3. Restart Claude Code to load the MCP server

What you get:

  • 🔍 Schema introspection - Query tables, columns, indexes, constraints
  • 📊 Database queries - Execute SQL queries (read-only for safety)
  • 📈 Performance metrics - Access pg_stat_statements and other stats
  • 🧠 Natural language queries - Ask questions about the database in plain English

Security Notes:

  • The MCP server runs in read-only mode by default
  • .mcp.json is gitignored to prevent committing local paths and credentials
  • Use .pgpass file for password management instead of storing in .mcp.json
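For reference, a .pgpass entry uses the `host:port:database:user:password` format, and psql ignores the file unless it is mode 0600. The values below are placeholders, written to a scratch path for illustration:

```shell
# Example .pgpass line (real file lives at ~/.pgpass)
printf 'localhost:5432:soar_dev:myuser:secret\n' > /tmp/pgpass-example
chmod 600 /tmp/pgpass-example   # required, or psql silently skips the file
cat /tmp/pgpass-example
```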

Documentation: https://www.pgedge.com/blog/introducing-the-pgedge-postgres-mcp-server

Common Patterns

Error Handling

// ✅ Rust error handling
use anyhow::{Context, Result};

pub async fn process_data() -> Result<ProcessedData> {
    let data = fetch_data()
        .await
        .context("Failed to fetch data")?;

    Ok(process(data))
}

// ✅ TypeScript error handling
try {
    const response = await serverCall<AircraftResponse>('/devices');
    devices = response.devices || [];
} catch (err) {
    const errorMessage = err instanceof Error ? err.message : 'Unknown error';
    error = `Failed to load devices: ${errorMessage}`;
}

State Management

<!-- ✅ Svelte stores -->
<script lang="ts">
    import { writable } from 'svelte/store';

    const aircraftStore = writable<Aircraft[]>([]);

    // Use $aircraftStore for reactive access
</script>

API Integration

// ✅ Server communication
import { serverCall } from '$lib/api/server';

const response = await serverCall<AircraftListResponse>('/devices', {
    method: 'GET',
    params: { limit: 50 }
});

Security Requirements

  1. Input Validation: All user inputs must be validated
  2. SQL Injection Prevention: Use Diesel ORM query builder
  3. XSS Prevention: Proper HTML escaping in Svelte
  4. Authentication: JWT tokens for API access
  5. HTTPS Only: All production traffic encrypted

Testing Requirements

Rust Tests

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_aircraft_search() {
        // Test implementation
    }
}

Frontend E2E Tests (Playwright)

  • Framework: Playwright v1.56+
  • Test Directory: web/e2e/
  • Documentation: See web/e2e/README.md for comprehensive testing guide
  • Running tests: cd web && bun test

Performance Guidelines

  1. Database: Use proper indexes, limit query results
  2. Frontend: Lazy loading, virtual scrolling for large lists
  3. API: Pagination for large datasets
  4. Real-time: Efficient NATS subscription management

Remember: This project maintains high code quality standards. All changes must pass pre-commit hooks and CI/CD pipeline. When in doubt, check existing patterns and follow established conventions.

  • The rust backend for this project is in src/ and the frontend is a Svelte 5 project in web/
  • You should absolutely never use --no-verify
  • Use a timeout of 10 minutes when running cargo build, cargo clippy, or cargo test

Branch Protection Rules

CRITICAL: The main branch is protected and does not allow direct commits.

Always Use Feature Branches

  • NEVER commit directly to main - Always create a feature/topic branch first
  • Use descriptive branch names:
    • feature/description for new features
    • fix/description for bug fixes
    • refactor/description for code refactoring
    • docs/description for documentation changes

Proper Development Workflow

  1. Create a topic branch: git checkout -b feature/my-feature
  2. Make changes and commit: Only stage specific files you modified
  3. Push to remote: git push origin feature/my-feature
  4. Create Pull Request: Use GitHub UI to create PR for review

If You Accidentally Commit to Main

If you accidentally commit to main, follow these steps to fix it:

  1. git reset --hard HEAD~N (where N is the number of commits to undo)
  2. git checkout -b topic/branch-name commit-hash (create branch for each commit)
  3. git checkout main (return to main)

Example:

# You accidentally made 3 commits to main
git log --oneline -3  # See the commits
# c3c3c3c third commit
# b2b2b2b second commit
# a1a1a1a first commit

# Undo the commits on main
git reset --hard HEAD~3

# Create branches for each
git checkout -b feature/third-feature c3c3c3c
git checkout -b feature/second-feature b2b2b2b
git checkout -b fix/first-fix a1a1a1a

# Return to main
git checkout main
  • Any time you enter a filename in the shell, use quotes. We are using zsh, and shell expansion mangles our Svelte route paths if you do not, because "[]" has a special meaning.
  • Whenever you commit, you should also push with -u to set the upstream branch.
  • When opening a PR, do not set it to squash commits. Set it to create a merge commit.
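A quick demonstration of why the quotes matter: `[]` is a glob pattern in zsh, so an unquoted route path fails to match the literal directory. The paths below are illustrative:

```shell
# Svelte dynamic routes put brackets in directory names
mkdir -p "web/src/routes/aircraft/[id]"
echo 'ok' > "web/src/routes/aircraft/[id]/+page.svelte"

# Quoted access works; unquoted, zsh would try to glob-expand [id]
cat "web/src/routes/aircraft/[id]/+page.svelte"
```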

Release Process

IMPORTANT: Version numbers are automatically derived from git tags. Do NOT manually edit version numbers in Cargo.toml or package.json.

How Versioning Works

  • Rust: Version is derived from git tags at build time using the vergen crate
  • SvelteKit: Version is generated from git tags by web/scripts/generate-version.js during prebuild
  • Source of Truth: Git tags (format: v0.1.5)
  • No Version Commits: Version files (Cargo.toml, package.json) contain placeholder values only

Creating a Release

Use the simplified release script:

# Semantic version bump (recommended)
./scripts/create-release patch    # 0.1.4 → 0.1.5
./scripts/create-release minor    # 0.1.4 → 0.2.0
./scripts/create-release major    # 0.1.4 → 1.0.0

# Explicit version (if needed)
./scripts/create-release v0.1.5

# Create as draft (for review before publishing)
./scripts/create-release patch --draft

# Create with custom release notes
./scripts/create-release patch --notes "Custom release notes here"

What happens automatically:

  1. ✅ Script creates GitHub Release with tag v0.1.5
  2. ✅ CI builds x64 static binary (version derived from tag) - ARM64 available via manual workflow
  3. ✅ CI runs all tests and security audits
  4. ✅ CI deploys to production automatically (via ci.yml deploy-production job)

Manual Release via GitHub CLI

# Alternative: Use gh CLI directly
gh release create v0.1.5 --generate-notes

Version Format

  • Tagged release: v0.1.4
  • Development build: v0.1.4-2-ge930185 (2 commits after v0.1.4)
  • Dirty working tree: v0.1.4-dirty
  • No git repo: 0.0.0-dev

Checking Current Version

# Binary version (shows git-derived version)
./target/release/soar --version

# Git describe (shows current version from git)
git describe --tags --always --dirty

Old Release Process (Deprecated)

The old ./scripts/release script has been archived to ./scripts/archive/release-old. It is no longer needed because:

  • ❌ Required creating release branch → PR → auto-merge → tag push
  • ❌ Manually edited version files and committed changes
  • ❌ Complex multi-step process with potential for errors

The new process eliminates all of this complexity.