This document provides essential guidance for AI assistants working on the SOAR (Soaring Observation And Records) project.
For GitHub Copilot users, see these additional resources:
- .github/copilot-instructions.md - Project-specific coding patterns and conventions optimized for Copilot
- .github/copilot-setup-steps.yml - Complete development environment setup guide
- .github/COPILOT-RECOMMENDATIONS.md - Advanced tips for maximizing Copilot effectiveness
These files complement this document and provide Copilot-optimized guidance.
SOAR is a comprehensive aircraft tracking and club management system built with:
- Backend: Rust with Axum web framework, PostgreSQL with PostGIS
- Frontend: SvelteKit with TypeScript, Tailwind CSS, Skeleton UI components
- Real-time: NATS messaging for live aircraft position updates
- Data Sources: APRS-IS integration, FAA aircraft registry, airport databases
- KEEP DOCUMENTATION UP TO DATE - Documentation (including README.md, CLAUDE.md, and other docs) must be updated when features change
- When renaming services, commands, or changing architecture, update all relevant documentation
- Documentation changes should be part of the same PR that changes the implementation
- Outdated documentation is a bug - treat it with the same priority as code bugs
- NEVER commit directly to main - The main branch is protected. ALWAYS create a feature/topic branch first
- NEVER use git commit --no-verify - All commits must pass pre-commit hooks
- NEVER push to main - Pushing to feature branches is okay, but never push directly to main
- NEVER skip CI checks - Local development must match GitHub Actions pipeline
- ASK BEFORE removing large amounts of working code - Get confirmation before major deletions
- AVOID duplicate code - Check for existing implementations before writing new code
- Pre-commit hooks run: cargo fmt, cargo clippy, cargo test, bun run lint, bun run check, bun run test
- NEVER add Co-Authored-By lines - Do not include Claude Code attribution in commits
- AVOID raw SQL in Diesel - Only use raw SQL if absolutely necessary, and ask first before using it
- Always prefer Diesel's query builder and type-safe methods over raw SQL
- CREATE INDEX CONCURRENTLY requires a metadata.toml - Diesel migrations run in transactions by default, which don't support CONCURRENTLY. To use CONCURRENTLY, add a metadata.toml file to the migration directory with run_in_transaction = false. Example:

  # migrations/2026-01-30-123456-0000_add_index/metadata.toml
  run_in_transaction = false
- You are running on the staging server. The staging server is named "supervillain". You can always run commands that do not modify anything. Ask before running commands that modify something.
- You have access to the production server by running "ssh glider.flights". The user you are running as already has "sudo" access. Ask before connecting or using sudo unless I give you permission in advance.
- NEVER attempt to deploy or restart services without explicit instructions - Only build/check code, do not deploy
- Use cargo check for validation - Do not run deployment scripts or restart systemd services unless instructed
- Ask before any service modifications - This includes systemctl restart, deployment scripts, or copying binaries to production locations
- CRITICAL: Updating running binaries - You CANNOT copy over a running binary (the copy fails with "Text file busy"). Always delete the binary first, then copy; removing the file only unlinks it, so the running process keeps the old inode while the new file takes its place:

  # WRONG: This will fail if the service is running
  sudo cp target/debug/soar /usr/local/bin/soar-staging

  # CORRECT: Delete first, then copy
  sudo rm -f /usr/local/bin/soar-staging
  sudo cp target/debug/soar /usr/local/bin/soar-staging
When modifying configuration files on the local system (e.g., /etc/tempo/config.yml, /etc/prometheus/prometheus.yml), you MUST also update the corresponding config file in the infrastructure/ directory in this repository. Both copies must be kept identical.
Config files and their repo locations:
- /etc/tempo/config.yml → infrastructure/tempo-config.yml
- /etc/loki/config.yml → infrastructure/loki-config.yml
- /etc/prometheus/prometheus.yml → infrastructure/prometheus.yml
- /etc/alloy/config.alloy → infrastructure/alloy-config.alloy.template (template - processed by soar-deploy with git commit for profiling source links)
- /etc/pyroscope/config.yml → infrastructure/pyroscope-config.yml
- /etc/netdata/netdata.conf → infrastructure/netdata-config.conf
- /etc/grafana/provisioning/datasources/soar-postgres.yaml → infrastructure/grafana-provisioning/datasources/soar-postgres.yaml.template (template - processed by soar-deploy)
- /etc/soar/ingest.toml → infrastructure/ingest.toml
Process for config changes:
1. Edit the config file in infrastructure/ first
2. For manual deployment: Copy to the system location: sudo cp infrastructure/<file> /etc/<service>/<file>
3. Restart the service if needed: sudo systemctl restart <service>
4. Commit the infrastructure/ change to git
Auto-deployed by soar-deploy: tempo-config.yml, loki-config.yml, pyroscope-config.yml, alloy-config.alloy, prometheus.yml, grafana-provisioning/ (including datasource templates)
Managed by scripts/setup-pgdog: /etc/pgdog/ (PgDog connection pooler config — generated from DATABASE_URL, not synced from repo)
Manual deployment required: netdata-config.conf
This ensures config changes are tracked in version control and can be reproduced across environments.
- Development Database: soar_dev - This is where you work
- Staging Database: soar_staging - This should be queried before the production database; its schema will be more up-to-date and it should contain approximately the same data. It is read-only for development purposes.
- Production Database: soar - This is read-only for development purposes
- NEVER run UPDATE, INSERT, or DELETE on the production database (soar) - Only run these via Diesel migrations
- ONLY run DDL queries (CREATE, ALTER, DROP) via migrations - Never run DDL queries manually on production
- SELECT queries are allowed on both databases - For investigation and analysis
- All data modifications must go through migrations - This ensures they're tracked and reproducible
- Deleting data before adding constraints - You can include DELETE statements in the same migration before constraint creation. The constraint validates against the final state of the transaction, so the DELETE will complete first.
CRITICAL - Grafana Dashboard Synchronization:
- ANY code change that adds, removes, or renames a metric MUST include corresponding Grafana dashboard updates in the same commit/PR. This includes removing code that emitted metrics — search infrastructure/dashboards/ for the metric name and remove/update any panels that reference it.
- Verify dashboard queries after changes - After updating code, run grep -r "old_metric_name" infrastructure/dashboards/ to find all references and update them
- Run python3 infrastructure/dashboards/build.py --verify after any dashboard changes to ensure all dashboards build correctly
- Dashboard locations (generated from infrastructure/dashboards/):
  - grafana-dashboard-run.json - Main run command metrics (core, routing, flights)
  - grafana-dashboard-run-geocoding.json - Pelias geocoding service
  - grafana-dashboard-run-elevation.json - Elevation processing and AGL
  - grafana-dashboard-ingest.json - Data ingestion (ingest command) - OGN/APRS and ADS-B
  - grafana-dashboard-web.json - Web server (web command)
  - grafana-dashboard-nats.json - NATS metrics
  - grafana-dashboard-analytics.json - Analytics API and cache performance
  - grafana-dashboard-coverage.json - Coverage API metrics
Metric Standards:
- Naming convention - Use dot notation (e.g., aprs.aircraft.device_upsert_ms)
- Document metric changes - Note metric name changes in PR description for ops team awareness
- Remove obsolete dashboard queries - If a metric is removed from code, remove it from panel files in dashboards/panels/ and rebuild
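The dot-notation convention can be checked mechanically. A minimal std-only sketch, assuming a hypothetical `is_valid_metric_name` helper that is not part of the codebase:

```rust
/// Hypothetical helper: checks that a metric name follows the
/// project's dot-notation convention (lowercase segments, digits
/// and underscores allowed, separated by single dots).
fn is_valid_metric_name(name: &str) -> bool {
    !name.is_empty()
        && name.split('.').all(|seg| {
            !seg.is_empty()
                && seg
                    .chars()
                    .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '_')
        })
}

fn main() {
    // Names that follow the convention used in this document
    assert!(is_valid_metric_name("aprs.aircraft.device_upsert_ms"));
    assert!(is_valid_metric_name("analytics.cache.hit"));
    // Rejected: empty segment, uppercase letters
    assert!(!is_valid_metric_name("aprs..aircraft"));
    assert!(!is_valid_metric_name("Aprs.Aircraft"));
    println!("ok");
}
```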
Recent Metric Changes:
- aprs.aircraft.aircraft_lookup_ms → aprs.aircraft.aircraft_upsert_ms (2025-01-07, PR #312) - Updated in code and Grafana dashboard (2025-01-12)
- REMOVED: aprs.elevation.dropped and nats_publisher.dropped_fixes (2025-01-12) - These metrics were removed from the dashboard as messages can no longer be dropped
Grafana Alerting:
- Alert Configuration - Managed via infrastructure as code in infrastructure/grafana-provisioning/alerting/
- Email Notifications - Alerts sent via SMTP (credentials from /etc/soar/env or /etc/soar/env-staging)
- Template Files - Use .template suffix for files with credential placeholders (e.g., contact-points.yml.template)
- Deployment - soar-deploy script automatically processes templates and installs configs
- Documentation - See infrastructure/GRAFANA-ALERTING.md for complete guide
- NEVER commit credentials - Template files use placeholders, actual values extracted during deployment
Dashboard Builder:
Dashboards are built from modular panel files using infrastructure/dashboards/build.py:
# Build all dashboards
python3 infrastructure/dashboards/build.py
# Build specific dashboard
python3 infrastructure/dashboards/build.py run-geocoding
# Extract panels from existing dashboards (one-time setup)
python3 infrastructure/dashboards/build.py --extract
# Verify all dashboards build correctly
python3 infrastructure/dashboards/build.py --verify

Structure:
- dashboards/panels/{dashboard}/ - Individual panel JSON files
- dashboards/definitions/{dashboard}.json - Dashboard definitions (panel order, rows, metadata)
- dashboards/common/ - Shared configs (annotations, templating variables)
Editing dashboards:
1. Edit individual panel files in dashboards/panels/{dashboard}/
2. Edit panel order/layout in dashboards/definitions/{dashboard}.json
3. Run python3 infrastructure/dashboards/build.py to verify your changes build correctly
4. Commit only the panel/definition source files (built grafana-dashboard-*.json files are generated during deployment)
Before working with the frontend code, you MUST have Node.js 24+ and the project dependencies installed:
1. Install Bun:

   curl -fsSL https://bun.sh/install | bash
   # Verify version
   bun --version  # Should be v1.x.x

2. Install Dependencies:

   cd web
   bun install

3. Verify Setup:

   # Check formatting and linting pass
   bun run lint
   # Check TypeScript compilation
   bun run check
CRITICAL: The web/ directory requires node_modules to be installed before you can run any bun commands like bun run lint, bun run format, or bun run check. If you get "command not found" errors for prettier or oxlint, you need to run bun install first.
- NO Server-Side Rendering (SSR) ANYWHERE - The frontend MUST be compiled statically
- Use export const ssr = false; in +page.ts files to disable SSR for specific pages
- The compiled static site is embedded in the Rust binary for deployment
- All pages must work as a pure client-side Single Page Application (SPA)
- Authentication and route protection must be handled client-side
<!-- ✅ CORRECT: Use Svelte 5 event handlers -->
<button onclick={handleClick}>Click me</button>
<input oninput={handleInput} onkeydown={handleKeydown} />
<!-- ❌ WRONG: Don't use Svelte 4 syntax -->
<button on:click={handleClick}>Click me</button>
<input on:input={handleInput} on:keydown={handleKeydown} />

<!-- ✅ CORRECT: Use @lucide/svelte exclusively -->
import { Search, User, Settings, ChevronDown } from '@lucide/svelte';
<!-- ❌ WRONG: Don't use other icon libraries -->

- Skeleton UI: Use @skeletonlabs/skeleton-svelte components (Svelte 5 compatible)
- Tailwind CSS: Use utility-first CSS approach
- TypeScript: Full type safety required
Our version of Skeleton UI uses preset- classes, NOT variant- classes. The variant-* prefix is from an older version of Skeleton and does not exist in our codebase. Always use preset- equivalents (e.g., preset-tonal-surface-500, preset-outlined, preset-filled-primary-500).
For dropdown or popover backgrounds, always use explicit Tailwind background classes with both light and dark mode:
<!-- CORRECT: Explicit background classes -->
<div class="bg-surface-50 dark:bg-surface-800 border border-surface-300 dark:border-surface-600 shadow-lg">
<!-- WRONG: variant-* classes don't exist in our Skeleton version, resulting in no background -->
<div class="variant-filled-surface border border-surface-400 shadow-lg">

For Skeleton UI <Combobox.Content> or any [data-popover-content] elements, you must also add CSS overrides:
:global(.my-wrapper [data-popover-content]) {
background-color: var(--color-surface-50);
color: var(--color-surface-900);
}
:global(.dark .my-wrapper [data-popover-content]) {
background-color: var(--color-surface-800);
color: var(--color-surface-50);
}

See ClubSelector.svelte and AirportSelector.svelte for working examples.
All data types returned from the Rust backend to the TypeScript frontend MUST be generated using ts-rs.
This ensures type safety across the API boundary and prevents drift between backend and frontend types.
How it works:
1. Add #[derive(TS)] and #[ts(export, export_to = "../web/src/lib/types/generated/")] to Rust structs
2. Add an export call in src/ts_export.rs
3. Run cargo test ts_export to generate .ts files in web/src/lib/types/generated/
4. Import and re-export from web/src/lib/types/index.ts
Example Rust struct:
use ts_rs::TS;
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export, export_to = "../web/src/lib/types/generated/")]
#[serde(rename_all = "camelCase")]
pub struct MyApiResponse {
pub id: Uuid,
pub name: String,
pub created_at: DateTime<Utc>,
}

Then in src/ts_export.rs:
use crate::my_module::MyApiResponse;
#[test]
fn export_types() {
MyApiResponse::export().expect("Failed to export MyApiResponse type");
}

NEVER manually write TypeScript interfaces for API response types - always generate them from Rust.
- ALWAYS run cargo fmt after editing Rust files to ensure consistent formatting
- Pre-commit hooks automatically run cargo fmt - but format manually for immediate feedback
- Use cargo clippy to catch common issues and improve code quality
- All Rust code must pass formatting, clippy, and tests before commit
- NEVER run cargo build --release unless explicitly instructed - release builds are slow and unnecessary for development
- Use cargo check for quick validation - this checks that code compiles without building binaries
- Use cargo build (debug mode) only when you need to run the binary locally for testing
- Pre-commit hooks run cargo clippy automatically - this is more thorough than cargo check and runs on every commit
- NEVER run cargo test manually during development - pre-commit hooks will run tests automatically when you commit
- Release builds are only needed for deployment, which is handled by CI/CD
// ✅ Use anyhow::Result for error handling
use anyhow::Result;
// ✅ Use tracing for logging
use tracing::{info, warn, error, debug};
// ✅ Proper async function signatures
pub async fn handler(State(state): State<AppState>) -> impl IntoResponse {
// Handler implementation
}

// ✅ Use Diesel ORM patterns
use diesel::prelude::*;
// ✅ PostGIS integration
use postgis_diesel::geography::Geography;

Reference the official Skeleton UI documentation for Svelte 5 components:
- Skeleton UI Svelte 5 Guide: https://www.skeleton.dev/llms-svelte.txt
-- ✅ Spatial data patterns
CREATE TABLE airports (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
location GEOGRAPHY(POINT, 4326) NOT NULL,
elevation_ft INTEGER
);
-- ✅ Indexes for spatial queries
CREATE INDEX CONCURRENTLY idx_airports_location ON airports USING GIST (location);

// ✅ Route structure
#[derive(Clone)]
pub struct AppState {
pub pool: PgPool,
pub nats_client: Arc<async_nats::Client>,
}
// ✅ Handler patterns
pub async fn get_aircraft(
State(state): State<AppState>,
Query(params): Query<DeviceSearchParams>,
) -> Result<impl IntoResponse, ApiError> {
// Implementation
}

<!-- ✅ Component structure -->
<script lang="ts">
import { Search, Filter } from '@lucide/svelte';
import { Segment } from '@skeletonlabs/skeleton-svelte';
let searchQuery = '';
function handleSearch() {
// Implementation using onclick, not on:click
}
</script>
<button onclick={handleSearch} class="btn preset-filled-primary-500">
<Search class="h-4 w-4" />
Search
</button>

// ✅ NATS message patterns
#[derive(Serialize, Deserialize)]
pub struct LiveFix {
pub aircraft_id: String,
pub latitude: f64,
pub longitude: f64,
pub timestamp: String,
}

SOAR includes a comprehensive analytics system for tracking flight statistics and performance metrics.
Database Tables:
- flight_analytics_daily - Daily flight aggregations with automatic triggers
- flight_analytics_hourly - Hourly flight statistics for recent trends
- Materialized automatically via PostgreSQL triggers on flight insert/update
Backend Components:
- src/analytics.rs - Core data models for analytics responses
- src/analytics_repo.rs - Database queries using Diesel ORM
- src/analytics_cache.rs - 60-second TTL cache using Moka
- src/actions/analytics.rs - REST API endpoints
API Endpoints (/data/analytics/...):
- /flights/daily - Daily flight counts and statistics
- /flights/hourly - Hourly flight trends
- /flights/duration-distribution - Flight duration buckets
- /devices/outliers - Devices with anomalous flight patterns (z-score > threshold)
- /devices/top - Top devices by flight count
- /clubs/daily - Club-level analytics
- /airports/activity - Airport usage statistics
- /summary - Dashboard summary with key metrics
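The outlier endpoint is described as z-score based. A std-only sketch of that idea (the real query lives in analytics_repo.rs; the 1.5 threshold here is illustrative, not the production setting):

```rust
// Std-only sketch of the z-score check behind /devices/outliers.
// z = (x - mean) / stddev; values far from the mean get large |z|.
fn z_scores(counts: &[f64]) -> Vec<f64> {
    let n = counts.len() as f64;
    let mean = counts.iter().sum::<f64>() / n;
    let var = counts.iter().map(|c| (c - mean).powi(2)).sum::<f64>() / n;
    let std_dev = var.sqrt();
    counts
        .iter()
        .map(|c| if std_dev == 0.0 { 0.0 } else { (c - mean) / std_dev })
        .collect()
}

fn main() {
    // Flight counts per device; the last one is anomalous.
    let counts = [4.0, 5.0, 6.0, 5.0, 40.0];
    let scores = z_scores(&counts);
    let outliers: Vec<usize> = scores
        .iter()
        .enumerate()
        .filter(|(_, z)| z.abs() > 1.5) // illustrative threshold
        .map(|(i, _)| i)
        .collect();
    assert_eq!(outliers, vec![4]);
    println!("outliers: {:?}", outliers);
}
```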
Caching Strategy:
- All queries cached for 60 seconds to reduce database load
- Cache hit/miss metrics tracked in analytics.cache.hit and analytics.cache.miss
- Query latency tracked per endpoint (e.g., analytics.query.daily_flights_ms)
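The TTL behavior can be sketched without the Moka dependency. This is an illustrative std-only stand-in for what analytics_cache.rs does (the real cache uses Moka with a 60-second TTL; the TTL here is shortened to milliseconds so expiry is observable):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Std-only sketch of a TTL cache: entries expire after `ttl`.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, u64)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn insert(&mut self, key: &str, value: u64) {
        self.entries.insert(key.to_string(), (Instant::now(), value));
    }

    // Returns the value only while the entry is younger than the TTL.
    fn get(&self, key: &str) -> Option<u64> {
        self.entries
            .get(key)
            .filter(|(stored, _)| stored.elapsed() < self.ttl)
            .map(|(_, v)| *v)
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_millis(50));
    cache.insert("flights_today", 123);
    assert_eq!(cache.get("flights_today"), Some(123)); // fresh: cache hit
    std::thread::sleep(Duration::from_millis(60));
    assert_eq!(cache.get("flights_today"), None); // expired: cache miss
    println!("ok");
}
```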
Metrics & Monitoring:
// Analytics API metrics
metrics::counter!("analytics.api.daily_flights.requests").increment(1);
metrics::counter!("analytics.api.errors").increment(1);
metrics::histogram!("analytics.query.daily_flights_ms").record(duration_ms);
metrics::counter!("analytics.cache.hit").increment(1);
// Background task updates these gauges every 60 seconds
metrics::gauge!("analytics.flights.today").set(summary.flights_today as f64);
metrics::gauge!("analytics.flights.last_7d").set(summary.flights_7d as f64);
metrics::gauge!("analytics.aircraft.active_7d").set(summary.active_aircraft_7d as f64);

Grafana Dashboard:
- Location: infrastructure/grafana-dashboard-analytics.json
- Tracks: API request rates, cache hit rates, query latency percentiles, error rates
- Background task runs every 60 seconds to update summary metrics
Adding New Analytics:
1. Add database query to analytics_repo.rs
2. Add caching method to analytics_cache.rs with metrics
3. Add API handler to actions/analytics.rs with request/error metrics
4. Register route in web.rs
5. Add metrics to metrics.rs::initialize_analytics_metrics()
6. Update Grafana dashboard with new metric queries
All changes must pass these checks locally:
1. Rust Quality:
   - cargo fmt --check (formatting)
   - cargo clippy --all-targets --all-features -- -D warnings (linting)
   - cargo test --verbose (unit tests)
   - cargo audit (security audit)

2. Frontend Quality:
   - bun run format (Prettier - auto-fix formatting)
   - bun run lint (oxlint + Prettier check)
   - bun run check (TypeScript validation)
   - bun run test (Playwright E2E tests)
   - bun run build (build verification)

   Note: If formatting issues are found by bun run lint, run bun run format to auto-fix them.

3. File Quality:
   - No trailing whitespace
   - Proper file endings
   - Valid YAML/JSON/TOML syntax
Claude Code can connect directly to the PostgreSQL database using the pgEdge Postgres MCP Server, enabling natural language database queries and schema introspection.
Installation:
1. Clone and build the MCP server:

   cd /tmp
   git clone https://github.com/pgEdge/pgedge-postgres-mcp.git
   cd pgedge-postgres-mcp
   go build -v -o bin/pgedge-postgres-mcp ./cmd/pgedge-pg-mcp-svr
   cp bin/pgedge-postgres-mcp ~/.local/bin/

2. Configure for your project:

   # Copy the example configuration
   cp .mcp.json.example .mcp.json
   # Edit .mcp.json with your settings
   # Update the "command" path to where you installed the binary
   # Update "PGUSER" to your PostgreSQL username

3. Restart Claude Code to load the MCP server
What you get:
- 🔍 Schema introspection - Query tables, columns, indexes, constraints
- 📊 Database queries - Execute SQL queries (read-only for safety)
- 📈 Performance metrics - Access pg_stat_statements and other stats
- 🧠 Natural language queries - Ask questions about the database in plain English
Security Notes:
- The MCP server runs in read-only mode by default
- .mcp.json is gitignored to prevent committing local paths and credentials
- Use a .pgpass file for password management instead of storing passwords in .mcp.json
Documentation: https://www.pgedge.com/blog/introducing-the-pgedge-postgres-mcp-server
// ✅ Rust error handling
use anyhow::{Context, Result};
pub async fn process_data() -> Result<ProcessedData> {
let data = fetch_data()
.await
.context("Failed to fetch data")?;
Ok(process(data))
}

// ✅ TypeScript error handling
try {
const response = await serverCall<AircraftResponse>('/devices');
devices = response.devices || [];
} catch (err) {
const errorMessage = err instanceof Error ? err.message : 'Unknown error';
error = `Failed to load devices: ${errorMessage}`;
}

<!-- ✅ Svelte stores -->
<script lang="ts">
import { writable } from 'svelte/store';
const aircraftStore = writable<Aircraft[]>([]);
// Use $aircraftStore for reactive access
</script>

// ✅ Server communication
import { serverCall } from '$lib/api/server';
const response = await serverCall<AircraftListResponse>('/devices', {
method: 'GET',
params: { limit: 50 }
});

- Input Validation: All user inputs must be validated
- SQL Injection Prevention: Use Diesel ORM query builder
- XSS Prevention: Proper HTML escaping in Svelte
- Authentication: JWT tokens for API access
- HTTPS Only: All production traffic encrypted
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_aircraft_search() {
// Test implementation
}
}

- Framework: Playwright v1.56+
- Test Directory: web/e2e/
- Documentation: See web/e2e/README.md for comprehensive testing guide
- Running tests: cd web && bun test
- Database: Use proper indexes, limit query results
- Frontend: Lazy loading, virtual scrolling for large lists
- API: Pagination for large datasets
- Real-time: Efficient NATS subscription management
Remember: This project maintains high code quality standards. All changes must pass pre-commit hooks and CI/CD pipeline. When in doubt, check existing patterns and follow established conventions.
- The Rust backend for this project is in src/ and the frontend is a Svelte 5 project in web/
- You should absolutely never use --no-verify
- Use a timeout of ten minutes when running cargo build, cargo clippy, or cargo test
CRITICAL: The main branch is protected and does not allow direct commits.
- NEVER commit directly to main - Always create a feature/topic branch first
- Use descriptive branch names:
  - feature/description for new features
  - fix/description for bug fixes
  - refactor/description for code refactoring
  - docs/description for documentation changes
1. Create a topic branch: git checkout -b feature/my-feature
2. Make changes and commit: Only stage specific files you modified
3. Push to remote: git push origin feature/my-feature
4. Create Pull Request: Use GitHub UI to create PR for review
If you accidentally commit to main, follow these steps to fix it:
1. git reset --hard HEAD~N (where N is the number of commits to undo)
2. git checkout -b topic/branch-name commit-hash (create a branch for each commit)
3. git checkout main (return to main)
Example:
# You accidentally made 3 commits to main
git log --oneline -3 # See the commits
# c3c3c3c third commit
# b2b2b2b second commit
# a1a1a1a first commit
# Undo the commits on main
git reset --hard HEAD~3
# Create branches for each
git checkout -b feature/third-feature c3c3c3c
git checkout -b feature/second-feature b2b2b2b
git checkout -b fix/first-fix a1a1a1a
# Return to main
git checkout main

- Any time you enter a filename in the shell, use quotes. We are using zsh, and shell expansion treats our Svelte pages incorrectly if you do not, because "[]" has a special meaning.
- Whenever you commit, you should also push with -u to set the upstream branch.
- When opening a PR, do not set it to squash commits. Set it to create a merge commit.
IMPORTANT: Version numbers are automatically derived from git tags. Do NOT manually edit version numbers in Cargo.toml or package.json.
- Rust: Version is derived from git tags at build time using the vergen crate
- SvelteKit: Version is generated from git tags by web/scripts/generate-version.js during prebuild
- Source of Truth: Git tags (format: v0.1.5)
- No Version Commits: Version files (Cargo.toml, package.json) contain placeholder values only
Use the simplified release script:
# Semantic version bump (recommended)
./scripts/create-release patch # 0.1.4 → 0.1.5
./scripts/create-release minor # 0.1.4 → 0.2.0
./scripts/create-release major # 0.1.4 → 1.0.0
# Explicit version (if needed)
./scripts/create-release v0.1.5
# Create as draft (for review before publishing)
./scripts/create-release patch --draft
# Create with custom release notes
./scripts/create-release patch --notes "Custom release notes here"

What happens automatically:
- ✅ Script creates GitHub Release with tag v0.1.5
- ✅ CI builds x64 static binary (version derived from tag) - ARM64 available via manual workflow
- ✅ CI runs all tests and security audits
- ✅ CI deploys to production automatically (via ci.yml deploy-production job)
# Alternative: Use gh CLI directly
gh release create v0.1.5 --generate-notes

- Tagged release: v0.1.4
- Development build: v0.1.4-2-ge930185 (2 commits after v0.1.4)
- Dirty working tree: v0.1.4-dirty
- No git repo: 0.0.0-dev
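The version formats above follow directly from git describe output. A std-only sketch of the mapping; the helper functions are hypothetical, and the real derivation is done by vergen (Rust) and web/scripts/generate-version.js (frontend):

```rust
// Sketch of how a git-describe string maps to the version formats
// listed above. Illustrative parsing only, not the project's code.
fn base_version(describe: &str) -> &str {
    // "v0.1.4-2-ge930185" -> "0.1.4"; "v0.1.4-dirty" -> "0.1.4"
    let v = describe.trim_start_matches('v');
    v.split('-').next().unwrap_or(v)
}

fn commits_ahead(describe: &str) -> u32 {
    // The middle component of "v0.1.4-2-ge930185" counts commits
    // since the tag; a plain tag means zero commits ahead.
    describe
        .split('-')
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(0)
}

fn main() {
    assert_eq!(base_version("v0.1.4"), "0.1.4");
    assert_eq!(base_version("v0.1.4-2-ge930185"), "0.1.4");
    assert_eq!(base_version("v0.1.4-dirty"), "0.1.4");
    assert_eq!(commits_ahead("v0.1.4-2-ge930185"), 2);
    assert_eq!(commits_ahead("v0.1.4"), 0);
    println!("ok");
}
```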
# Binary version (shows git-derived version)
./target/release/soar --version
# Git describe (shows current version from git)
git describe --tags --always --dirty

The old ./scripts/release script has been archived to ./scripts/archive/release-old. It is no longer needed because:
- ❌ Required creating release branch → PR → auto-merge → tag push
- ❌ Manually edited version files and committed changes
- ❌ Complex multi-step process with potential for errors
The new process eliminates all of this complexity.