Project: Rackula — Rack Layout Designer for Homelabbers
Version: 0.7.6
We follow Cargo semver conventions.

Pre-1.0 semantics (current):

- `0.MINOR.patch` — minor version acts like major (breaking changes allowed)
- `0.minor.PATCH` — bug fixes and small improvements only
- Pre-1.0 means "API unstable, in active development"
When to bump versions:

| Change Type | Version Bump | Example |
|---|---|---|
| Feature milestone | 0.X.0 | New major capability (e.g., multi-rack, new export format) |
| Bug fixes / polish | 0.x.Y | Only when releasing to users, not every commit |
| Breaking changes | 0.X.0 | Format changes, removed features |
Workflow:

- Don't tag every commit — accumulate changes on `main`
- Tag releases when there's a coherent set of changes worth announcing
- Use pre-release tags for development checkpoints: `v0.5.0-alpha.1`, `v0.5.0-beta.1`
- Batch related fixes into single patch releases
Release Process:

Use the /release skill to create releases with proper changelog entries:

```shell
/release patch   # Bug fixes: 0.5.8 → 0.5.9
/release minor   # Features: 0.5.9 → 0.6.0
/release major   # Breaking: 0.6.0 → 1.0.0
/release 1.0.0   # Explicit version
```

The /release skill will:
- Gather changes since last release (commits, PRs, issues)
- Draft a changelog entry in Keep a Changelog format
- Preview and confirm with you
- Update CHANGELOG.md, bump version, tag, and push
Important: CHANGELOG.md is the source of truth. GitHub releases are auto-generated from changelog entries. The release workflow will fail if no changelog entry exists.
Tag format: Always use v prefix (e.g., v0.5.8, not 0.5.8)
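For reference, a changelog entry in Keep a Changelog format has this shape (version and content here are hypothetical, for illustration only):

```markdown
## [0.7.6] - 2025-01-15

### Added
- Multi-rack layout support

### Fixed
- Device overlap check off-by-one at rack boundaries
```

Each release gets one such section in CHANGELOG.md; the release workflow reads it to generate the GitHub release notes.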
Current milestones:

- `0.5.x` — Unified type system, NetBox-compatible data model
- `1.0.0` — Production-ready, stable API
Documentation is organized by purpose:
```
docs/
├── ARCHITECTURE.md → High-level overview and entry points
├── deployment/ → Deployment-specific docs (auth, hosting)
├── guides/
│ ├── TESTING.md → Testing patterns and commands
│ └── ACCESSIBILITY.md → A11y compliance checklist
├── reference/
│ ├── SPEC.md → Technical overview and design principles
│ ├── BRAND.md → Design system quick reference
│ └── GITHUB-WORKFLOW.md → GitHub Issues workflow
├── planning/
│ └── ROADMAP.md → Version planning
├── plans/ → Implementation plans (YYYY-MM-DD-kebab-case.md)
├── research/ → Research spikes by issue ({ISSUE}-{type}.md)
├── spikes/ → Active spike investigations
└── superpowers/
  └── specs/ → Brainstorming design specs (created on first use)
```
Start here: docs/ARCHITECTURE.md for codebase overview.
Reference: docs/reference/SPEC.md for technical overview and design principles.
Project overrides for Superpowers v5 document locations:
| Document Type | Location | Naming Convention |
|---|---|---|
| Specs (brainstorming) | docs/superpowers/specs/ | YYYY-MM-DD-<topic>-design.md |
| Plans (execution) | docs/plans/ | YYYY-MM-DD-<feature-name>.md |
| Research spikes | docs/research/ | {ISSUE}-{type}.md |
Plans use docs/plans/ (project override — v5 defaults to docs/superpowers/plans/).
GitHub Issues is the source of truth for task tracking.
Querying work:

```shell
# Find next task
gh issue list --label ready --milestone v0.6.0 --state open

# Get issue details
gh issue view <number>
```

After completing an issue:

```shell
gh issue close <number> --comment "Implemented in <commit-hash>"
```

Issue structure provides:
- Acceptance Criteria → Requirements checklist
- Technical Notes → Implementation guidance
- Test Requirements → TDD test cases
See docs/reference/GITHUB-WORKFLOW.md for full workflow documentation.
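A hypothetical issue body following that structure (section names from the list above; content invented for illustration):

```markdown
## Acceptance Criteria
- [ ] User can drag a device from the library into the rack
- [ ] Placement is rejected when target units are occupied

## Technical Notes
Reuse the existing layout store; see collision handling there.

## Test Requirements
- Placing a device on occupied units is a no-op
- Undo after placement restores the previous layout
```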
CodeRabbit provides AI code review on every PR. Claude Code must wait for CodeRabbit approval before merging.
- Create PR with `gh pr create`
- Wait for CodeRabbit review (7-30 min) — check with `gh pr checks <number>`
- If CodeRabbit requests changes:
  - Read the CodeRabbit comments
  - Address each issue in follow-up commits
  - Push changes and wait for re-review
- Only merge after CodeRabbit approves
Run local review before pushing to catch issues early:

```shell
# Review uncommitted changes (token-efficient output for AI)
coderabbit --prompt-only --type uncommitted

# Review committed changes on current branch
coderabbit --prompt-only --type committed
```

Always use `--prompt-only` — it provides concise, token-efficient output optimized for Claude Code.
```shell
# After creating PR, wait for CodeRabbit
gh pr checks <number> --watch

# View CodeRabbit's review comments
gh pr view <number> --comments
```

Important: Never use `gh pr merge` until CodeRabbit has approved the PR.
Greenfield approach: Do not use migration or legacy support concepts in this project. Implement features as if they are the first and only implementation.
When given an overnight execution prompt:
Execution model: Plan execution uses subagent-driven development. Stopping conditions below apply to the orchestrating session, not individual subagent turns.
- You have explicit permission to work without pausing between prompts
- Do NOT ask for review or confirmation mid-session
- Do NOT pause to summarise progress until complete
- Continue until: all prompts done, test failure after 2 attempts, or genuine ambiguity requiring human decision
- I will review asynchronously via git commits and session-report.md
Stopping conditions (ONLY these):
- All prompts in current `prompt_plan.md` marked complete
- Test failure you cannot resolve after 2 attempts
- Ambiguity that genuinely requires human input (document in `blockers.md`)
If none of those conditions are met, proceed immediately to the next prompt.
- Svelte 5 with runes (`$state`, `$derived`, `$effect`)
- TypeScript strict mode
- Vitest + @testing-library/svelte + Playwright
- CSS custom properties (design tokens in `src/lib/styles/tokens.css`)
- SVG rendering
```svelte
<!-- ✅ CORRECT -->
<script lang="ts">
  let count = $state(0);
  let doubled = $derived(count * 2);
</script>
```

```svelte
<!-- ❌ WRONG: Svelte 4 stores -->
<script lang="ts">
  import { writable } from 'svelte/store';
</script>
```

When implementing bits-ui components:
- Fetch docs: `WebFetch https://bits-ui.com/docs/components/{name}/llms.txt`
- Available: dialog, tabs, accordion, tooltip, popover, select, combobox
- Validate with Svelte MCP: `svelte-autofixer` tool
- Follow existing wrapper patterns in `src/lib/components/ui/`
First, decide if tests are needed. Ask: "What behavior can I test that TypeScript doesn't already verify?"
Skip tests entirely for:
- Visual-only components (icons, decorative SVGs, layout wrappers)
- Thin wrappers with no logic of their own
- Components where the only possible test is "renders without throwing"
If an issue's Acceptance Criteria requests tests for something in this list, the testing policy overrides the AC. Don't write low-value tests just because an issue asked for them.
If tests ARE needed, follow TDD:
- Write tests FIRST
- Run tests (should fail)
- Implement to pass
- Commit
What to test (high value):
These are the ONLY categories worth testing. If your component doesn't fit one of these, it probably doesn't need tests:
- Complex logic (collision detection, coordinate math, state machines)
- User-facing behavior (can user place device? does undo work?)
- Error paths and edge cases
- Integration between components
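Collision detection is a good example of the first category; a minimal sketch of the kind of logic worth testing (hypothetical helper and types, not the project's actual implementation):

```typescript
// Hypothetical 1-D interval overlap check for rack units.
// A device occupies units [startU, startU + heightU).
type Placement = { startU: number; heightU: number };

function collides(a: Placement, b: Placement): boolean {
  const aEnd = a.startU + a.heightU; // exclusive upper bound
  const bEnd = b.startU + b.heightU;
  // Two half-open intervals overlap iff each starts before the other ends
  return a.startU < bEnd && b.startU < aEnd;
}

console.log(collides({ startU: 10, heightU: 2 }, { startU: 11, heightU: 1 })); // true
console.log(collides({ startU: 10, heightU: 2 }, { startU: 12, heightU: 1 })); // false
```

Edge cases like adjacent (touching but non-overlapping) devices are exactly the off-by-one territory where tests pay off.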
What NOT to test (low value):
- Static data (brand packs, device libraries) — schema validates this
- Hardcoded counts (`expect(devices).toHaveLength(68)`) — breaks on intentional changes
- Properties already validated by Zod schemas
- Simple getters, trivial functions, pass-through code
The Zero-Change Rule: Adding a device to a brand pack should require ZERO test file changes. If tests break, they're testing data, not behavior.
Trust the Schema: If `DeviceTypeSchema.parse()` passes, don't re-test individual fields. One schema validation test covers all devices.
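The principle can be illustrated with a self-contained sketch (a plain-TypeScript validator standing in for the real Zod `DeviceTypeSchema`; names here are hypothetical):

```typescript
// Hypothetical stand-ins for the project's schema and brand-pack data
type DeviceType = { slug: string; u_height: number };

function parseDeviceType(d: unknown): DeviceType {
  const obj = d as Record<string, unknown>;
  if (typeof obj?.slug !== "string" || obj.slug.length === 0) {
    throw new Error("invalid slug");
  }
  if (typeof obj?.u_height !== "number" || obj.u_height <= 0) {
    throw new Error("invalid u_height");
  }
  return { slug: obj.slug, u_height: obj.u_height };
}

const brandPack: unknown[] = [
  { slug: "generic-1u-server", u_height: 1 },
  { slug: "generic-2u-server", u_height: 2 },
];

// One loop validates every device; adding a device requires zero test edits
const allValid = brandPack.every((d) => {
  try { parseDeviceType(d); return true; } catch { return false; }
});
console.log(allValid); // true
```

Adding a third device to `brandPack` changes nothing in the test file, which is exactly the Zero-Change Rule.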
See docs/guides/TESTING.md for comprehensive testing guidelines.
These rules apply when you've decided tests ARE needed (see TDD Protocol above).
BEFORE writing any test, ask: "Would this test break if I made a legitimate code change?" If yes, DON'T WRITE IT.
❌ Assert exact array lengths on data arrays

```typescript
// BAD: Breaks when you add a device to brand pack
expect(dellDevices).toHaveLength(68);

// GOOD: Test existence, not count
expect(dellDevices.length).toBeGreaterThan(0);
```

Exception: Behavioral invariants (deduplication, pagination) may use exact lengths with `eslint-disable-next-line` and justification:
```typescript
// GOOD: Behavioral invariant with justification
// eslint-disable-next-line no-restricted-syntax -- deduplication should leave exactly 2 unique devices
expect(deduplicateDevices([device1, device1, device2])).toHaveLength(2);
```

❌ Assert hardcoded color values
```typescript
// BAD: Breaks on design token changes
expect(element).toHaveStyle("color: #4A7A8A");
expect(color).toBe("#FFFFFF");
```

❌ Check if a function exists
```typescript
// BAD: Zero value, TypeScript already does this
expect(typeof placeholderDeviceType).toBe("function");
```

❌ Assert CSS class names
```typescript
// BAD: Breaks on refactoring, tests implementation details
expect(button).toHaveClass("primary");
```

❌ Test that a component renders
```typescript
// BAD: If it compiles in TypeScript, it renders
expect(container.querySelector(".rack")).toBeInTheDocument();
```

❌ Test component structure/DOM queries
```typescript
// BAD: Fragile, tests implementation not behavior
const header = container.querySelector(".panel-header");
expect(header).toHaveTextContent("Settings");
```

❌ Duplicate schema validation
```typescript
// BAD: DeviceTypeSchema already validates this
expect(device.slug).toBeDefined();
expect(typeof device.u_height).toBe("number");
```

✅ Test user-visible behavior
```typescript
// GOOD: Tests what user experiences
it("user can place a device in rack", () => {
  store.placeDevice("server-slug", 10);
  expect(store.rack.devices).toContain(
    expect.objectContaining({ slug: "server-slug" }),
  );
});
```

✅ Test core algorithms and edge cases
```typescript
// GOOD: Complex logic with many edge cases
it("detects collision when devices overlap", () => {
  // ... collision detection test
});
```

✅ Use factories from `src/tests/factories.ts`
```typescript
// GOOD: Shared, maintainable test data
import { createTestDeviceType, createTestRack } from "./factories";

const device = createTestDeviceType({ u_height: 2 });
```

✅ Follow patterns in KEEP tests
```typescript
// GOOD: Store tests, core algorithms, E2E tests
// Check src/tests/*-store.test.ts for examples
```

ESLint hard-blocks:

- `querySelector()` / DOM node access in tests
- `toHaveClass()` assertions
- `toHaveLength(literal)` exact length assertions
- Hardcoded color assertions
These rules are enforced by ESLint on every commit and will fail the build if violated.
Why these rules exist: The project had 136 unit test files (46k LOC) causing OOM crashes and high token usage. We deleted 78 low-value files (57% reduction) to fix this. ESLint rules prevent re-accumulation by blocking the specific anti-patterns that caused bloat.
```shell
npm run dev              # Dev server
npm run test             # Unit tests (watch)
npm run test:run         # Unit tests (CI)
npm run test:e2e         # Playwright E2E
npm run build            # Production build
npm run lint             # ESLint check
npm run refresh-lockfile # Regenerate package-lock.json from scratch
```

Lockfile issues: If CI fails with "package.json and package-lock.json are out of sync", run `npm run refresh-lockfile` to regenerate the lockfile from a clean state.
Uses the `debug` npm package with namespace filtering.

Enable in browser console:

```javascript
localStorage.debug = "rackula:*";                   // All logs
localStorage.debug = "rackula:layout:*";            // Layout module only
localStorage.debug = "rackula:*,-rackula:canvas:*"; // All except canvas
```

Namespaces:
| Namespace | Purpose |
|---|---|
| rackula:layout:state | Layout store state |
| rackula:layout:device | Device placement/move |
| rackula:canvas:transform | Pan/zoom calculations |
| rackula:canvas:panzoom | Panzoom lifecycle |
| rackula:cable:validation | Cable validation |
| rackula:app:mobile | Mobile interactions |
Usage:

```typescript
import { layoutDebug } from "$lib/utils/debug";

layoutDebug.device("placed device %s at U%d", slug, position);
```

| Key | Action |
|---|---|
| Ctrl+Z | Undo |
| Ctrl+Shift+Z | Redo |
| Ctrl+Y | Redo (alternative) |
| Ctrl+S | Save layout |
| Ctrl+O | Load layout |
| Ctrl+E | Export |
| Ctrl+H | Share |
| Ctrl+D | Duplicate selected device/rack |
| I | Toggle display mode |
| F | Fit all |
| Delete | Delete selection |
| ? | Show help |
| Escape | Clear selection / close |
| ↑↓ | Move device in rack |
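The undo/redo shortcuts imply a history stack behind the layout store; a minimal two-stack sketch of the idea (hypothetical, not the store's actual implementation):

```typescript
// Hypothetical two-stack undo/redo history
class History<T> {
  private past: T[] = [];
  private future: T[] = [];
  constructor(private present: T) {}

  push(next: T): void {
    this.past.push(this.present);
    this.present = next;
    this.future = []; // a new action invalidates the redo stack
  }

  undo(): T {
    const prev = this.past.pop();
    if (prev !== undefined) {
      this.future.push(this.present);
      this.present = prev;
    }
    return this.present;
  }

  redo(): T {
    const next = this.future.pop();
    if (next !== undefined) {
      this.past.push(this.present);
      this.present = next;
    }
    return this.present;
  }
}

const h = new History(0);
h.push(1);
h.push(2);
console.log(h.undo()); // 1
console.log(h.redo()); // 2
```

This is also the kind of "does undo work?" user-facing behavior the testing policy calls high-value.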
Before starting any task, check if a skill applies:
| Task Type | Skill | Why |
|---|---|---|
| Bug/issue investigation | /superpowers:systematic-debugging | Prevents guessing, forces evidence |
| New feature or component | /superpowers:brainstorming | Explores requirements before code |
| Multi-step implementation | /superpowers:writing-plans | Plans auto-route to subagent execution |
| Working on GitHub issue | /dev-issue <number> | Full workflow with worktree isolation |
| Research question | /research-spike <number> | Structured investigation |
| Finishing a branch | /superpowers:finishing-a-development-branch | Merge/PR decision flow |
| Worktree cleanup needed | /worktree-cleanup | List and remove stale worktrees |
| Debugging with context | /debug-with-memory | Memory-assisted systematic debugging |
| User-facing documentation | /technical-writing | Enforces verification, style, structure |
Default rule: If uncertain, invoke /superpowers:brainstorming first.
| Location | URL |
|---|---|
| Production | https://count.racku.la/ |
| Dev/Preview | https://d.racku.la/ |
| Primary | https://github.com/RackulaLives/Rackula |
| Issues | https://github.com/RackulaLives/Rackula/issues |
Two environments with different deployment triggers:
| Environment | URL | Trigger | Infrastructure |
|---|---|---|---|
| Dev | d.racku.la | Push to `main` | GitHub Pages |
| Prod | count.racku.la | Git tag v* | VPS (Docker) |
Dev: automatically deploys on every push to `main` (after lint/tests pass):

```shell
git push origin main  # Triggers: lint → test → build → deploy to GitHub Pages
```

Prod: deploys when a version tag is pushed:

```shell
npm version patch            # Creates v0.5.9 tag
git push && git push --tags  # Triggers: Docker build → push to ghcr.io → VPS pulls and runs
```

Typical flow:

- Develop locally (`npm run dev`)
- Push to `main` → auto-deploys to d.racku.la
- Test on dev environment
- Tag release → auto-deploys to count.racku.la
Analytics: Umami (self-hosted at t.racku.la) - privacy-focused, no cookies.
Separate website IDs for dev and prod environments. Configure via VITE_UMAMI_* env vars.
Analytics utility at src/lib/utils/analytics.ts.
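As an illustration only, such a utility often wraps the script-injected Umami global behind a guard (the shape below is a hypothetical sketch and assumes Umami's `umami.track(event, data)` API; the real code in src/lib/utils/analytics.ts may differ):

```typescript
// Hypothetical analytics wrapper shape, not the project's actual code
type TrackFn = (event: string, data?: Record<string, unknown>) => void;

// Guard so the app keeps working when the Umami script is absent
// (local dev, ad blockers, missing VITE_UMAMI_* configuration)
function createTracker(umami?: { track: TrackFn }): TrackFn {
  return (event, data) => {
    if (umami) umami.track(event, data);
  };
}

// Usage with a stub standing in for the injected Umami script
const events: string[] = [];
const track = createTracker({ track: (e) => events.push(e) });
track("device-placed", { u_height: 2 });
console.log(events[0]); // device-placed
```

Wrapping the global this way keeps tracking calls no-ops when analytics is disabled, which matches the privacy-focused, cookie-free setup.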