This document turns the project roadmaps into an executable delivery plan. It is meant to be used with:
- `ROADMAP_TO_ARCHITECTURE.md`
- `IMPLEMENTATION_STRATEGY_PHASE2.md`
- `cms/theories/PROJECT_STATE_AND_ROADMAP.md`
- `NEXT_STEPS_OVERVIEW.md`
| Source document | What it contributes | Implemented as track |
|---|---|---|
| `ROADMAP_TO_ARCHITECTURE.md` | Macro architecture phases and target platform shape | A, B, E |
| `IMPLEMENTATION_STRATEGY_PHASE2.md` | Mid-phase delivery and integration sequencing | B, C, D |
| `cms/theories/PROJECT_STATE_AND_ROADMAP.md` | Current-state inventory + quarterly objectives | A, C, E |
| `NEXT_STEPS_OVERVIEW.md` | Tactical next actions and commercialization flow | C, D, E |
This crosswalk exists so every roadmap statement is traceable to an engineering workstream.
- Python backend (FastAPI + orchestration + simulator/HAL)
- Database (PostgreSQL / TimescaleDB)
- Optional cache/bus (Redis)
- Frontend clients (React / Vue / dashboard HTML)
Use:

```bash
./tools/bootstrap_and_audit.sh
```

The script creates `.venv`, installs Python dependencies, installs Node dependencies in all frontend workspaces, and prints npm/pip diagnostics.
Goal: production-safe telemetry and command path.
- Implement circuit breaker retries + timeout budgets in HAL adapters.
- Add a machine profile registry (`Fanuc`, `Siemens`, `Mock`) to avoid hardcoded assumptions.
- Acceptance:
- Graceful degradation to simulator when hardware unavailable.
- 0 unhandled exceptions on adapter disconnect/reconnect chaos test.
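The circuit-breaker requirement above can be sketched as follows. This is a minimal illustration, not the project's actual HAL code: the `max_failures`/`reset_after` parameters and the `adapter_read`/`fallback_read` callables are hypothetical names chosen for the example.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    adapter errors, then serves the fallback (e.g. the simulator) until
    `reset_after` seconds have elapsed, at which point it half-opens."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    @property
    def is_open(self):
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at >= self.reset_after:
            # Half-open: allow the next call through to probe the adapter.
            self.opened_at = None
            self.failures = 0
            return False
        return True

    def call(self, adapter_read, fallback_read):
        """Read from hardware; degrade gracefully to the fallback."""
        if self.is_open:
            return fallback_read()
        try:
            result = adapter_read()
            self.failures = 0  # Any success resets the failure streak.
            return result
        except ConnectionError:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback_read()
```

Wrapping each adapter read as `breaker.call(fanuc.read_telemetry, simulator.read_telemetry)` gives the "graceful degradation to simulator" behavior in the acceptance criteria; timeout budgets would be layered on top of this.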
- Move high-frequency shared state to Redis streams or Arrow IPC.
- Add p50/p95/p99 telemetry pipeline timing metrics.
- Acceptance:
- p95 ingestion-to-dashboard < 100ms in simulator load tests.
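The p50/p95/p99 metrics can be computed from raw latency samples with the standard library; a sketch of the helper (the function name and the 100 ms gate are illustrative, not the project's API):

```python
import statistics

def latency_percentiles(samples_ms):
    """Compute p50/p95/p99 from ingestion-to-dashboard latency samples
    (milliseconds). Returns an empty dict when there is nothing to report."""
    if len(samples_ms) < 2:
        return {}
    # 99 cut points; index i-1 is the i-th percentile.
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

def meets_latency_gate(samples_ms, p95_budget_ms=100.0):
    """Acceptance gate from above: p95 must stay under the budget."""
    pct = latency_percentiles(samples_ms)
    return bool(pct) and pct["p95"] < p95_budget_ms
```

In a load test, the simulator would feed thousands of samples into `latency_percentiles` and the CI gate would assert `meets_latency_gate(samples)`.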
Goal: AI suggestions never bypass deterministic guardrails.
- Encode hard constraints for load/vibration/thermal/curvature bounds.
- Expose policy decisions + reasoning traces over API.
- Acceptance:
- Any violating proposal is blocked with explicit reasons.
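A deterministic gate of this shape can be sketched in a few lines. The bound names (`spindle_load_pct`, `vibration_mm_s`) and limits below are invented for illustration; the real constraint set would come from the machine profile:

```python
def audit_proposal(proposal, bounds):
    """Deterministic guardrail check: returns (passed, reasons).
    A proposal is blocked if any monitored quantity is missing or
    falls outside its hard [lo, hi] bound."""
    reasons = []
    for key, (lo, hi) in bounds.items():
        value = proposal.get(key)
        if value is None:
            reasons.append(f"{key}: missing value")
        elif not (lo <= value <= hi):
            reasons.append(f"{key}: {value} outside [{lo}, {hi}]")
    return (not reasons, reasons)
```

Because the check is pure and rule-based, the same `reasons` list can be returned over the API as the explicit block rationale required by the acceptance criterion.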
- Creator generates strategy candidates.
- Accountant scores economics/time/risk.
- Auditor performs final deterministic gate.
- Acceptance:
- Decision packet contains proposal, economics score, and pass/fail rationale.
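The Creator → Accountant → Auditor flow and its decision packet can be sketched as below. The function names and packet fields are assumptions for illustration; only the required packet contents (proposal, economics score, pass/fail rationale) come from the acceptance criterion:

```python
def run_council(candidates, score_fn, audit_fn):
    """Three-stage flow: the Creator supplies `candidates`, the
    Accountant scores each via `score_fn`, and the Auditor applies the
    final deterministic gate via `audit_fn` -> (passed, reasons).
    Returns one decision packet per candidate."""
    packets = []
    for proposal in candidates:
        passed, reasons = audit_fn(proposal)
        packets.append({
            "proposal": proposal,
            "economics_score": score_fn(proposal),
            "verdict": "pass" if passed else "fail",
            "rationale": reasons or ["all hard constraints satisfied"],
        })
    return packets
```

Note the Auditor runs regardless of the economics score: a lucrative but unsafe proposal still fails, which is the point of keeping the gate deterministic.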
Goal: roadmap-aligned multi-machine operations.
- Keep the machine-specific websocket route as primary (`/ws/telemetry/{machine_id}`) with fallback.
- Add a machine selector and persisted last-machine state.
- Acceptance:
- Operator can switch among 3+ machines without page refresh.
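Client-side route resolution for the primary/fallback behavior might look like this. The base URL and the generic `/ws/telemetry` fallback path are assumptions; only the machine-specific route shape comes from the track description:

```python
def telemetry_ws_url(machine_id=None, base="wss://example.local"):
    """Resolve the telemetry websocket endpoint: the machine-specific
    route is primary; fall back to a generic stream when no machine is
    selected (e.g. before the persisted last-machine state is loaded)."""
    if machine_id:
        return f"{base}/ws/telemetry/{machine_id}"
    return f"{base}/ws/telemetry"  # assumed fallback route
```

Switching machines then only means reconnecting to a new URL, which is what lets the operator move among 3+ machines without a page refresh.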
- Add hub-level card metrics (status, load trend, alert count) per machine.
- Acceptance:
- Hub view updates in near real-time and highlights critical nodes.
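One way to shape the per-machine hub card is sketched below; the field names, the ±5% trend threshold, and the severity labels are illustrative assumptions:

```python
def hub_card(machine_id, status, load_samples, alerts):
    """Roll per-machine telemetry up into a hub card: current status,
    load trend over the recent window, and open alert count."""
    trend = "flat"
    if len(load_samples) >= 2:
        delta = load_samples[-1] - load_samples[0]
        if delta > 5:        # assumed threshold, in load percentage points
            trend = "rising"
        elif delta < -5:
            trend = "falling"
    return {
        "machine_id": machine_id,
        "status": status,
        "load_trend": trend,
        "alert_count": len(alerts),
        # "critical" drives the hub-view highlighting in the acceptance criterion.
        "critical": status == "fault"
                    or any(a.get("severity") == "critical" for a in alerts),
    }
```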
Goal: improve model quality safely offline.
- Generate normal + fault scenarios (chatter, thermal drift, stall).
- Persist traces in consistent schema.
- Build SFT examples and pairwise preference data.
- Include auditor verdicts and economics outcomes.
- Model can propose only; deterministic systems decide execution.
- Acceptance:
- Safety violation rate and rejection rate dashboards available.
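A consistent dataset schema for the pairwise preference data might look like the sketch below. The record fields and function names are assumptions; the schema only needs to carry what the bullets above require (scenario, both proposals, auditor verdicts, economics outcomes):

```python
import json

def make_preference_pair(scenario, chosen, rejected, auditor_verdicts, economics):
    """One pairwise preference example: `chosen` beat `rejected` on the
    auditor verdict and/or the economics outcome for the same scenario."""
    return {
        "scenario": scenario,          # e.g. "chatter", "thermal_drift", "stall"
        "chosen": chosen,
        "rejected": rejected,
        "auditor": auditor_verdicts,   # e.g. {"chosen": "pass", "rejected": "fail"}
        "economics": economics,
    }

def export_jsonl(records, path):
    """Persist records one-per-line (JSONL), a common format for
    SFT/preference training pipelines."""
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
```

Keeping auditor verdicts inside each record is what makes the safety-violation-rate dashboard a simple aggregation over the exported dataset.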
Goal: roadmap Q2/Q3 scalability.
- Add site and machine tenancy boundaries.
- Implement RBAC scoped by site/role.
- Broadcast relevant learnings across machines (with policy controls).
- Acceptance:
- Cross-machine strategy propagation with audit trail.
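A site-scoped RBAC check of the kind described above can be sketched in a few lines. The role names and action grants are placeholders; the real mapping would live in configuration:

```python
# Hypothetical role -> granted-actions table; the real grants would be config-driven.
ROLE_GRANTS = {
    "operator": {"view", "acknowledge_alert"},
    "site_admin": {"view", "acknowledge_alert", "apply_strategy"},
}

def is_allowed(user, action, machine, role_grants=ROLE_GRANTS):
    """RBAC check scoped by site and role: a user may act on a machine
    only within their own site, and only if their role grants the action."""
    if machine["site"] != user["site"]:
        return False  # tenancy boundary: never cross sites implicitly
    return action in role_grants.get(user["role"], set())
```

Cross-machine strategy propagation would then be modeled as an explicit, auditable action (e.g. `apply_strategy`) rather than an implicit broadcast, which is what makes the audit trail in the acceptance criterion possible.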
- Dependency baseline, CI checks, API health hardening.
- Deliverables: repeatable local bootstrap, passing lints/checks.
- HAL resiliency + telemetry latency instrumentation.
- Deliverables: circuit breaker + latency dashboard.
- Shadow Council decision packet + deterministic policy traceability.
- Deliverables: pass/fail auditor API with reason codes.
- Fleet UX (machine selector, hub rollup metrics, alerts).
- Deliverables: live multi-node dashboard.
- Simulator scenario generation + dataset pipeline.
- Deliverables: exportable SFT/preference datasets.
- Shadow deployment and go/no-go gates for pilot.
- Deliverables: pilot checklist + rollback plan.
A feature is done only if all conditions hold:
- Unit/integration checks pass.
- Feature has monitoring signals (health, latency, error rate).
- Auditor safety constraints are applied where relevant.
- Docs updated (README + architecture + API notes).
- Simulator regression scenarios pass.
- Run `./tools/bootstrap_and_audit.sh`.
- Bring up the backend and verify `/api/health` and the websocket routes.
- Implement Fleet selector persistence in the dashboard UI.
- Add the Auditor reason-code schema and include it in the websocket payload.
- Create a simulator dataset export command for SFT + preference data.
This blueprint follows documentation conventions commonly used in mature industrial software projects:
- Traceability: each workstream references roadmap sources.
- Acceptance criteria: every feature track has measurable outcomes.
- Operational readiness: runability and fallback behavior are first-class requirements.
- Safety by design: deterministic controls are explicit and mandatory.
- Lifecycle clarity: includes implementation sequence, DoD, and immediate actions.
For low-level operational and contract details, see docs/TECHNICAL_REFERENCE.md.
Contributor workflow docs:
- `docs/DEVELOPER_EDITING_GUIDE.md`
- `docs/METHODOLOGY_OF_ANALOGY.md`
- `docs/CODER_LEXICON.md`