
Commit 6a17db2 (1 parent: 5618899)

Add monetization articles plus completion proof and next-week plan

14 files changed: +1082 -33 lines
Lines changed: 179 additions & 0 deletions
@@ -0,0 +1,179 @@
# FANUC RISE Feature Implementation Blueprint

This document turns the project roadmaps into an executable delivery plan. It is meant to be used with:

- `ROADMAP_TO_ARCHITECTURE.md`
- `IMPLEMENTATION_STRATEGY_PHASE2.md`
- `cms/theories/PROJECT_STATE_AND_ROADMAP.md`
- `NEXT_STEPS_OVERVIEW.md`

## 0) Roadmap Source Crosswalk (MD -> Implementation)

| Source document | What it contributes | Implemented as track |
|---|---|---|
| `ROADMAP_TO_ARCHITECTURE.md` | Macro architecture phases and target platform shape | A, B, E |
| `IMPLEMENTATION_STRATEGY_PHASE2.md` | Mid-phase delivery and integration sequencing | B, C, D |
| `cms/theories/PROJECT_STATE_AND_ROADMAP.md` | Current-state inventory + quarterly objectives | A, C, E |
| `NEXT_STEPS_OVERVIEW.md` | Tactical next actions and commercialization flow | C, D, E |

This crosswalk exists so every roadmap statement is traceable to an engineering workstream.

## 1) Dependency Baseline (must pass first)

### Required runtime layers
1. **Python backend** (FastAPI + orchestration + simulator/HAL)
2. **Database** (PostgreSQL / TimescaleDB)
3. **Optional cache/bus** (Redis)
4. **Frontend clients** (React / Vue / dashboard HTML)

### One-command bootstrap
Use:
```bash
./tools/bootstrap_and_audit.sh
```

The script creates `.venv`, installs Python deps, installs Node deps in all frontend workspaces, and prints npm/pip diagnostics.

---

## 2) Feature Delivery Tracks (from MD roadmaps)

## Track A — Hardware + HAL Reliability
**Goal**: a production-safe telemetry and command path.

### A1. FOCAS bridge hardening
- Implement circuit-breaker retries + timeout budgets in HAL adapters.
- Add a machine profile registry (`Fanuc`, `Siemens`, `Mock`) to avoid hardcoded assumptions.
- Acceptance:
  - Graceful degradation to the simulator when hardware is unavailable.
  - 0 unhandled exceptions in the adapter disconnect/reconnect chaos test.
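The circuit-breaker behavior above can be sketched as a small wrapper around an adapter call. This is an illustrative sketch, not the project's HAL API: the threshold and cooldown values are placeholders, and `primary`/`fallback` stand in for a real FOCAS adapter and the simulator.

```python
import time

class CircuitBreaker:
    """Wraps a HAL adapter call; after repeated failures, routes reads
    to a fallback (e.g. the simulator) until a cooldown elapses."""

    def __init__(self, failure_threshold=3, cooldown_s=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None => circuit closed (hardware path live)

    @property
    def open(self):
        if self.opened_at is None:
            return False
        if self.clock() - self.opened_at >= self.cooldown_s:
            # Half-open: allow the next hardware attempt.
            self.opened_at = None
            self.failures = 0
            return False
        return True

    def call(self, primary, fallback):
        """Try the hardware path; degrade gracefully to the fallback."""
        if self.open:
            return fallback()
        try:
            result = primary()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()
            return fallback()
```

A timeout budget per call would layer on top of this (e.g. via the adapter's own socket timeouts); the breaker only decides when to stop retrying hardware.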
### A2. Latency pipeline
- Move high-frequency shared state to Redis streams or Arrow IPC.
- Add p50/p95/p99 timing metrics to the telemetry pipeline.
- Acceptance:
  - p95 ingestion-to-dashboard latency < 100 ms in simulator load tests.
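For the acceptance check, a minimal nearest-rank percentile helper is enough to report p50/p95/p99 from recorded latencies; in production these numbers would more likely come from the metrics backend, so treat this as a load-test sketch:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile; samples in ms, pct in (0, 100]."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def latency_summary(samples_ms):
    """Summarize ingestion-to-dashboard latencies as p50/p95/p99."""
    return {p: percentile(samples_ms, p) for p in (50, 95, 99)}
```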
## Track B — Shadow Council Safety Governance
**Goal**: AI suggestions never bypass deterministic guardrails.

### B1. Auditor policy engine
- Encode hard constraints for load/vibration/thermal/curvature bounds.
- Expose policy decisions + reasoning traces over the API.
- Acceptance:
  - Any violating proposal is blocked with explicit reasons.
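As a sketch of this deterministic gate, a hard-constraint check can return machine-readable reason codes for every violated bound. The field names and limit values below are invented placeholders for illustration, not real machine limits:

```python
# Illustrative hard bounds; real values come from the machine profile registry.
LIMITS = {
    "load_pct": 85.0,
    "vibration_mm_s": 4.5,
    "temp_c": 70.0,
    "curvature_1_per_mm": 0.2,
}

def audit(proposal):
    """Deterministically check a proposal dict against hard bounds.

    Returns (passed, reasons): reasons is a list of machine-readable
    reason codes, one per missing or violated bound."""
    reasons = []
    for key, limit in LIMITS.items():
        value = proposal.get(key)
        if value is None:
            reasons.append(f"MISSING_{key.upper()}")
        elif value > limit:
            reasons.append(f"{key.upper()}_EXCEEDS_{limit}")
    return (not reasons, reasons)
```

Returning structured codes (rather than prose) is what lets the API expose a reasoning trace and lets dashboards aggregate rejection causes.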
### B2. Creator + Accountant integration
- Creator generates strategy candidates.
- Accountant scores economics/time/risk.
- Auditor performs the final deterministic gate.
- Acceptance:
  - The decision packet contains the proposal, the economics score, and a pass/fail rationale.
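The decision packet named in the acceptance criterion could be shaped roughly like this; the field names are an assumption layered on the three-agent flow, not a committed schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionPacket:
    """One record per Shadow Council decision: what the Creator proposed,
    how the Accountant scored it, and why the Auditor passed or blocked it."""
    proposal: dict                  # Creator output (strategy candidate)
    economics_score: float          # Accountant output (higher is better)
    passed: bool                    # Auditor verdict
    reasons: list = field(default_factory=list)  # reason codes when blocked

def build_packet(proposal, score, auditor_result):
    """Assemble the packet from the three agents' outputs."""
    passed, reasons = auditor_result
    return DecisionPacket(proposal=proposal, economics_score=score,
                          passed=passed, reasons=reasons)
```

`asdict(packet)` gives a JSON-ready dict for the API or websocket payload.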
## Track C — Probability Canvas + Fleet UX
**Goal**: roadmap-aligned multi-machine operations.

### C1. Fleet switching
- Keep the machine-specific websocket route (`/ws/telemetry/{machine_id}`) as primary, with a fallback.
- Add a machine selector and persisted last-machine state.
- Acceptance:
  - An operator can switch among 3+ machines without a page refresh.

### C2. Fleet health overview
- Add hub-level card metrics (status, load trend, alert count) per machine.
- Acceptance:
  - The hub view updates in near real time and highlights critical nodes.
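The hub rollup can be sketched as a pure function from per-machine snapshots to card metrics. The snapshot field names (`status`, `load_history`, `alerts`) and the "critical" rule are illustrative assumptions:

```python
def fleet_cards(machines):
    """Roll up per-machine telemetry snapshots into hub-level card metrics.

    `machines` maps machine_id -> {"status", "load_history", "alerts"};
    critical machines sort first so the hub view highlights them."""
    cards = []
    for machine_id, snap in machines.items():
        history = snap.get("load_history", [])
        trend = "flat"
        if len(history) >= 2 and history[-1] > history[0]:
            trend = "rising"
        elif len(history) >= 2 and history[-1] < history[0]:
            trend = "falling"
        alerts = snap.get("alerts", [])
        cards.append({
            "machine_id": machine_id,
            "status": snap.get("status", "unknown"),
            "load_trend": trend,
            "alert_count": len(alerts),
            "critical": snap.get("status") == "fault" or len(alerts) > 3,
        })
    return sorted(cards, key=lambda c: not c["critical"])
```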
## Track D — Simulator-to-LLM Training Loop
**Goal**: improve model quality safely offline.

### D1. Scenario generation service
- Generate normal + fault scenarios (chatter, thermal drift, stall).
- Persist traces in a consistent schema.

### D2. Dataset builder
- Build SFT examples and pairwise preference data.
- Include auditor verdicts and economics outcomes.

### D3. Shadow deployment gate
- The model can only propose; deterministic systems decide execution.
- Acceptance:
  - Safety-violation-rate and rejection-rate dashboards are available.

## Track E — Multi-site Cloud + ERP Integration
**Goal**: roadmap Q2/Q3 scalability.

### E1. Fleet registry + tenancy
- Add site and machine tenancy boundaries.
- Implement RBAC scoped by site/role.

### E2. Event-driven sync
- Broadcast relevant learnings across machines (with policy controls).
- Acceptance:
  - Cross-machine strategy propagation with an audit trail.

---

## 3) Suggested Sprint Plan (12-week template)

### Sprint 1-2
- Dependency baseline, CI checks, API health hardening.
- Deliverables: repeatable local bootstrap, passing lints/checks.

### Sprint 3-4
- HAL resiliency + telemetry latency instrumentation.
- Deliverables: circuit breaker + latency dashboard.

### Sprint 5-6
- Shadow Council decision packet + deterministic policy traceability.
- Deliverables: pass/fail auditor API with reason codes.

### Sprint 7-8
- Fleet UX (machine selector, hub rollup metrics, alerts).
- Deliverables: live multi-node dashboard.

### Sprint 9-10
- Simulator scenario generation + dataset pipeline.
- Deliverables: exportable SFT/preference datasets.

### Sprint 11-12
- Shadow deployment and go/no-go gates for the pilot.
- Deliverables: pilot checklist + rollback plan.

---

## 4) Engineering Definition of Done

A feature is done only if all of the following hold:
1. Unit/integration checks pass.
2. The feature has monitoring signals (health, latency, error rate).
3. Auditor safety constraints are applied where relevant.
4. Docs are updated (README + architecture + API notes).
5. Simulator regression scenarios pass.

---

## 5) Immediate next actions

1. Run `./tools/bootstrap_and_audit.sh`.
2. Bring up the backend and verify `/api/health` and the websocket routes.
3. Implement fleet-selector persistence in the dashboard UI.
4. Add an Auditor reason-code schema and include it in the websocket payload.
5. Create a simulator dataset-export command for SFT + preference data.
## 6) Documentation Standards Used

This blueprint follows documentation conventions commonly used in mature industrial software projects:
- **Traceability**: each workstream references its roadmap sources.
- **Acceptance criteria**: every feature track has measurable outcomes.
- **Operational readiness**: runnability and fallback behavior are first-class requirements.
- **Safety by design**: deterministic controls are explicit and mandatory.
- **Lifecycle clarity**: includes the implementation sequence, DoD, and immediate actions.

For low-level operational and contract details, see `docs/TECHNICAL_REFERENCE.md`.

Contributor workflow docs:
- `docs/DEVELOPER_EDITING_GUIDE.md`
- `docs/METHODOLOGY_OF_ANALOGY.md`
- `docs/CODER_LEXICON.md`
advanced_cnc_copilot/README.md

Lines changed: 106 additions & 0 deletions

@@ -139,6 +139,112 @@ A three-agent system ensuring safe AI integration:

- Interface topology approach for connecting disparate systems
- Nightmare Training for offline learning through simulation
## Documentation Set (Complete)

- `README.md`: project entrypoint, setup, and quickstart.
- `SYSTEM_ARCHITECTURE.md`: architecture and data-flow map.
- `FEATURE_IMPLEMENTATION_BLUEPRINT.md`: roadmap-driven delivery blueprint.
- `docs/TECHNICAL_REFERENCE.md`: production-style technical contracts, NFRs, safety and release criteria.
- `docs/DEVELOPER_EDITING_GUIDE.md`: safe code-editing workflow and PR checklist.
- `docs/METHODOLOGY_OF_ANALOGY.md`: analogy methodology and validation protocol.
- `docs/CODER_LEXICON.md`: canonical project vocabulary for consistent implementation language.
- `docs/COMPONENT_COMPLETION_REPORT.md`: evidence-based status of what is done vs in progress.
- `docs/NEXT_WEEK_DEVELOPMENT_PLAN.md`: concrete next-week implementation commitments.
- `docs/MONETIZATION_ARTICLE_PRODUCTIZED_AI_CNC.md`: monetization strategy for product tiers and outcome-based pricing.
- `docs/MONETIZATION_ARTICLE_SERVICES_AND_ECOSYSTEM.md`: services/ecosystem-led commercialization strategy.

## Delivery Blueprint (Roadmap -> Execution)

- **Feature execution plan**: `FEATURE_IMPLEMENTATION_BLUEPRINT.md` (maps the roadmap docs into tracks, sprints, DoD, and acceptance criteria).
- **Bootstrap + dependency audit script**: `tools/bootstrap_and_audit.sh` (creates the Python env, installs dependencies across workspaces, runs quick diagnostics).

## Dependency Bootstrap & Environment Debugging

To avoid mixed environments, use one Python environment and install frontend dependencies per app folder.

### 1) Python backend environment
```bash
cd advanced_cnc_copilot
python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r flask_service/requirements.txt
```

If you prefer conda:
```bash
conda env create -f environment.yml
conda activate fanuc-rise
```

### 2) Frontend dependencies
```bash
cd advanced_cnc_copilot/frontend-react && npm install
cd ../frontend-vue && npm install
```

### 3) Quick dependency diagnostics
```bash
python --version
pip check
npm --version
npm ls --depth=0
```

### 4) Runtime connectivity checks
```bash
# API health
curl -s http://localhost:8000/api/health

# Dashboard websocket (machine-scoped)
# ws://localhost:8000/ws/telemetry/CNC-001
```

If machine-scoped telemetry is unavailable in your current backend mode, the dashboard now falls back to the global stream endpoint (`/ws/telemetry`).

## How to Train the LLM with the Simulator (Practical Loop)

Use this loop to improve planning quality without risking hardware:

1. **Generate scenario batches**
   - Use simulator variants (normal, chatter, thermal drift, spindle stall) to create trajectories.
   - Save each run as `{input_intent, telemetry_trace, action_trace, outcome, safety_flags}`.

2. **Build supervised preference data**
   - For each scenario, keep:
     - `creator_proposal` (candidate plan)
     - `auditor_verdict` (pass/fail + rule trace)
     - `accountant_score` (time/cost impact)
   - Convert this into preference pairs (`good_plan`, `bad_plan`) for fine-tuning or ranking.
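The conversion in step 2 can be sketched as a pure function over scored scenario records. The record fields follow the list above; the pairing policy (pass beats fail, then higher economics score among passes) is one reasonable choice, not the only one:

```python
def build_preference_pairs(scenarios):
    """Turn scored scenario records into (good_plan, bad_plan) pairs.

    Each record holds `scenario_id`, `creator_proposal`, `auditor_verdict`
    ("pass"/"fail"), and `accountant_score`; pairs are formed within the
    same scenario so both plans answer the same intent."""
    by_scenario = {}
    for rec in scenarios:
        by_scenario.setdefault(rec["scenario_id"], []).append(rec)

    pairs = []
    for recs in by_scenario.values():
        passed = [r for r in recs if r["auditor_verdict"] == "pass"]
        failed = [r for r in recs if r["auditor_verdict"] != "pass"]
        # A passing plan is always preferred over a failing one.
        for good in passed:
            for bad in failed:
                pairs.append((good["creator_proposal"], bad["creator_proposal"]))
        # Among passing plans, higher economics score wins.
        ranked = sorted(passed, key=lambda r: r["accountant_score"], reverse=True)
        for better, worse in zip(ranked, ranked[1:]):
            pairs.append((better["creator_proposal"], worse["creator_proposal"]))
    return pairs
```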
3. **Train in phases**
   - **SFT phase**: train on accepted plans and corrective rewrites.
   - **Reward/ranking phase**: train a scorer on safety + economics labels.
   - **Policy improvement phase**: optimize for high reward under strict safety constraints.

4. **Gate with deterministic safety**
   - Keep the Physics Auditor and hard constraints outside the model.
   - Reject any proposal violating vibration/load/curvature bounds, even if model confidence is high.

5. **Deploy in shadow mode first**
   - Run the model in recommendation-only mode.
   - Compare proposed vs executed actions and measure regret/safety deltas before enabling active control.

### Suggested training metrics
- Safety violation rate
- Auditor rejection rate
- Cycle-time improvement
- Surface-finish proxy / quality score
- Recovery latency after an injected fault
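The two rate metrics above can be computed directly from episode records. This is a sketch: the field names (`safety_flags`, `auditor_verdict`) follow the scenario-record shape described earlier, not a fixed project schema:

```python
def training_metrics(episodes):
    """Compute headline training metrics from a batch of episode records."""
    n = len(episodes)
    if n == 0:
        raise ValueError("no episodes")
    # An episode counts as a violation if any safety flag was raised.
    violations = sum(1 for e in episodes if e.get("safety_flags"))
    rejections = sum(1 for e in episodes if e.get("auditor_verdict") == "fail")
    return {
        "safety_violation_rate": violations / n,
        "auditor_rejection_rate": rejections / n,
    }
```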
## Additional Critical Recommendations

- **Do not remove deterministic guardrails** when increasing model autonomy.
- **Version telemetry schemas** so training data stays compatible over time.
- **Record full reasoning traces** for post-incident audits.
- **Keep a simulator parity suite** that replays historical failure windows before each release.

## Usage Examples

### Starting the Application
advanced_cnc_copilot/SYSTEM_ARCHITECTURE.md

Lines changed: 57 additions & 0 deletions

@@ -93,3 +93,60 @@ graph TD

* Real-time telemetry (Load/Vibration) feeds the **Dopamine Engine**.
* Post-job quality (Vision) feeds the **Reinforcement Learning** model.
* The system updates its "Risk Tolerance" weights for the next cycle.
## Operational Debug Playbook (Dependencies + Runtime)

### Dependency layers
- **Python services**: FastAPI backend, orchestration core, simulator and HAL adapters.
- **Frontend services**: dashboard clients (React/Vue/HTML) consuming REST + WebSocket.
- **Data services**: Postgres/TimescaleDB and optional Redis.

### Minimal boot order
1. Install Python dependencies and activate the environment.
2. Install frontend dependencies in each UI package.
3. Start the backend API (`uvicorn backend.main:app --reload`).
4. Open the dashboard and verify `/api/health` + the websocket stream.

### Failure isolation checklist
- If REST works but live widgets do not update: inspect a WebSocket route mismatch first.
- If the machine-specific stream fails: verify `/ws/telemetry/{machine_id}`, then fall back to `/ws/telemetry`.
- If optimization endpoints fail: validate model/agent initialization and dependency imports before checking the UI.
## LLM Training Architecture on Simulator

```mermaid
graph LR
    A["Scenario Generator<br/>(normal/fault/thermal/chatter)"] --> B[Telemetry + Action Dataset]
    B --> C["SFT Dataset Builder<br/>(prompt -> plan)"]
    B --> D["Preference Dataset Builder<br/>(good vs bad plan)"]
    C --> E["Policy Model (Creator)"]
    D --> F[Reward/Rank Model]
    E --> G[Shadow Deployment]
    F --> G
    G --> H[Auditor + Physics Constraints]
    H --> I[Accepted Actions + Outcomes]
    I --> B
```

### Safety-first training contract
- The Creator model can **propose**, never unilaterally execute.
- The Auditor/physics layer remains deterministic and blocks out-of-bounds actions.
- Training data must include both successful and failed episodes to avoid optimism bias.

### Recommended dataset schema
- `intent_text`
- `machine_context` (tool, material, wear state)
- `telemetry_window` (time series)
- `proposed_action`
- `auditor_result` + `reasoning_trace`
- `execution_outcome` (cycle time, quality, fault/no-fault)
- `economic_score`
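The schema above can be pinned down as a typed record plus a validation helper, so malformed episodes never reach the dataset builders. The Python types chosen here are an assumption layered on the field list:

```python
from typing import List, TypedDict

class TrainingRecord(TypedDict):
    intent_text: str
    machine_context: dict          # tool, material, wear state
    telemetry_window: List[dict]   # time-series samples
    proposed_action: dict
    auditor_result: str            # "pass" / "fail"
    reasoning_trace: List[str]
    execution_outcome: dict        # cycle time, quality, fault/no-fault
    economic_score: float

REQUIRED_KEYS = set(TrainingRecord.__annotations__)

def validate_record(record):
    """Return the sorted list of missing keys (empty when the record is complete)."""
    return sorted(REQUIRED_KEYS - set(record))
```

Versioning this record (per the recommendation to version telemetry schemas) could be as simple as adding a `schema_version` field and migrating old episodes on export.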
### Deployment maturity gates
1. Offline simulator benchmark pass.
2. Shadow-mode acceptance thresholds met.
3. Controlled pilot on non-critical operations.
4. Progressive rollout with rollback triggers.

## Further Reading
- Operational and contract details: `docs/TECHNICAL_REFERENCE.md`.