    - Produces `consistency_report.json` in the task folder
    - **Fix any CRITICAL issues before generating the report**
    - Common issue: external study data (e.g., Gudrun paper) measuring different quantities than notebook calculations — these need clarification in the report, not "fixing"

18. **Run the report generator** to produce the engineering report (Word + HTML):

    ```
    Run in terminal: python step3_report/generate_report.py
    ```
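The consistency check and report generation can be chained into a small pre-report gate. A minimal sketch: the shape of `consistency_report.json` (an `issues` list whose entries carry a `severity` field) is an assumption about the checker's output, not its documented schema.

```python
import json
import subprocess
from pathlib import Path


def critical_issues(task_folder: str) -> list:
    """Return CRITICAL entries from consistency_report.json.

    The "issues"/"severity" field names are assumptions about the
    report schema, not documented behaviour of the checker.
    """
    report = json.loads(Path(task_folder, "consistency_report.json").read_text())
    return [i for i in report.get("issues", []) if i.get("severity") == "CRITICAL"]


def generate_report_if_consistent(task_folder: str) -> None:
    """Refuse to build the report while CRITICAL issues remain."""
    blockers = critical_issues(task_folder)
    if blockers:
        raise SystemExit(f"{len(blockers)} CRITICAL issue(s); fix before reporting")
    subprocess.run(["python", "step3_report/generate_report.py"], check=True)
```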
    - All formatting renders automatically when corresponding keys exist in
      `results.json` — no custom rendering code needed per task
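Because the template renders whatever keys it finds, a task notebook only needs to write a well-formed `results.json`. A sketch; apart from `benchmark_validation` (named in step 11), every key below is a hypothetical example, not a required schema.

```python
import json
from pathlib import Path

# Only "benchmark_validation" is named by these instructions; the other
# keys are hypothetical examples of per-task results.
results = {
    "capex_musd": 142.5,
    "benchmark_validation": {
        "reference": "NIST / published case data",
        "n_points": 3,
        "max_deviation_pct": 2.1,
    },
}
Path("results.json").write_text(json.dumps(results, indent=2))
```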
19. **Update the task README** (`README.md` in the task folder):
    - Fill in the Problem Statement
    - Check off completed steps
    - Write the Key Results section
### Phase 4: Knowledge Capture & Contribution
20. **Identify reusable outputs**:
    - If the notebook is generally useful → mention it could go to `examples/notebooks/`
    - If a NeqSim API gap was found → document it for future development
    - If a new pattern was discovered → note it for `CODE_PATTERNS.md`
21. **Fix and improve documentation** encountered during the task:
    - If you found **errors** in existing docs (wrong API signatures, outdated patterns, incorrect examples), fix them and include the fixes in the PR.
    - If you discovered **missing documentation** (undocumented classes, missing
      … when adding new doc pages.
    - Documentation fixes go in the **same PR** as the task outputs.
22. **Draft a task log entry** (but don't write to the file directly):

    ```
    ### YYYY-MM-DD — Task Title
    **Type:** X (TypeName)
    …
    ```

    Show this to the user for them to add to `docs/development/TASK_LOG.md`.
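Since the entry is plain text, it can be drafted programmatically and shown to the user for pasting into `TASK_LOG.md`. A sketch; the helper name and the argument values are hypothetical.

```python
from datetime import date


def draft_task_log_entry(title: str, type_code: str, type_name: str) -> str:
    """Render the task-log template above; all arguments are placeholders."""
    return (
        f"### {date.today().isoformat()} — {title}\n"
        f"**Type:** {type_code} ({type_name})\n"
    )


entry = draft_task_log_entry("Example task", "X", "TypeName")
```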
23. **Create a Pull Request** (if the user asks, or if reusable outputs were produced):

    When the task produces reusable code (tests, notebooks, docs, API extensions),
    offer to create a PR. If the user confirms, execute these steps:
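The actual PR checklist is not shown in this excerpt, so the sequence below is only an illustrative guess at a typical flow (branch, commit, push, open the PR via the GitHub CLI); the branch name, title, and body are placeholders.

```python
def pr_commands(branch: str, title: str, body: str) -> list:
    """Illustrative command sequence only; the real checklist in the
    instructions may differ. Assumes `gh` is installed and authenticated."""
    return [
        ["git", "checkout", "-b", branch],
        ["git", "add", "-A"],
        ["git", "commit", "-m", title],
        ["git", "push", "-u", "origin", branch],
        ["gh", "pr", "create", "--title", title, "--body", body],
    ]


cmds = pr_commands("task/2024-01-01-example", "Add task outputs", "See task README.")
```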
File: `.github/copilot-instructions.md` (7 additions, 1 deletion)
9. **For cost estimation:** Use component-level NeqSim classes (e.g., `SURFCostEstimator`, `SubseaCostEstimator`) instead of flat lump-sum estimates. Break down CAPEX into verifiable subcategories.
10. **Self-review before delivering:** Re-read all formulas, checking for sign errors, double-counting, wrong time indexing, and missing terms. Compare key outputs against industry benchmarks.
11. **Benchmark validation (MANDATORY):** Create a separate benchmark notebook (`XX_benchmark_validation.ipynb`) comparing NeqSim results against independent reference data (NIST, textbook examples, published cases, industry benchmarks). Include at least 3 data points and a parity/deviation plot, and save `benchmark_validation` results to `results.json`. Include the benchmark comparison in the final report.
12. **Consistency check (MANDATORY before report):** Run `python devtools/consistency_checker.py task_solve/YYYY-MM-DD_slug/` before generating reports. This tool:
    - Extracts numerical values from all notebooks and `results.json`
13. **Uncertainty analysis (MANDATORY):** Create a separate uncertainty notebook (`XX_uncertainty_risk_analysis.ipynb`) that:
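Step 11's parity check reduces to comparing model values against reference values point by point. A sketch of the deviation calculation feeding the `benchmark_validation` result (plotting omitted); the data points are invented placeholders, not real reference values.

```python
# Invented placeholder data: (label, reference value, NeqSim value).
points = [
    ("point A", 100.0, 101.5),
    ("point B", 250.0, 247.0),
    ("point C", 400.0, 404.0),
]

# Percent deviation of each model value from its reference.
benchmark_validation = {
    "n_points": len(points),
    "deviations_pct": {
        label: 100.0 * (model - ref) / ref for label, ref, model in points
    },
}
benchmark_validation["max_abs_deviation_pct"] = max(
    abs(v) for v in benchmark_validation["deviations_pct"].values()
)
```

A parity plot of the same `points` (reference on one axis, model on the other, with a y = x line) would complete the notebook requirement.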