- New `executive-architecture` template (6th infographic template) in `scripts/extract-infographic-data.py`; groups components into architectural layers via the existing `_compute_trust_zones()`, filters Critical/High findings, and selects one callout per layer
- Portrait JPEG output: `threat-executive-architecture.jpg` generated via the existing Gemini integration (no new API calls or dependencies)
- PDF integration via `scripts/extract-report-data.py` `detect_images()` and `templates/tachi/security-report/main.typ`: new page placed immediately after the Executive Summary (pages 2-3) using the existing `infographic-page()` Typst function -- no new template function
- Schema additions in `schemas/infographic.yaml` (executive-architecture template enum entry with section structure and visual directive constants)
- Command updates in `.claude/commands/tachi.infographic.md`: `exec` alias dispatch + inclusion in `all` shorthand expansion
- New reference doc: `.claude/skills/tachi-infographics/references/executive-architecture.md`
- **pytest bootstrap** (first-time addition of Python test infrastructure to tachi): new `pyproject.toml`, `requirements-dev.txt` (pytest>=8.0, pytest-cov>=4.1), `tests/` directory with `conftest.py` and `tests/scripts/` containing 6 test files covering 150+ tests across the extraction pipeline. `Makefile` gains a `test:` target. Developer-only -- runtime `scripts/*.py` remain stdlib-only per the zero-dependency constraint.
- New fixtures and golden files: `tests/scripts/fixtures/exec_arch/` (8 variations), `tests/scripts/fixtures/report_data/`, `tests/scripts/fixtures/golden/` (5 golden JSON files)
- Baseline PDFs for the backward-compatibility test: `examples/{web-app,microservices,ascii-web-api,mermaid-agentic-app,free-text-microservice}/security-report.pdf.baseline` (committed; use `SOURCE_DATE_EPOCH=1700000000` per ADR-021 for byte-deterministic comparison)
- New ADR-021: SOURCE_DATE_EPOCH for deterministic PDF comparison (reproducible-builds convention for the backward-compatibility test; production pipeline unchanged)
- Backward compatible: 5 example PDFs remain byte-identical without the new executive-architecture JPEG present; the 6th example (agentic-app) is intentionally regenerated as the feature demonstration

- **Feature 120**: Architecture Lifecycle Command
- Version tracking: `/tachi.architecture` adds YAML frontmatter (version, date, description, checksum, previous_version) to generated architecture files
- Archive mechanism: previous versions are archived to `{parent_dir}/.archive/v{N}/architecture.md` before updates; legacy files (no frontmatter) are archived as v0
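
The frontmatter written by `/tachi.architecture` takes roughly this shape -- the field names come from the version-tracking bullet above, but the values (and the checksum format) are illustrative:

```yaml
---
version: 2                                  # illustrative values throughout
date: 2026-04-10
description: Added event-bus layer to system design
checksum: 3f9c1a...                         # format illustrative
previous_version: 1
---
```

A file with no frontmatter at all is treated as legacy and archived as v0 before the first versioned write.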
### KB-026: Parallel Session Workflow Can Cross-Contaminate Uncommitted State
**Date**: 2026-04-10

**Category**: Process

**Source**: Feature 128 retrospective

**Severity**: Informational

**Problem**: Running two Claude Code sessions in parallel against the same working directory — one for a feature branch (F-128 executive infographic) and one for an unrelated bug fix — produced a mixed uncommitted state at delivery time. The F-128 delivery session encountered seven modified files and twenty untracked files belonging to the parallel bug-fix session, forcing a manual scope-filter step before the delivery commit could be made.
**Root Cause**: Claude Code sessions share the filesystem, not the git index. When two sessions edit files concurrently, `git status` reflects the union of both sessions' changes. Automated delivery workflows like `/aod.deliver` that "stage and commit all uncommitted changes" will misattribute the other session's work unless the operator intercepts and filters manually. The workflow assumes a single-session working directory.
**Solution**: When running parallel sessions, either (a) use separate worktrees via `git worktree add` so each session has its own index and working directory, or (b) explicitly list the files that belong to the current session's scope and stage only those at commit time. For F-128 delivery, the operator listed the three F-128 docs files (`docs/architecture/01_system_design/README.md`, `docs/product/02_PRD/INDEX.md`, `docs/product/_backlog/BACKLOG.md`) and explicitly left the seven parallel-session files and twenty untracked files untouched.
**Result**: F-128 delivery committed cleanly with only F-128 scope. The parallel bug-fix session's changes remained intact in the working directory for its own delivery path. No cross-contamination, no accidental commits, no lost work — but the delivery took longer because manual scope triage was required.
**When to Apply**: Any time two or more Claude Code sessions operate against the same clone simultaneously. Prefer `git worktree add ../tachi-bugfix <bug-branch>` for true isolation. If worktrees are not used, add a pre-delivery checklist step: "list all uncommitted files, classify each by feature scope, stage only current-feature files explicitly." The `/aod.deliver` command should consider adding scope-filter heuristics when it detects files outside the expected feature directories.
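
Both mitigations can be sketched in shell. The branch name and commit message are illustrative; the file paths are the F-128 scope listed in the Solution above:

```shell
# Option (a): true isolation -- each session gets its own index and working tree
git worktree add ../tachi-bugfix bugfix-branch    # branch name illustrative

# Option (b): scope-filtered delivery in a shared working directory
git status --porcelain                            # triage: classify every entry first
git add docs/architecture/01_system_design/README.md \
        docs/product/02_PRD/INDEX.md \
        docs/product/_backlog/BACKLOG.md          # current-feature files only
git commit -m "docs: F-128 delivery"              # message illustrative
```

With option (b), untracked and modified files outside the explicit `git add` list stay in the working directory for the other session's delivery path.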
`docs/architecture/00_Tech_Stack/README.md`:

These are tools used by the AOD Kit itself (not the adopter's application stack):

| `scripts/extract-report-data.py` | Deterministic extraction of structured data from tachi pipeline markdown artifacts (threats.md, risk-scores.md, compensating-controls.md, threat-report.md) into Typst data file (report-data.typ). Replaces LLM-based markdown parsing in the report-assembler agent. Supports 3-tier severity source hierarchy, validates internal consistency (severity sums, scope counts, unique finding IDs), and produces byte-identical output on identical inputs. MAESTRO data extraction (Feature 091): emits `has-maestro-data` boolean flag and per-layer finding variables for conditional `maestro-findings.typ` page inclusion. Baseline data extraction (Feature 104): emits baseline metadata variables (source, date, finding count, run ID) and delta lifecycle counts (new, unchanged, updated, resolved) from threats.md frontmatter and Section 8 Delta Summary; emits `has-baseline-data` boolean flag for conditional report section inclusion. Attack path extraction (Feature 112): `parse_attack_trees()` scans `attack-trees/` directory for Mermaid attack tree files, extracts metadata and cross-references findings; `render_mermaid_to_png()` converts Mermaid to PNG via `mmdc` subprocess (graceful fallback to raw text when `mmdc` unavailable); emits `has-attack-trees` boolean flag and structured attack tree array for conditional `attack-path.typ` page inclusion. Imports shared parsers from `tachi_parsers.py`. | Feature 067, refactored Feature 071, extended Feature 091, extended Feature 104, extended Feature 112 |
| `scripts/extract-infographic-data.py` | Deterministic extraction of structured infographic data (~1,100 lines) from tachi pipeline markdown artifacts into JSON data files for infographic templates (baseball-card, system-architecture, risk-funnel, maestro-stack, maestro-heatmap). Replaces LLM-based data extraction in the threat-infographic agent. Auto-detects richest data source (compensating-controls.md > risk-scores.md > threats.md), uses Largest Remainder Method for integer percentage rounding, and produces byte-identical JSON output on identical inputs. MAESTRO layer parsing extracts per-layer finding counts and severity distributions from `maestro_layer` field in source artifacts; gated by `has-maestro-data` flag (Feature 091). Baseline data extraction (Feature 104): extracts baseline metadata and delta lifecycle counts for infographic delta annotations; emits baseline fields in JSON output when baseline data is present. Imports shared parsers from `tachi_parsers.py`. | Feature 071, extended Feature 091, extended Feature 104 |
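
The Largest Remainder Method mentioned above can be sketched as follows; the function name and signature are illustrative, not the script's actual API:

```python
def largest_remainder_percentages(counts):
    """Round counts to integer percentages that sum to exactly 100."""
    total = sum(counts)
    if total == 0:
        return [0] * len(counts)
    exact = [c * 100 / total for c in counts]
    floors = [int(e) for e in exact]
    # hand the leftover points to the entries with the largest
    # fractional remainders (stable sort keeps output deterministic)
    leftover = 100 - sum(floors)
    by_remainder = sorted(range(len(counts)),
                          key=lambda i: exact[i] - floors[i], reverse=True)
    for i in by_remainder[:leftover]:
        floors[i] += 1
    return floors
```

For example, three equal severity counts round to `[34, 33, 33]` rather than three 33s summing to 99, which is why the byte-identical-output guarantee needs this rather than naive per-entry rounding.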
### Python Test Infrastructure (Feature 128)
**pytest 8.0+** (developer-only; not required by end users or the runtime pipeline)
- First-time addition of a Python test harness to tachi. Prior to Feature 128, `scripts/*.py` modules had no automated test coverage — ad-hoc manual verification only. Feature 128 bootstrapped the harness to cover the extraction pipeline (`extract-infographic-data.py`, `extract-report-data.py`, `tachi_parsers.py`) as part of the `executive-architecture` infographic template work.
- Why pytest (not `unittest`): fixture ergonomics, parametrized tests (one-to-many fixture coverage), rich assertion introspection, and `pytest-cov` coverage reporting — all standard choices for modern Python test suites. `unittest` would have required substantially more boilerplate for the same coverage.
- Why developer-only: runtime constraint from the Python Scripts section above (`scripts/*.py` must be stdlib-only, zero-dependency). The test harness lives outside runtime — adopters running `/tachi.threat-model`, `/tachi.security-report`, or `/tachi.infographic` do NOT install pytest. The harness is exclusively for tachi contributors verifying the extraction pipeline locally or in CI.
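
Per the pinned versions in the Feature 128 change list, the contributor-only dependency set is small -- `requirements-dev.txt` amounts to:

```
# requirements-dev.txt -- contributors only; runtime scripts stay stdlib-only
pytest>=8.0
pytest-cov>=4.1
```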
**Configuration files** (all added in Feature 128):
| File | Purpose |
|------|---------|
| `pyproject.toml` | Project config and `[tool.pytest.ini_options]` section; sets `testpaths = ["tests"]`, `python_files = ["test_*.py"]`, `addopts = "-ra --strict-markers"`. Non-disruptive to the existing `scripts/*.py` runtime (no runtime imports of `pyproject.toml`). |
```shell
python3 -m pytest tests/scripts/test_extract_infographic.py -v  # single module
python3 -m pytest tests/ --cov=scripts                          # with coverage
```

**Backward-compatibility harness** (Feature 128 Wave 4): `tests/scripts/test_backward_compatibility.py` is a parametrized test that compiles the 5 unmodified example projects through the full PDF pipeline and compares the output byte-for-byte against committed `examples/*/security-report.pdf.baseline` files. The test sets `SOURCE_DATE_EPOCH=1700000000` before `typst compile` to neutralize PDF metadata timestamps — see [ADR-021](../02_ADRs/ADR-021-source-date-epoch-for-deterministic-pdf-comparison.md) for the reproducible-builds rationale.
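
The determinism trick is just an environment pin before compilation plus a byte comparison; a minimal sketch (helper names are illustrative, not the test file's actual API):

```python
import os
from pathlib import Path

PINNED_EPOCH = "1700000000"  # fixed timestamp per ADR-021


def deterministic_env(base=None):
    """Environment for `typst compile` with PDF metadata timestamps pinned,
    so identical inputs yield byte-identical PDFs across runs."""
    env = dict(os.environ if base is None else base)
    env["SOURCE_DATE_EPOCH"] = PINNED_EPOCH
    return env


def matches_baseline(pdf: Path, baseline: Path) -> bool:
    """Byte-for-byte comparison against the committed baseline file."""
    return pdf.read_bytes() == baseline.read_bytes()
```

The parametrized test would pass `deterministic_env()` as the `env=` argument to the `subprocess` call that invokes `typst compile`, then assert `matches_baseline(...)` for each of the 5 example projects.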
### Shell Scripts
**Bash 3.2** (macOS default `/bin/bash`)
| `python3` | `scripts/extract-report-data.py` (invoked by report-assembler agent) | Deterministic markdown-to-Typst data extraction for security report pipeline; stdlib only, no pip dependencies (Feature 067) | Pre-installed on macOS; `apt-get install python3` (Linux) |
| `typst` | `/tachi.security-report` command (report-assembler agent) | PDF compilation from modular `.typ` templates; renders security assessment reports with brand assets, auto-generated TOC, and conditional page inclusion (Feature 054, extended Feature 060); MAESTRO Findings page conditionally included via `has-maestro-data` flag (Feature 091); Attack Path pages conditionally included via `has-attack-trees` flag (Feature 112) | `brew install typst` (macOS) / `cargo install typst-cli` / [typst.app](https://github.com/typst/typst/releases) |
| `mmdc` | `scripts/extract-report-data.py` (invoked by report-assembler agent, optional) | Mermaid CLI for rendering attack tree Mermaid diagrams to PNG images for PDF report embedding; graceful fallback to raw Mermaid text display when unavailable (Feature 112) | `npm install -g @mermaid-js/mermaid-cli` |
| `pytest` | `tests/scripts/*.py` (developer-only; not runtime) | Python test runner for the `scripts/*.py` extraction pipeline and `tachi_parsers.py` shared module; invoked via `make test` or `python3 -m pytest tests/` (Feature 128) | `pip install -r requirements-dev.txt` |
**Note**: `gh` degrades gracefully -- the orchestrator falls back to artifact-only detection when `gh` is unavailable or unauthenticated. Similarly, `scripts/init.sh` skips GitHub Projects board creation when `gh` is missing, unauthenticated, or lacks the `project` OAuth scope, reporting status in the init summary with remediation guidance.