diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..7a60b85
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+*.pyc
diff --git a/README.md b/README.md
index c4f86e0..2fb2cbe 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,50 @@
-# solid-meme
\ No newline at end of file
+# Secure Data Destruction USB Orchestrator
+
+This repository contains a bootable USB orchestrator for secure data destruction with forensic-grade evidence logging. It relies on trusted, external erasure utilities and focuses on orchestration, isolation, and reporting.
+
+## Goals
+- Bootable, locked-down Linux that auto-launches the orchestrator UI
+- Per-batch collection isolation on a Windows-readable (exFAT) logs partition
+- Hardware diagnostics captured as structured JSON before erasure
+- Handoff to trusted erasure tools (e.g., `nwipe`, vendor SSD/NVMe utilities)
+- Post-erase verification and immutable evidence records
+
+## Project Layout
+- `orchestrator/` – Python application modules (diagnostics, logging, UI shell)
+- `schemas/` – JSON Schemas for manifests and machine records
+- `docs/` – Build, boot, and operational guidance
+- `system/` – Systemd units and autologin configuration snippets for the live OS
+
+## Getting Started (development laptop)
+1) Build a minimal Linux rootfs or reuse an existing live image (e.g., Debian live base).
+2) Partition a 64GB USB stick:
+   - **Partition 1** (Linux, bootable): ext4, labeled `LINUX_OS`.
+   - **Partition 2** (logs): exFAT, labeled `EVIDENCE_LOGS`.
+3) Copy this repo onto Partition 1 under `/opt/orchestrator`.
+4) Install runtime dependencies into the live image:
+   - Python 3 (stdlib only), `tk`, `lsblk`, `lscpu`, `lspci`, `dmidecode`, `iproute2`
+   - Trusted erasure tools you plan to expose (e.g., `nwipe`, `nvme-cli`, `blkdiscard`)
+5) Install the provided systemd service and getty override from `system/` (see `docs/BOOT.md`).
+6) Boot from USB. The UI should launch fullscreen without offering a shell.
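+
+Step 2 can be sketched as follows. This is an untested illustration, not part of the repo's tooling: `/dev/sdX` is a placeholder you must replace with the actual USB device (check with `lsblk`), and the commands are destructive.
+
+```
+# WARNING: wipes /dev/sdX entirely. Verify the device node before running.
+DEV=/dev/sdX                                             # placeholder device
+sgdisk --zap-all "$DEV"
+sgdisk -n 1:0:+16G -t 1:8300 -c 1:LINUX_OS "$DEV"        # Linux OS partition
+sgdisk -n 2:0:0    -t 2:0700 -c 2:EVIDENCE_LOGS "$DEV"   # logs partition
+mkfs.ext4  -L LINUX_OS      "${DEV}1"
+mkfs.exfat -n EVIDENCE_LOGS "${DEV}2"
+```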
+
+## Safety Constraints
+- No custom overwrite algorithms; only orchestrates trusted tools.
+- Never auto-wipes: operator must confirm each run.
+- Refuses to proceed if the logs partition is missing or unwritable.
+- Collection switching is explicit; writing outside the active collection is treated as a hard failure.
+- No network assumed; all tooling must be present on the USB.
+
+## Running the orchestrator (test harness)
+```
+python -m orchestrator.main 2024-05-ACME nwipe OperatorName
+```
+This mounts the `EVIDENCE_LOGS` partition, creates a new collection folder, displays detected disks and summary diagnostics, and waits for confirmation before launching the selected erasure tool.
+
+## Documentation
+- `docs/BOOT.md` – USB layout, boot configuration, autologin, and service wiring
+- `docs/SCHEMAS.md` – JSON schema references and logging expectations
+- `docs/WORKFLOW.md` – Operator flow, failure handling, and evidence artifacts
+
+## Notes
+- The UI is intentionally minimal but locked to fullscreen to avoid shell escape.
+- Failure states are loud and block progress (missing tools, missing logs partition, detection failures).
diff --git a/docs/BOOT.md b/docs/BOOT.md
new file mode 100644
index 0000000..438efe9
--- /dev/null
+++ b/docs/BOOT.md
@@ -0,0 +1,32 @@
+# Boot & Autostart Configuration
+
+## USB Partition Layout (required)
+- **Partition 1 (Linux OS)**: ext4, label `LINUX_OS`, bootable, contains minimal rootfs and orchestrator app under `/opt/orchestrator`.
+- **Partition 2 (Logs)**: exFAT, label `EVIDENCE_LOGS`, Windows-readable. Only logs/evidence live here.
+
+## Kernel & Bootloader
+- Use a minimal, read-only oriented Linux (e.g., Debian live with overlay). Disable the GRUB menu and boot straight to the default entry.
+- Kernel params to consider: `quiet splash`, `net.ifnames=0`, `rd.systemd.show_status=0` for a clean boot; disable networking via a `systemd` unit (see below).
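+
+On a Debian-based image, the bullets above correspond roughly to this `/etc/default/grub` fragment (a sketch; adjust for your bootloader setup, then run `update-grub`):
+
+```
+GRUB_TIMEOUT=0
+GRUB_TIMEOUT_STYLE=hidden
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash net.ifnames=0 rd.systemd.show_status=0"
+```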
+
+## Autologin & Shell Lockdown
+- Configure a `getty@tty1.service.d` override to auto-login a dedicated user (e.g., `operator`) without a shell prompt. See `system/getty@tty1.service.d/override.conf`.
+- The user's shell should be `/usr/sbin/nologin` or a wrapper that immediately starts the orchestrator.
+- Disable TTY switching if possible (e.g., with `chvt` restrictions) and remove additional virtual consoles.
+
+## Orchestrator Autostart
+- Install `system/orchestrator.service` to `/etc/systemd/system/`.
+- Enable at build time: `systemctl enable orchestrator.service`.
+- The service runs `/usr/bin/python3 -m orchestrator.main <collection_id> <erase_tool> [operator]`. For unattended boot, wrap this in `/usr/local/bin/orchestrator-launch` that prompts for collection/operator via Tk.
+
+## Network Hardening
+- Disable networking by default: mask `NetworkManager.service` or `systemd-networkd.service` unless explicitly needed.
+- Remove wireless supplicants. No package managers should be exposed in the UI.
+
+## Logs Partition Mount
+- The service uses the label `EVIDENCE_LOGS` and mounts it to `/mnt/evidence_logs` (configurable in `config.py`).
+- If missing or unwritable, the UI will surface a blocking error and refuse to continue.
+
+## Image Build Notes
+- Bake all required binaries into the initramfs/rootfs: `python3`, `tk`, `lsblk`, `lscpu`, `lspci`, `dmidecode`, `ip`, `upower`, and the erasure tools you plan to support (`nwipe`, `nvme`, `blkdiscard`).
+- Keep the OS read-only where practical; store transient state under `/run/orchestrator`.
+- Verify `EVIDENCE_LOGS` survives reboots and is still readable on Windows.
diff --git a/docs/SCHEMAS.md b/docs/SCHEMAS.md
new file mode 100644
index 0000000..42aec73
--- /dev/null
+++ b/docs/SCHEMAS.md
@@ -0,0 +1,10 @@
+# JSON Schemas
+
+## Collection Manifest (`schemas/collection_manifest.schema.json`)
+- Captures collection metadata and per-machine summary entries.
+- Enforced on write to catch malformed updates.
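+
+For reference, a minimal manifest that conforms to this schema (all values illustrative):
+
+```
+{
+  "collection_id": "2024-05-ACME",
+  "version": "0.1.0",
+  "machines": [
+    {
+      "machine_id": "2024-05-ACME-0001",
+      "result": "PASS",
+      "start_ts": "2024-05-01T10:00:00Z",
+      "end_ts": "2024-05-01T10:41:12Z",
+      "erase_tool": "nwipe"
+    }
+  ]
+}
+```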
+
+## Machine Record (`schemas/machine_record.schema.json`)
+- Immutable per-machine evidence file containing diagnostics, storage inventory, erasure tool used, timestamps, and result.
+
+Both schemas are referenced in `orchestrator/config.py` for validation or offline tooling. They are kept alongside the application to avoid network lookups.
diff --git a/docs/WORKFLOW.md b/docs/WORKFLOW.md
new file mode 100644
index 0000000..bdb8257
--- /dev/null
+++ b/docs/WORKFLOW.md
@@ -0,0 +1,33 @@
+# Operator Workflow
+
+## Collection setup (once per batch)
+1. Operator enters a Collection ID (or accepts the auto-suggested `YYYY-MM-DD_CLIENT`) and an optional operator name.
+2. Application mounts `EVIDENCE_LOGS` to `/mnt/evidence_logs` and creates:
+   - `Collections/<collection_id>/manifest.json`
+   - `Collections/<collection_id>/summary.csv`
+   - `Collections/<collection_id>/machines/`
+3. A lock file (`.collection.lock`) is placed to prevent accidental mixing.
+
+If any step fails (partition missing, unwritable, existing collection), the UI blocks and surfaces the error.
+
+## Per-machine flow
+1. Detect storage devices via `lsblk -O -J`; render them for operator review.
+2. Run diagnostics (CPU via `lscpu`, GPU via `lspci`, network MACs via `ip -json link`, system info via `dmidecode`, battery via `upower`, boot mode detection).
+3. Store diagnostics in memory; show a summarized view in the UI.
+4. Operator must click **Confirm erase**. No auto-wipe.
+5. Orchestrator launches the selected trusted erasure tool (e.g., `nwipe`, `blkdiscard`, `nvme format`) with the chosen devices.
+6. On return, post-erase verification probes each device for remaining filesystems/partitions.
+7. A machine record JSON is written and a `summary.csv` row appended. Results: `PASS`, `FAIL`, or `OPERATOR_ABORTED`.
+
+## Evidence artifacts
+- **`manifest.json`**: collection metadata and per-machine summary entries.
+- **`machines/<machine_id>.json`**: detailed record with diagnostics, storage inventory, tool used, timestamps, result, optional operator/notes.
+- **`summary.csv`**: quick view for client delivery.
+
+## Failure handling (hard stops)
+- Logs partition missing or not exFAT.
+- Collection directory cannot be created or lock is missing (potential mixing risk).
+- Required diagnostic binaries missing (`lsblk`, `lscpu`, `lspci`, `dmidecode`, `ip`).
+- No internal storage detected.
+
+Failures are both displayed and logged; the operator must resolve them before proceeding.
diff --git a/orchestrator/__init__.py b/orchestrator/__init__.py
new file mode 100644
index 0000000..a4395f7
--- /dev/null
+++ b/orchestrator/__init__.py
@@ -0,0 +1 @@
+"""Secure data destruction orchestrator package."""
diff --git a/orchestrator/collections.py b/orchestrator/collections.py
new file mode 100644
index 0000000..eaed637
--- /dev/null
+++ b/orchestrator/collections.py
@@ -0,0 +1,110 @@
+"""Collection isolation and lifecycle management."""
+from __future__ import annotations
+
+import json
+import os
+from dataclasses import dataclass
+from pathlib import Path
+
+from . import config
+from .logger import CollectionLogger, SummaryWriter
+
+
+class CollectionError(RuntimeError):
+    """Raised when collection lifecycle fails."""
+
+
+@dataclass
+class CollectionContext:
+    collection_id: str
+    root: Path
+    manifest_path: Path
+    summary_path: Path
+    machines_dir: Path
+    lock_file: Path
+
+
+class CollectionManager:
+    """Creates and validates isolated collections within the logs partition."""
+
+    def __init__(self, logs_root: Path) -> None:
+        self.logs_root = logs_root
+
+    def _ensure_partition(self) -> Path:
+        if not self.logs_root.exists() or not self.logs_root.is_dir():
+            raise CollectionError("Logs partition is not mounted or missing")
+        if not os.access(self.logs_root, os.W_OK):
+            raise CollectionError("Logs location is not writable")
+        return self.logs_root
+
+    def create_collection(self, collection_id: str) -> CollectionContext:
+        base = self._ensure_partition() / config.COLLECTIONS_DIRNAME
+        base.mkdir(parents=True, exist_ok=True)
+        collection_dir = base / collection_id
+        if collection_dir.exists():
+            raise CollectionError(f"Collection '{collection_id}' already exists; refusing to mix data")
+        collection_dir.mkdir(parents=True, exist_ok=False)
+        machines_dir = collection_dir / config.MACHINES_DIRNAME
+        machines_dir.mkdir(parents=True, exist_ok=True)
+
+        manifest_path = collection_dir / config.MANIFEST_FILENAME
+        summary_path = collection_dir / config.SUMMARY_FILENAME
+        lock_file = collection_dir / config.LOCK_FILENAME
+        lock_file.write_text("active", encoding="utf-8")
+
+        # Seed the manifest and summary so later appends always have a base file.
+        manifest = {
+            "collection_id": collection_id,
+            "version": config.APP_VERSION,
+            "machines": [],
+        }
+        manifest_path.write_text(json.dumps(manifest, indent=2), encoding="utf-8")
+        summary_path.write_text("machine_id,device_count,result,start_ts,end_ts\n", encoding="utf-8")
+
+        return CollectionContext(
+            collection_id=collection_id,
+            root=collection_dir,
+            manifest_path=manifest_path,
+            summary_path=summary_path,
+            machines_dir=machines_dir,
+            lock_file=lock_file,
+        )
+
+    def load_collection(self, collection_id: str) -> CollectionContext:
+        base = self._ensure_partition() / config.COLLECTIONS_DIRNAME
+        collection_dir = base / collection_id
+        if not collection_dir.exists():
+            raise CollectionError(f"Collection '{collection_id}' not found")
+        if not (collection_dir / config.LOCK_FILENAME).exists():
+            raise CollectionError("Collection lock missing; aborting to avoid mixing")
+
+        return CollectionContext(
+            collection_id=collection_id,
+            root=collection_dir,
+            manifest_path=collection_dir / config.MANIFEST_FILENAME,
+            summary_path=collection_dir / config.SUMMARY_FILENAME,
+            machines_dir=collection_dir / config.MACHINES_DIRNAME,
+            lock_file=collection_dir / config.LOCK_FILENAME,
+        )
+
+    def close_collection(self, context: CollectionContext) -> None:
+        if context.lock_file.exists():
+            context.lock_file.unlink()
+
+    def new_machine_logger(self, context: CollectionContext, machine_id: str) -> tuple[CollectionLogger, SummaryWriter]:
+        machine_json = context.machines_dir / f"{machine_id}.json"
+        return CollectionLogger(context.manifest_path, machine_json), SummaryWriter(context.summary_path)
+
+
+def next_sequence(collection_dir: Path) -> str:
+    """Return the next zero-padded sequence number for machine records."""
+    existing = sorted(collection_dir.glob("*.json"))
+    if not existing:
+        return "0001"
+    # Machine ids end in "-NNNN"; fall back to the file count if parsing fails.
+    last = existing[-1].stem.rsplit("-", 1)[-1]
+    try:
+        value = int(last)
+    except ValueError:
+        value = len(existing)
+    return f"{value + 1:04d}"
diff --git a/orchestrator/config.py b/orchestrator/config.py
new file mode 100644
index 0000000..e691f49
--- /dev/null
+++ b/orchestrator/config.py
@@ -0,0 +1,65 @@
+"""Configuration constants for the orchestrator."""
+from __future__ import annotations
+
+from pathlib import Path
+
+
+# Partition labels
+LOGS_PARTITION_LABEL = "EVIDENCE_LOGS"
+
+# Directory structure within the logs partition
+COLLECTIONS_DIRNAME = "Collections"
+MANIFEST_FILENAME = "manifest.json"
+SUMMARY_FILENAME = "summary.csv"
+MACHINES_DIRNAME = "machines"
+LOCK_FILENAME = ".collection.lock"
+
+# UI defaults
+UI_TITLE = "Secure Erasure Orchestrator"
+UI_BG = "#101010"
+UI_FG = "#e8e8e8"
+UI_WARN = "#ff8800"
+UI_ERROR = "#ff3b30"
+UI_SUCCESS = "#45d483"
+UI_FONT = ("Roboto", 14)
+
+# Diagnostics commands (must exist in the rootfs; matches docs/WORKFLOW.md)
+REQUIRED_BINARIES = [
+    "lsblk",
+    "lscpu",
+    "lspci",
+    "dmidecode",
+    "ip",
+]
+
+# Paths used when running inside the live USB OS
+MOUNT_BASE = Path("/mnt")
+DEFAULT_LOGS_MOUNTPOINT = MOUNT_BASE / "evidence_logs"
+DEFAULT_RUNTIME_STATE = Path("/run/orchestrator")
+
+# Erasure tools we are allowed to invoke (must be installed separately)
+SUPPORTED_ERASURE_TOOLS = {
+    "nwipe": {
+        "display": "nwipe (DoD 5220.22-M, verify)",
+        "command": ["/usr/bin/nwipe"],
+        "requires_tty": True,
+    },
+    "blkdiscard": {
+        "display": "blkdiscard (SSD/NVMe secure discard)",
+        "command": ["/usr/bin/blkdiscard", "--force"],
+        "requires_tty": False,
+    },
+    "nvme_format": {
+        "display": "nvme format (secure erase)",
+        "command": ["/usr/bin/nvme", "format", "--ses=1"],
+        "requires_tty": False,
+    },
+}
+
+# JSON schema locations (relative to project root)
+SCHEMAS_ROOT = Path(__file__).resolve().parent.parent / "schemas"
+COLLECTION_MANIFEST_SCHEMA = SCHEMAS_ROOT / "collection_manifest.schema.json"
+MACHINE_RECORD_SCHEMA = SCHEMAS_ROOT / "machine_record.schema.json"
+
+# Misc
+APP_VERSION = "0.1.0"
diff --git a/orchestrator/diagnostics.py b/orchestrator/diagnostics.py
new file mode 100644
index 0000000..075a9b0
--- /dev/null
+++ b/orchestrator/diagnostics.py
@@ -0,0 +1,133 @@
+"""Hardware diagnostics routines."""
+from __future__ import annotations
+
+import json
+import shutil
+import subprocess
+from pathlib import Path
+from typing import Any, Dict, List
+
+from . import config
+
+
+class DiagnosticError(RuntimeError):
+    """Raised when diagnostics cannot be collected."""
+
+
+def require_binaries() -> None:
+    missing = [cmd for cmd in config.REQUIRED_BINARIES if shutil.which(cmd) is None]
+    if missing:
+        raise DiagnosticError(f"Missing required diagnostic tools: {', '.join(missing)}")
+
+
+def _run_json(command: List[str]) -> Any:
+    result = subprocess.run(command, check=True, capture_output=True, text=True)
+    return json.loads(result.stdout)
+
+
+def collect_cpu() -> Dict[str, Any]:
+    # `lscpu --json` emits {"lscpu": [{"field": ..., "data": ..., "children": [...]}]};
+    # flatten the tree and index entries by field name.
+    parsed = _run_json(["lscpu", "--json"])
+    fields: Dict[str, Any] = {}
+
+    def _walk(entries: List[Dict[str, Any]]) -> None:
+        for entry in entries:
+            fields[str(entry.get("field", "")).rstrip(":")] = entry.get("data")
+            _walk(entry.get("children", []))
+
+    _walk(parsed.get("lscpu", []))
+    return {
+        "model": fields.get("Model name", "unknown"),
+        "cores": fields.get("CPU(s)", "unknown"),
+        "raw": parsed,
+    }
+
+
+def collect_storage() -> List[Dict[str, Any]]:
+    # -b reports sizes in bytes, matching the "size_bytes" key below.
+    lsblk_data = _run_json(["lsblk", "-b", "-O", "-J"])
+    devices: List[Dict[str, Any]] = []
+    for block in lsblk_data.get("blockdevices", []):
+        if block.get("type") != "disk":
+            continue
+        devices.append(
+            {
+                "name": block.get("name"),
+                "model": block.get("model"),
+                "serial": block.get("serial"),
+                "size_bytes": block.get("size"),
+                "rota": block.get("rota"),
+                "tran": block.get("tran"),
+                "type": _infer_storage_type(block),
+            }
+        )
+    return devices
+
+
+def _infer_storage_type(block: Dict[str, Any]) -> str:
+    tran = block.get("tran")
+    rota = block.get("rota")
+    if tran == "nvme":
+        return "NVMe"
+    if tran in {"sata", "ata"} and rota is False:
+        return "SATA SSD"
+    if rota:
+        return "HDD"
+    return "unknown"
+
+
+def collect_network() -> List[Dict[str, Any]]:
+    try:
+        output = subprocess.run(
+            ["ip", "-json", "link"], check=True, capture_output=True, text=True
+        ).stdout
+        parsed = json.loads(output)
+    except FileNotFoundError:
+        parsed = []
+    devices: List[Dict[str, Any]] = []
+    for iface in parsed:
+        devices.append(
+            {
+                "name": iface.get("ifname"),
+                "mac": iface.get("address"),
+                "operstate": iface.get("operstate"),
+            }
+        )
+    return devices
+
+
+def collect_system_info() -> Dict[str, Any]:
+    try:
+        dmidecode = subprocess.run(
+            ["dmidecode", "-t", "system"], check=True, capture_output=True, text=True
+        ).stdout
+    except subprocess.CalledProcessError as exc:
+        raise DiagnosticError("dmidecode failed") from exc
+    return {"raw": dmidecode}
+
+
+def collect_gpu() -> List[str]:
+    try:
+        lspci = subprocess.run(["lspci"], check=True, capture_output=True, text=True).stdout
+    except Exception:
+        return []
+    return [line for line in lspci.splitlines() if "VGA" in line or "3D" in line]
+
+
+def collect_battery() -> Dict[str, Any]:
+    try:
+        power_supplies = subprocess.run(
+            ["upower", "-e"], check=False, capture_output=True, text=True
+        ).stdout.splitlines()
+    except FileNotFoundError:
+        # upower is optional; report "no battery" instead of failing diagnostics.
+        return {"present": False, "devices": []}
+    batteries = [p for p in power_supplies if "BAT" in p]
+    return {"present": bool(batteries), "devices": batteries}
+
+
+def collect_boot_mode() -> str:
+    # /sys/firmware/efi is a directory that exists only when booted via UEFI;
+    # open() on it would raise IsADirectoryError, so test existence instead.
+    return "UEFI" if Path("/sys/firmware/efi").exists() else "Legacy BIOS"
+
+
+def gather() -> Dict[str, Any]:
+    require_binaries()
+    return {
+        "cpu": collect_cpu(),
+        "network": collect_network(),
+        "storage": collect_storage(),
+        "gpu": collect_gpu(),
+        "system": collect_system_info(),
+        "battery": collect_battery(),
+        "boot_mode": collect_boot_mode(),
+    }
diff --git a/orchestrator/erasure.py b/orchestrator/erasure.py
new file mode 100644
index 0000000..b9a4868
--- /dev/null
+++ b/orchestrator/erasure.py
@@ -0,0 +1,35 @@
+"""Handoff to trusted erasure utilities."""
+from __future__ import annotations
+
+import os
+import subprocess
+from typing import List
+
+from . import config
+
+
+class ErasureError(RuntimeError):
+    """Raised when erasure handoff fails."""
+
+
+def available_tools() -> dict:
+    return {name: data for name, data in config.SUPPORTED_ERASURE_TOOLS.items() if _exists(data)}
+
+
+def _exists(tool: dict) -> bool:
+    # Check the binary directly instead of shelling out to `bash -lc test -x`.
+    return os.access(tool["command"][0], os.X_OK)
+
+
+def launch(tool_name: str, devices: List[str]) -> subprocess.Popen:
+    if tool_name not in config.SUPPORTED_ERASURE_TOOLS:
+        raise ErasureError(f"Unsupported tool: {tool_name}")
+    tool = config.SUPPORTED_ERASURE_TOOLS[tool_name]
+    if not _exists(tool):
+        raise ErasureError(f"Tool {tool_name} not installed")
+
+    command = tool["command"] + devices
+    try:
+        proc = subprocess.Popen(command)
+    except OSError as exc:
+        raise ErasureError(f"Unable to start {tool_name}") from exc
+    return proc
diff --git a/orchestrator/logger.py b/orchestrator/logger.py
new file mode 100644
index 0000000..3348860
--- /dev/null
+++ b/orchestrator/logger.py
@@ -0,0 +1,88 @@
+"""Evidence logging helpers."""
+from __future__ import annotations
+
+import csv
+import json
+from dataclasses import dataclass
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any, Dict, List
+
+
+class LoggingError(RuntimeError):
+    """Raised when evidence logging fails."""
+
+
+def utcnow() -> str:
+    # Timezone-aware replacement for the deprecated datetime.utcnow().
+    return datetime.now(timezone.utc).isoformat(timespec="seconds").replace("+00:00", "Z")
+
+
+@dataclass
+class MachineRecord:
+    collection_id: str
+    machine_id: str
+    start_ts: str
+    end_ts: str
+    diagnostics: Dict[str, Any]
+    storage: List[Dict[str, Any]]
+    erase_tool: str
+    result: str
+    operator: str | None = None
+    notes: str | None = None
+
+    def as_dict(self) -> Dict[str, Any]:
+        data = {
+            "collection_id": self.collection_id,
+            "machine_id": self.machine_id,
+            "start_ts": self.start_ts,
+            "end_ts": self.end_ts,
+            "diagnostics": self.diagnostics,
+            "storage": self.storage,
+            "erase_tool": self.erase_tool,
+            "result": self.result,
+        }
+        if self.operator:
+            data["operator"] = self.operator
+        if self.notes:
+            data["notes"] = self.notes
+        return data
+
+
+class CollectionLogger:
+    def __init__(self, manifest_path: Path, machine_json: Path) -> None:
+        self.manifest_path = manifest_path
+        self.machine_json = machine_json
+
+    def write_machine(self, record: MachineRecord) -> None:
+        payload = record.as_dict()
+        self.machine_json.write_text(json.dumps(payload, indent=2), encoding="utf-8")
+
+        manifest_data = json.loads(self.manifest_path.read_text(encoding="utf-8"))
+        manifest_data.setdefault("machines", []).append(
+            {
+                "machine_id": record.machine_id,
+                "result": record.result,
+                "start_ts": record.start_ts,
+                "end_ts": record.end_ts,
+                "erase_tool": record.erase_tool,
+            }
+        )
+        self.manifest_path.write_text(json.dumps(manifest_data, indent=2), encoding="utf-8")
+
+
+class SummaryWriter:
+    def __init__(self, summary_path: Path) -> None:
+        self.summary_path = summary_path
+
+    def append(self, record: MachineRecord) -> None:
+        with self.summary_path.open("a", newline="", encoding="utf-8") as fp:
+            writer = csv.writer(fp)
+            writer.writerow(
+                [
+                    record.machine_id,
+                    len(record.storage),
+                    record.result,
+                    record.start_ts,
+                    record.end_ts,
+                ]
+            )
diff --git a/orchestrator/main.py b/orchestrator/main.py
new file mode 100644
index 0000000..13c3daa
--- /dev/null
+++ b/orchestrator/main.py
@@ -0,0 +1,138 @@
+"""Entrypoint wiring the orchestrator modules together."""
+from __future__ import annotations
+
+import sys
+import tkinter as tk
+from dataclasses import dataclass
+from pathlib import Path
+
+from .collections import CollectionError, CollectionManager, next_sequence
+from .diagnostics import DiagnosticError, gather
+from .erasure import ErasureError, available_tools, launch
+from .logger import MachineRecord, utcnow
+from .storage import StorageError, mount_logs, verify_post_erase
+from .ui import MachineViewModel, OrchestratorUI
+
+
+@dataclass
+class SessionState:
+    collection_id: str
+    operator: str | None
+    logs_path: Path
+
+
+def setup_collection(collection_id: str, operator: str | None) -> tuple[SessionState, CollectionManager]:
+    logs_root = mount_logs()
+    manager = CollectionManager(logs_root)
+    manager.create_collection(collection_id)
+    return SessionState(collection_id, operator, logs_root), manager
+
+
+def run_machine(state: SessionState, manager: CollectionManager, tool_name: str) -> None:
+    try:
+        diagnostics = gather()
+        storage_devices = [f"/dev/{d['name']}" for d in diagnostics["storage"]]
+        if not storage_devices:
+            raise DiagnosticError("No internal storage detected")
+    except DiagnosticError as exc:
+        raise SystemExit(str(exc))
+
+    ctx = manager.load_collection(state.collection_id)
+    sequence = next_sequence(ctx.machines_dir)
+    machine_id = f"{state.collection_id.split('_')[0]}-{sequence}"
+    machine_logger, summary_writer = manager.new_machine_logger(ctx, machine_id)
+
+    root = tk.Tk()
+    ui = OrchestratorUI(
+        root,
+        on_confirm=lambda: _confirm(root),
+        on_abort=lambda: _abort(root),
+    )
+    ui.display_machine(
+        MachineViewModel(
+            collection_id=state.collection_id,
+            operator=state.operator,
+            storage_devices=storage_devices,
+            diagnostics={
+                "cpu": diagnostics.get("cpu", {}).get("model"),
+                "boot_mode": diagnostics.get("boot_mode"),
+            },
+        )
+    )
+    ui.set_status("Review devices and confirm to start trusted erasure")
+
+    # The UI loop blocks until the operator confirms or aborts.
+    root._erasure_confirmed = False  # type: ignore[attr-defined]
+    root.mainloop()
+
+    result = "OPERATOR_ABORTED"
+    start_ts = utcnow()
+    end_ts = utcnow()
+
+    if root._erasure_confirmed:  # type: ignore[attr-defined]
+        start_ts = utcnow()
+        process = _launch_erase(tool_name, storage_devices)
+        process.wait()
+        leftovers = verify_post_erase(storage_devices)
+        end_ts = utcnow()
+        result = "PASS" if not leftovers and process.returncode == 0 else "FAIL"
+
+    record = MachineRecord(
+        collection_id=state.collection_id,
+        machine_id=machine_id,
+        start_ts=start_ts,
+        end_ts=end_ts,
+        diagnostics=diagnostics,
+        storage=diagnostics["storage"],
+        erase_tool=tool_name,
+        result=result,
+        operator=state.operator,
+    )
+
+    machine_logger.write_machine(record)
+    summary_writer.append(record)
+
+
+def _launch_erase(tool_name: str, devices: list[str]):
+    try:
+        return launch(tool_name, devices)
+    except ErasureError as exc:
+        raise SystemExit(str(exc))
+
+
+def _confirm(root: tk.Tk) -> None:
+    root._erasure_confirmed = True  # type: ignore[attr-defined]
+    root.destroy()
+
+
+def _abort(root: tk.Tk) -> None:
+    root._erasure_confirmed = False  # type: ignore[attr-defined]
+    root.destroy()
+
+
+def main(argv: list[str] | None = None) -> int:
+    argv = sys.argv[1:] if argv is None else argv
+    if len(argv) < 2:
+        print("Usage: orchestrator <collection_id> <erase_tool> [operator]", file=sys.stderr)
+        return 2
+
+    collection_id, tool_name = argv[0], argv[1]
+    operator = argv[2] if len(argv) > 2 else None
+
+    tools = available_tools()
+    if tool_name not in tools:
+        print(f"Unsupported erasure tool: {tool_name}", file=sys.stderr)
+        print(f"Available: {', '.join(tools)}", file=sys.stderr)
+        return 2
+
+    try:
+        state, manager = setup_collection(collection_id, operator)
+    except (StorageError, CollectionError) as exc:
+        print(f"Fatal: {exc}", file=sys.stderr)
+        return 1
+
+    run_machine(state, manager, tool_name)
+    return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
diff --git a/orchestrator/storage.py b/orchestrator/storage.py
new file mode 100644
index 0000000..163971e
--- /dev/null
+++ b/orchestrator/storage.py
@@ -0,0 +1,52 @@
+"""Storage helpers for logs partition handling and post-erase verification."""
+from __future__ import annotations
+
+import subprocess
+from pathlib import Path
+from typing import List
+
+from . import config
+
+
+class StorageError(RuntimeError):
+    """Raised when storage operations fail."""
+
+
+def find_logs_device() -> str:
+    try:
+        output = subprocess.run(
+            ["lsblk", "-rpo", "NAME,LABEL"], check=True, capture_output=True, text=True
+        ).stdout
+    except subprocess.CalledProcessError as exc:
+        raise StorageError("Unable to enumerate block devices") from exc
+
+    for line in output.splitlines():
+        if config.LOGS_PARTITION_LABEL in line:
+            return line.split()[0]
+    raise StorageError("Logs partition not found; label missing")
+
+
+def mount_logs(target: Path = config.DEFAULT_LOGS_MOUNTPOINT) -> Path:
+    target.mkdir(parents=True, exist_ok=True)
+    device = find_logs_device()
+    try:
+        subprocess.run(["mount", "-t", "exfat", device, str(target)], check=True)
+    except subprocess.CalledProcessError as exc:
+        raise StorageError("Failed to mount logs partition (expected exFAT)") from exc
+    return target
+
+
+def verify_post_erase(devices: List[str]) -> List[str]:
+    """Return devices that still show partition tables or filesystems."""
+    still_present: List[str] = []
+    for dev in devices:
+        try:
+            # PTTYPE also catches a leftover partition table with no filesystems.
+            probe = subprocess.run(
+                ["lsblk", "-no", "FSTYPE,PTTYPE", dev], check=True, capture_output=True, text=True
+            ).stdout.strip()
+        except subprocess.CalledProcessError:
+            still_present.append(dev)
+            continue
+        if probe:
+            still_present.append(dev)
+    return still_present
diff --git a/orchestrator/ui.py b/orchestrator/ui.py
new file mode 100644
index 0000000..d3e1201
--- /dev/null
+++ b/orchestrator/ui.py
@@ -0,0 +1,115 @@
+"""Tkinter-based fullscreen UI shell.
+
+This UI is intentionally minimal but enforces the operator flow:
+- choose collection
+- review diagnostics
+- confirm erasure
+- hand off to erasure tool
+- log results
+"""
+from __future__ import annotations
+
+import tkinter as tk
+from dataclasses import dataclass
+from typing import Callable, List
+
+from . import config
+
+
+@dataclass
+class MachineViewModel:
+    collection_id: str
+    operator: str | None
+    storage_devices: List[str]
+    diagnostics: dict
+
+
+class OrchestratorUI:
+    def __init__(self, root: tk.Tk, on_confirm: Callable[[], None], on_abort: Callable[[], None]):
+        self.root = root
+        self.on_confirm = on_confirm
+        self.on_abort = on_abort
+        self.status_var = tk.StringVar(value="Waiting for collection setup")
+        self._build()
+
+    def _build(self) -> None:
+        self.root.title(config.UI_TITLE)
+        self.root.configure(bg=config.UI_BG)
+        self.root.attributes("-fullscreen", True)
+        # Swallow Escape so fullscreen cannot be exited into a shell.
+        self.root.bind("<Escape>", lambda *_: "break")
+
+        frame = tk.Frame(self.root, bg=config.UI_BG, padx=32, pady=32)
+        frame.pack(fill=tk.BOTH, expand=True)
+
+        self.title = tk.Label(
+            frame,
+            text=config.UI_TITLE,
+            fg=config.UI_FG,
+            bg=config.UI_BG,
+            font=(config.UI_FONT[0], 26, "bold"),
+        )
+        self.title.pack(anchor="w")
+
+        self.status = tk.Label(
+            frame,
+            textvariable=self.status_var,
+            fg=config.UI_WARN,
+            bg=config.UI_BG,
+            font=config.UI_FONT,
+        )
+        self.status.pack(anchor="w", pady=(12, 18))
+
+        self.detail = tk.Text(frame, height=20, bg="#0f0f0f", fg=config.UI_FG, relief=tk.FLAT)
+        self.detail.configure(state="disabled")
+        self.detail.pack(fill=tk.BOTH, expand=True)
+
+        btn_frame = tk.Frame(frame, bg=config.UI_BG)
+        btn_frame.pack(fill=tk.X, pady=(20, 0))
+
+        confirm = tk.Button(
+            btn_frame,
+            text="Confirm erase",
+            bg=config.UI_WARN,
+            fg="black",
+            font=config.UI_FONT,
+            command=self.on_confirm,
+        )
+        confirm.pack(side=tk.LEFT, padx=(0, 12))
+
+        abort = tk.Button(
+            btn_frame,
+            text="Abort / Next",
+            bg=config.UI_ERROR,
+            fg="white",
+            font=config.UI_FONT,
+            command=self.on_abort,
+        )
+        abort.pack(side=tk.LEFT)
+
+    def display_machine(self, model: MachineViewModel) -> None:
+        text = [
+            f"Collection: {model.collection_id}",
+            f"Operator: {model.operator or 'N/A'}",
+            "",
+            "Storage devices detected:",
+        ]
+        for dev in model.storage_devices:
+            text.append(f"  - {dev}")
+        text.append("\nDiagnostics summary:")
+        for key, value in model.diagnostics.items():
+            text.append(f"  {key}: {value}")
+
+        self.detail.configure(state="normal")
+        self.detail.delete("1.0", tk.END)
+        self.detail.insert(tk.END, "\n".join(text))
+        self.detail.configure(state="disabled")
+
+    def set_status(self, message: str, level: str = "warn") -> None:
+        color = {
+            "warn": config.UI_WARN,
+            "error": config.UI_ERROR,
+            "ok": config.UI_SUCCESS,
+        }.get(level, config.UI_WARN)
+        self.status_var.set(message)
+        self.status.configure(fg=color)
diff --git a/schemas/collection_manifest.schema.json b/schemas/collection_manifest.schema.json
new file mode 100644
index 0000000..ed92ec1
--- /dev/null
+++ b/schemas/collection_manifest.schema.json
@@ -0,0 +1,26 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "title": "Collection Manifest",
+  "type": "object",
+  "properties": {
+    "collection_id": {"type": "string"},
+    "version": {"type": "string"},
+    "machines": {
+      "type": "array",
+      "items": {
+        "type": "object",
+        "properties": {
+          "machine_id": {"type": "string"},
+          "result": {"type": "string", "enum": ["PASS", "FAIL", "OPERATOR_ABORTED"]},
+          "start_ts": {"type": "string"},
+          "end_ts": {"type": "string"},
+          "erase_tool": {"type": "string"}
+        },
+        "required": ["machine_id", "result", "start_ts", "end_ts", "erase_tool"],
+        "additionalProperties": false
+      }
+    }
+  },
+  "required": ["collection_id", "version", "machines"],
+  "additionalProperties": false
+}
diff --git a/schemas/machine_record.schema.json b/schemas/machine_record.schema.json
new file mode 100644
index 0000000..ac2a56c
--- /dev/null
+++ b/schemas/machine_record.schema.json
@@ -0,0 +1,31 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "title": "Machine Evidence Record",
+  "type": "object",
+  "properties": {
+    "collection_id": {"type": "string"},
+    "machine_id": {"type": "string"},
+    "start_ts": {"type": "string"},
+    "end_ts": {"type": "string"},
+    "diagnostics": {"type": "object"},
+    "storage": {
+      "type": "array",
+      "items": {"type": "object"}
+    },
+    "erase_tool": {"type": "string"},
+    "result": {"type": "string", "enum": ["PASS", "FAIL", "OPERATOR_ABORTED"]},
+    "operator": {"type": ["string", "null"]},
+    "notes": {"type": ["string", "null"]}
+  },
+  "required": [
+    "collection_id",
+    "machine_id",
+    "start_ts",
+    "end_ts",
+    "diagnostics",
+    "storage",
+    "erase_tool",
+    "result"
+  ],
+  "additionalProperties": false
+}
diff --git a/system/getty@tty1.service.d/override.conf b/system/getty@tty1.service.d/override.conf
new file mode 100644
index 0000000..b560e48
--- /dev/null
+++ b/system/getty@tty1.service.d/override.conf
@@ -0,0 +1,4 @@
+[Service]
+ExecStart=
+ExecStart=-/sbin/agetty --autologin operator --noclear --noissue %I $TERM
+TTYVTDisallocate=no
diff --git a/system/orchestrator.service b/system/orchestrator.service
new file mode 100644
index 0000000..5b184ed
--- /dev/null
+++ b/system/orchestrator.service
@@ -0,0 +1,23 @@
+[Unit]
+Description=Secure Erasure Orchestrator
+After=multi-user.target
+RequiresMountsFor=/mnt/evidence_logs
+
+[Service]
+Type=simple
+User=operator
+WorkingDirectory=/opt/orchestrator
+Environment=DISPLAY=:0
+ExecStart=/usr/bin/python3 -m orchestrator.main ${COLLECTION_ID} ${ERASE_TOOL} ${OPERATOR}
+Restart=on-failure
+RestartSec=5
+
+# Disallow new privileges
+NoNewPrivileges=true
+PrivateTmp=true
+ProtectHome=true
+ProtectSystem=full
+ReadWritePaths=/mnt/evidence_logs
+
+[Install]
+WantedBy=multi-user.target