sapsan14/aletheia-ai

Enterprise Agent Trust Framework

⚠️ University research prototype — reflection-informed, not a production system. For discussion, experimentation, and clear technical understanding. Not legal advice.

Written for PKI engineers and architects: trust / security / solution architects who draw the boxes and flows, and PKI practitioners who run CAs, TSPs, QTSP-oriented services, validation, and crypto operations. The architecture question we stress: where do agent output and governed action sit in a trust stack you already know (X.509 path logic, signature verification, revocation, timestamps, audit discipline under eIDAS and ETSI EN 319 xxx)? From there we map how that reference architecture relates to EU AI Act traceability and integrity language, article by article, without treating the Act as a PKI spec.

PKI / QTSP experience informs the design; this repository is not a QTSP product.

University research prototype — a strong, runnable sketch of that argument in code: hash → sign → (optional) RFC 3161 timestamp → evidence package, plus path validation, OCSP/CRL, and hybrid classical + post-quantum signing where enabled. Grounded in ongoing reflections and design notes (e.g. 2026-03-24, 2026-03-25); elaborated for engineering audiences in-repo (vision note). Not legal advice. Not a production offering.

Primary law (AI Act): Regulation (EU) 2024/1689 — EUR-Lex (consolidated EN) · Navigator (unofficial, convenient): artificialintelligenceact.eu — use EUR-Lex for authoritative text.

Want to plug an agent in? partner-integrations/QUICKSTART.md is the five-minute, copy-paste path: mint a key → `eatf init` → `eatf doctor` → `eatf agents sync` → `eatf sign --download` → `eatf verify`. End state is a real, offline-verifiable .aep evidence bundle — no UI clicks beyond the API key, no curl. See RFC #72 for the design.


Why this prototype exists

From a PKI and trust-architecture perspective, the interesting question is whether high-risk AI obligations (logging, transparency, robustness “in places”) can be grounded in artefacts you already understand: CMS/CAdES-style or equivalent signing, X.509 path validation, revocation (OCSP, CRL), TSA (RFC 3161), and policy aligned with EN 319 102 / 401 / 411—i.e. the same trust-service toolbox as under eIDAS.

The AI Act (especially Chapter III, Section 2) frames what deployers may need to show in terms of logging, transparency, oversight, and robustness. This codebase is an implementation hypothesis stated in PKI terms: bind agent outputs and sensitive actions to signed, timestamped, verifiable evidence, policy, and human steps—so engineers and architects can inspect the chain and challenge the design, not only read narrative mapping.
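To make that hypothesis concrete, the core binding step (hash → sign → verify) can be sketched with nothing but the JDK's `java.security` API. This is a minimal sketch, not the repository's actual code; the payload string, algorithm names, and key size defaults are illustrative only:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;
import java.util.HexFormat;

public class EvidenceSketch {
    public static void main(String[] args) throws Exception {
        // Canonical payload: in the real system this would be a canonicalised
        // agent output; here it is a fixed string for illustration.
        byte[] payload = "agent-output: approve payment #123".getBytes(StandardCharsets.UTF_8);

        // 1. Hash the payload (SHA-256). In the full pipeline this digest is
        //    what an evidence record or an RFC 3161 timestamp request carries.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(payload);
        System.out.println("digest: " + HexFormat.of().formatHex(digest));

        // 2. Sign with RSA (SHA256withRSA hashes and signs in one step).
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(payload);
        byte[] sig = signer.sign();

        // 3. Offline verification: anyone holding the public key can check,
        //    with no call back to the issuing service.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(payload);
        System.out.println("verified: " + verifier.verify(sig));
    }
}
```

In the prototype's terms, the signature and digest would then be wrapped with certificate chain material and an optional RFC 3161 token into the .aep evidence package.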

We deliberately map Article 10 here to integrity and provenance (hashes, signatures, timestamps)—not to ML “data quality” as a PKI claim. In-app mapping: /trust/regulatory-mapping.

MCP angle: experimental governance next to identity (MCP) and detection—see mcp-ecosystem.md.


Core mapping (summary table)

PKI-native summary: which AI Act articles we discuss against which eIDAS / ETSI levers, and what the repo actually runs. Full tables and sector matrix below.

| AI Act (navigator → EUR-Lex) | Idea in this prototype | eIDAS / ETSI (entry points) | What the repo runs |
|---|---|---|---|
| Art. 10 · EUR-Lex context | Integrity & provenance only — not ML dataset "quality" as a crypto claim | eIDAS (EU) No 910/2014 (trust services, e-signatures); EN 319 102-1, EN 319 132-1; ETSI digital signatures | Canonical payloads, signatures, Evidence Packages |
| Art. 12 | Tamper-evident trails, PKI-related events | EN 319 401, EN 319 411-1 | Audit ledger, signed events, OCSP / CRL where configured |
| Art. 13 | Verifiable crypto / policy state for deployers and auditors | eIDAS trust services — Commission overview; EN 319 102-1, EN 319 412-1; EU Digital Identity / eIDAS hub | Chain-of-trust validation, verification UI/API |
| Art. 6 + Annex III · Annex III (EUR-Lex) | Motivation when agents touch high-risk use types | Same stack as above | Scenario docs, sector matrix — not a compliance certificate |

Focus: cryptographic integrity and provenance, not ML data quality under the Act.


What is implemented (prototype depth)

PKI / crypto pipeline

  • Digital signatures over canonical content (RSA; optional ML-DSA hybrid — PQC plan)
  • Certificate path validation (PKIX / X.509) and QTSP-oriented trust configuration (deployment-dependent)
  • RFC 3161 timestamping (real or mock TSA)
  • Hash-chained audit events; signing where enabled
  • Evidence Packages (.aep): export + offline verification (JAR/scripts in repo)
  • Spring Boot + Next.js; REST + partner/MCP-style governed actions and attestation flows
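The hash-chained audit events in the list above can be sketched in a few lines: each entry's hash commits to the previous entry's hash, so editing or deleting any earlier event breaks every hash after it. A minimal, self-contained sketch — the event names, entry layout, and genesis anchor are made up for illustration and the repo's actual ledger format may differ:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;

public class AuditChainSketch {
    // Each entry records the previous hash and its own hash over (prev | event).
    record Entry(String event, String prevHash, String hash) {}

    static String sha256Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(d);
    }

    static List<Entry> chain(List<String> events) throws Exception {
        List<Entry> out = new ArrayList<>();
        String prev = "0".repeat(64); // genesis anchor (illustrative)
        for (String e : events) {
            String h = sha256Hex(prev + "|" + e);
            out.add(new Entry(e, prev, h));
            prev = h;
        }
        return out;
    }

    static boolean verify(List<Entry> chain) throws Exception {
        String prev = "0".repeat(64);
        for (Entry e : chain) {
            if (!e.prevHash().equals(prev)
                    || !e.hash().equals(sha256Hex(prev + "|" + e.event()))) {
                return false; // broken link: tampering is detectable here
            }
            prev = e.hash();
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        List<Entry> ledger = chain(List.of("agent.start", "sign.request", "sign.ok"));
        System.out.println("intact: " + verify(ledger));

        // Rewrite the middle event without recomputing hashes: verification fails.
        List<Entry> tampered = new ArrayList<>(ledger);
        Entry mid = tampered.get(1);
        tampered.set(1, new Entry("sign.DENIED", mid.prevHash(), mid.hash()));
        System.out.println("after tamper: " + verify(tampered));
    }
}
```

Signing each entry (as the repo does where enabled) adds attribution on top of this tamper-evidence.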

Technical entry: docs/README.md.


Post-quantum cryptography (PQC)
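The hybrid idea is AND-composition: sign the same bytes with both a classical and a post-quantum algorithm, and accept only if both signatures verify. The sketch below uses ECDSA as a stand-in for ML-DSA, since `KeyPairGenerator.getInstance("ML-DSA")` is only available on newer JDKs (24+) or with an external provider; the composition logic is the point, not the algorithm choice:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class HybridSignSketch {
    // A hybrid signature carries one signature per component algorithm.
    record Hybrid(byte[] classicalSig, byte[] pqcStandInSig) {}

    static byte[] sign(String alg, KeyPair kp, byte[] msg) throws Exception {
        Signature s = Signature.getInstance(alg);
        s.initSign(kp.getPrivate());
        s.update(msg);
        return s.sign();
    }

    static boolean verify(String alg, KeyPair kp, byte[] msg, byte[] sig) throws Exception {
        Signature s = Signature.getInstance(alg);
        s.initVerify(kp.getPublic());
        s.update(msg);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        byte[] msg = "evidence-package v1".getBytes(StandardCharsets.UTF_8);

        KeyPair rsa = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        // Stand-in for ML-DSA; swap in "ML-DSA" on a JDK/provider that has it.
        KeyPair ec = KeyPairGenerator.getInstance("EC").generateKeyPair();

        // Sign the same bytes with both algorithms...
        Hybrid h = new Hybrid(sign("SHA256withRSA", rsa, msg),
                              sign("SHA256withECDSA", ec, msg));

        // ...and accept only if BOTH verify (AND composition), so the bundle
        // stays valid as long as at least one algorithm remains unbroken.
        boolean ok = verify("SHA256withRSA", rsa, msg, h.classicalSig())
                  && verify("SHA256withECDSA", ec, msg, h.pqcStandInSig());
        System.out.println("hybrid verified: " + ok);
    }
}
```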


Additional prototype components

  • Delegation chain modelling — delegation builder, concepts, /delegation-chains/builder
  • Human-in-the-loop — approvals / review queues (demo tenants)
  • Policy-gated actions — illustrative policies only (not legal compliance)

Deep dive — regulatory mapping (engineering argument, not legal canon)

PKI lens: the tables below are for mapping conversations—they do not replace CP/CPS discipline or notified-body work. Not legal advice. Counsel validates classifications. Extended analysis: eu_ai_act_multi_sector_opportunities.md.

Speaking lines (hypothesis, not statutory interpretation)

  • The AI Act frames what trust and traceability may be required in places; eIDAS / ETSI show how the EU often implements integrity and validation in practice—this prototype tests alignment in software.
  • Article 10 in this mapping means integrity & provenance—not ML training-data “quality” as a PKI deliverable.

Three-column mental model (EN)

| EU AI Act | eIDAS / ETSI | Aletheia (prototype) |
|---|---|---|
| Art. 6 — high-risk AI systems | Trust services; CA / TSP / QTSP (Commission) | End-to-end PKI path: CA material, validation, cross-border trust-service practice per eIDAS hub |
| Art. 10 — data integrity & provenance (not "quality" on this axis) | eIDAS e-signatures (integrity + authenticity) | Signed artefacts; integrity checks |
| Art. 12 — logging, traceability | Non-repudiation; EN 319 401, EN 319 411-1 | Serials; OCSP / CRL; validation history |
| Art. 13 — transparency (verifiability) | EN 319 102-1; electronic trust services (EU) | Chain validation; source / anchor verification |
| Art. 14 — human oversight | Indirect: qualified trust services + audit trails → accountability; human control remains process | Validation + audit evidence for reviewers — not a substitute for governance |
| Reliability / future | eIDAS 2.0 — (EU) 2024/1183; EU eIDAS policy; ETSI quantum-safe crypto | Hybrid RSA + ML-DSA; PoC PQC |

Full layer mapping (EU AI Act → eIDAS/ETSI → prototype)

| Layer | EU AI Act | eIDAS / ETSI | Aletheia (prototype) | One-liner (EN) |
|---|---|---|---|---|
| Data integrity & provenance | Art. 10 — scope here: integrity + provenance; "data quality" in the Act ≠ this PKI slice | eIDAS; EN 319 102-1, EN 319 132-1 | Signed artefacts | Art. 10 → integrity & provenance via signatures; not an ML data-quality claim. |
| Traceability & logging | Art. 12 | EN 319 401, EN 319 411-1 | OCSP/CRL; audit | Art. 12 → PKI-anchored traceability. |
| Transparency & verifiability | Art. 13 | EN 319 102-1, EN 319 412-1; trust services (EU) | Path validation; root / policy provenance | Art. 13 → cryptographic verifiability. |
| Human oversight (supporting) | Art. 14 (indirect) | eIDAS trust stack; audit-friendly logs | Evidence for review | Art. 14 → PKI does not add the human; it makes outcomes reviewable and attributable. |
| High-risk trust architecture | Art. 6 | CA, TSP, QTSP; EN 319 401 | Configurable trust material + validation + verification UX | Strong trust layer analogous to regulated PKI — prototype, not a national scheme. |
| Crypto agility / PQC | High-risk chapter context (Arts 9–15 navigator) | EU / ETSI quantum-safe | Hybrid ML-DSA | Future-proofing — not a claim that the Act mandates PQC. |

Official & secondary anchors

| Resource | URL |
|---|---|
| AI Act (EU law) | EUR-Lex CELEX 32024R1689 |
| AI Act — implementation timeline | Commission AI Act Service Desk |
| Navigator — Annex III | artificialintelligenceact.eu/annex/3 |
| Navigator — Arts 9–15 | Section 3(2) high-risk |
| Deployer — Art. 26 | navigator |
| FRIA — Art. 27 | navigator |
| eIDAS (2014) | EUR-Lex 910/2014 |
| eIDAS 2.0 (2024/1183) | EUR-Lex |

High-risk chapter — article quick map (prototype levers)

| Article | Navigator | Gist | Prototype lever |
|---|---|---|---|
| 9 | Art. 9 | Risk management | Policies, kill-switch, delegation limits |
| 10 | Art. 10 | Data governance | Here: integrity/provenance via crypto |
| 11 | Art. 11 | Technical documentation | Exports, metadata |
| 12 | Art. 12 | Record-keeping | Hash-chained audit, signed events |
| 13 | Art. 13 | Transparency to deployers | Agent identity / capabilities in UI/API |
| 14 | Art. 14 | Human oversight | Approvals, escalation |
| 15 | Art. 15 | Robustness / security | Verification endpoints, signals |

Deployer discussion only: Art. 26, Art. 27 — not a claim this repo satisfies them.

Annex III → sectors (condensed)

High-risk categories under Art. 6(2) — see Annex III navigator and EUR-Lex Annex III: (1) Biometrics, (2) Critical infrastructure, (3) Education, (4) Employment, (5) Essential services & benefits, (6) Law enforcement, (7) Migration/border, (8) Justice & democratic processes.

Sector matrix (illustrative — not classification advice)

| Sector | Typical agentic actions | Touchpoints (links) | Prototype-style response |
|---|---|---|---|
| Healthcare | Triage, notes, orders | Annex III(5)(d); MDR; IVDR; GDPR | Human gate; signed evidence; audit |
| Life & health insurance | Underwriting, claims | Annex III(5)(c) | Human binds; policies |
| Banking & credit | Credit, KYC | Annex III(5)(b) | Threshold approval; exports |
| Payments & treasury | Pay, FX | PSD2; AML overview (Commission); AI Act if 5(b) | Governance gate; demo PaymentIntent |
| Legal & dispute | Drafts, discovery | Annex III(8)(a) | Delegation; human sign-off · OpenCourt |
| HR & talent | Screening | Annex III(4) | Escalation; audit |
| Education | Grading | Annex III(3) | Roles; review · School Compass |
| Critical infrastructure | Control suggestions | Annex III(2) | Dual-control patterns |
| Public benefits | Eligibility | Annex III(5)(a) | Records; FRIA-aware design · Bürokratt Kit |
| Law enforcement | Case support | Annex III(6) | Sensitive; audit narrative only where appropriate |
| Migration & border | Checks | Annex III(7) | High sensitivity |
| Political / civic | Targeting | Annex III(8)(b); GPAI — navigator | Demo boundaries |
| Biometric / emotion | Inference | Art. 5 · Annex III | Default off; legal review |
| Citizen × AI (cross-cutting) | Personal AI accountability | Art. 14; eIDAS 2.0 | EUDI Wallet × EATF; Verifiable Credential receipt · Citizen Receipts |

Documentation

| You are… | Start here |
|---|---|
| User | Trust & demo |
| Developer | API & setup |
| Partner | Integrations |
| Delegation UI | Builder quick start |
| Research (Article 01 plan & bibliography) | docs/research/README.md |
| Reference scenarios (4) | OpenCourt · Bürokratt Kit · School Compass · Citizen Receipts |
| Open Kratt manifest spec | docs/specs/kratt-manifest.md · JSON Schema |
| RIA / Bürokratt pilot pitch | docs/partners/ria-pilot.md |

Full index: docs/README.md.


Quick start

```bash
git clone https://github.com/sapsan14/aletheia-ai.git && cd aletheia-ai
cp .env.example .env
openssl genpkey -algorithm RSA -out ai.key -pkeyopt rsa_keygen_bits:4096
# In .env set: AI_ALETHEIA_SIGNING_KEY_PATH=./ai.key
cd backend && mvn spring-boot:run
# In a second terminal, from the repository root:
cd frontend && cp .env.example .env.local && npm install && npm run dev
```
  • Backend: http://localhost:8080 (see backend config if your port differs)
  • Frontend: http://localhost:3000
  • Set OPENAI_API_KEY in .env for the AI demo. Set NEXT_PUBLIC_API_URL to your backend URL in frontend/.env.local.

More: docs/README.md → Developers.


Demo accounts (development mode)

demo@aletheia.ai / Demo123! — tenant-scoped ADMIN on each of demo-healthcare, demo-fintech, demo-legal (one membership row per tenant). NOT SUPER_ADMIN — the V208 migration intentionally downgraded this account because SUPER_ADMIN combined with X-Tenant-Id pivoting under the demo profile enabled cross-tenant access. See backend/.../db/seeding/DemoBaselineRecovery.java and DemoBaselineRecoveryTest.java for the guard. Per-tenant admins: admin@demo-{healthcare,fintech,legal}.local / Demo123!. Full detail: database seeding.


License

MIT. See LICENSE. Authorship: docs/README.md#authorship.


Validate legal URLs against your source of record. This README is the canonical project pitch for the repository.

About

ProofGPT — verifiable AI responses: signed, timestamped, PKI-anchored proofs of LLM output provenance.
