
Bürokratt Accountability Kit — open governance for Estonia's 2026 agent network

Scope: AI Act Annex III(5)(a) (access to essential public benefits and services) and Annex III(8)(b) (democratic processes). Reference partner integration; Aletheia is not a state agency.

The 30-second pitch

Bürokratt 2026 moves Estonia from a single central public-sector chatbot to "a unified network of agents", in which every institution gets its own personalised AI agent. None of those agents ships with built-in accountability: signed responses, a hash-chained audit trail, citizen-facing evidence. The Accountability Kit is an open spec plus a reference implementation that any institution can adopt in roughly a day to give its Kratt:

  • a public, machine-readable manifest at /.well-known/eatf-agent.json
  • a signed evidence package per Q&A round-trip (RSA + ML-DSA + RFC 3161)
  • a human-review gate that fires on HIGH-risk prompts
  • a live trust badge embeddable on the institution's site or on kratid.ee content pages
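The normative manifest schema lives in docs/specs/kratt-manifest.md; the sketch below shows only the idea of a machine-readable manifest with a required-field check. All field names and values here are illustrative assumptions, not the spec.

```python
# Hypothetical minimal manifest for /.well-known/eatf-agent.json.
# Field names are illustrative assumptions, not the normative schema
# (see docs/specs/kratt-manifest.md for the real spec).
manifest = {
    "agent_name": "Example Institution Kratt",
    "well_known_url": "https://example.institution.ee/.well-known/eatf-agent.json",
    "risk_class": "HIGH",  # set by the institution's own policy file
    "signing_keys": ["rsa", "ml-dsa"],
}

REQUIRED = {"agent_name", "well_known_url", "risk_class"}

def missing_fields(doc: dict) -> list[str]:
    """Return the sorted list of required fields absent from a manifest."""
    return sorted(REQUIRED - doc.keys())

assert missing_fields(manifest) == []
assert missing_fields({"agent_name": "x"}) == ["risk_class", "well_known_url"]
```

A check like this is the kind of thing a partner could run in CI before publishing the manifest.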

Where Aletheia stops, and what stays with the institution

Aletheia provides infrastructure:

  • the spec (docs/specs/kratt-manifest.md)
  • the backend that signs, timestamps, and stores evidence
  • the embeddable badge (frontend/public/badge.js)
  • the verification UX (/scenarios/kratid/verify)
  • a reference Kratt deployed at example.eatf.eu/kratid for partners to inspect
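The backend's actual wire format for signed evidence is its own; purely as a sketch of the hash-chaining idea (record layout and field names are my assumptions), each Q&A round-trip record can fold the previous record's digest into itself, so that editing any past answer invalidates every later link:

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """SHA-256 over a canonical JSON encoding of one evidence record."""
    blob = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def append(chain: list[dict], question: str, answer: str) -> None:
    """Append a round-trip record linked to the previous record's digest."""
    prev = record_digest(chain[-1]) if chain else "0" * 64
    chain.append({"q": question, "a": answer, "prev": prev})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; a tampered record breaks all links after it."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev:
            return False
        prev = record_digest(rec)
    return True

chain: list[dict] = []
append(chain, "Kas mul on õigus toetusele?", "Jah, tingimustel ...")
append(chain, "Kuidas taotleda?", "Esitage taotlus ...")
assert verify(chain)

chain[0]["a"] = "tampered"  # editing any past answer
assert not verify(chain)    # breaks every subsequent link
```

The real kit layers RSA + ML-DSA signatures and RFC 3161 timestamps on top; the chain above only illustrates tamper evidence.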

The institution decides:

  • whether to adopt the spec (voluntary)
  • where to host its own EATF tenant (self-hosted or via a partner instance during pilot)
  • which prompts trigger HIGH risk in its policy file
  • whether to list the Kratt on kratid.ee (separate editorial step)
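The policy file's format is for the institution to define; as a minimal sketch (pattern list and labels are assumptions, not the kit's format), HIGH-risk prompts might be flagged like this:

```python
import re

# Hypothetical policy: any matching pattern marks a prompt HIGH risk.
# The real policy file format is institution-defined; this is a sketch.
HIGH_RISK_PATTERNS = [
    r"\btoetus\w*",          # benefits ("toetused")
    r"\bvalimis\w*",         # elections ("valimised")
    r"benefit eligibility",
]

def risk_class(prompt: str) -> str:
    """Return HIGH if any policy pattern matches, else LOW."""
    low = prompt.lower()
    if any(re.search(p, low) for p in HIGH_RISK_PATTERNS):
        return "HIGH"
    return "LOW"

assert risk_class("Kas mul on õigus toetusele?") == "HIGH"
assert risk_class("Mis kell raamatukogu avatakse?") == "LOW"
```

A HIGH result is what would route the round-trip through the human-review gate described above.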

The reference Kratt

example.eatf.eu/kratid runs "Aletheia Reference Kratt" — clearly banner-labelled as a research prototype. Visitors can:

  • ask three canned Estonian-language public-service questions
  • watch each round-trip get signed and (for HIGH risk) routed through a human-review queue
  • download the resulting .aep evidence package
  • copy the badge snippet for their own site
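The .aep format itself is defined by the kit; purely as an illustration (archive layout and the digest file name are assumptions), a downloaded evidence package could be integrity-checked offline along these lines:

```python
import hashlib
import io
import json
import zipfile

def make_package(payload: dict) -> bytes:
    """Build a toy .aep-style archive: payload.json plus its SHA-256 digest."""
    blob = json.dumps(payload, sort_keys=True).encode()
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("payload.json", blob)
        zf.writestr("digest.sha256", hashlib.sha256(blob).hexdigest())
    return buf.getvalue()

def check_package(data: bytes) -> bool:
    """Recompute the payload digest and compare with the stored one.

    A tampered archive fails either the digest comparison or the
    zip CRC check (which raises BadZipFile on read).
    """
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            blob = zf.read("payload.json")
            stored = zf.read("digest.sha256").decode()
    except zipfile.BadZipFile:
        return False
    return hashlib.sha256(blob).hexdigest() == stored

pkg = make_package({"q": "Kas mul on õigus toetusele?", "a": "Jah ..."})
assert check_package(pkg)
assert not check_package(pkg.replace(b"Jah", b"Ei!"))
```

A real .aep additionally carries the RSA + ML-DSA signatures and RFC 3161 timestamp, which a verifier would check against the institution's published keys.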

Stage flow (60 seconds)

  1. Show the spec scrolled to well_known_url and risk_class.
  2. Open https://example.eatf.eu/kratid. Maarja asks a benefit-eligibility question → HIGH-risk → "human review queued" appears inline.
  3. Approve as the mock clerk → answer is released.
  4. Click Download .aep.
  5. Paste the badge snippet into a sample HTML page (JSFiddle, CodePen) → live stats render in seconds.

Why this matters in 2026 specifically

Bürokratt's 2026 rearchitecture is happening now, not later. Each new institutional agent that ships without runtime accountability is a missed window: it's much cheaper to bake governance into a Kratt at launch than to retrofit it after a citizen complaint or an Andmekaitse Inspektsioon inquiry.

The kit also gives AI Act Articles 12 (record-keeping) and 13 (transparency) a concrete runtime grounding for a sector — public-service advisory chatbots — that is otherwise served by opaque LLM proxies.

What this is not

  • Not a kratid.ee registration mechanism. kratid.ee remains an editorial content site.
  • Not a substitute for whatever governance the institution has via RIA or its parent ministry. It's an additive runtime layer.
  • Not a legal classification under the AI Act. risk_class mirrors the Act's spirit but is set by the institution's policy, not by Aletheia.

How a real institution adopts this

See docs/partners/ria-pilot.md for the proposed pilot path with RIA / Government CIO Office.

Where to go next