Replies: 2 comments 1 reply
@xsa520 — good question, and the answer is yes, partially. The toolkit already models decisions as structured artifacts.
What we don't do yet is treat the decision as a sealed, independently verifiable artifact in the way you describe. Would be interested to see how your guardian repo approaches the sealed evidence part. Is it cryptographic (signed decisions) or structural (decision logs with integrity chains)?
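For concreteness, the structural option could look like a minimal hash-chained decision log. This is purely illustrative Python (none of these names come from either repo): each entry's hash covers the previous entry, so editing or dropping history is detectable without any signing keys.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_decision(log: list, record: dict) -> list:
    """Append a decision whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = {"record": record, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return log + [{**body, "hash": digest}]

def chain_intact(log: list) -> bool:
    """Walk the chain; any edited or removed entry breaks every link after it."""
    prev = GENESIS
    for entry in log:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
log = append_decision(log, {"intent": "read_config", "decision": "allow"})
log = append_decision(log, {"intent": "write_prod", "decision": "deny"})
assert chain_intact(log)
log[0]["record"]["decision"] = "deny"  # tamper with history
assert not chain_intact(log)
```

The signed variant would replace the bare SHA-256 digest with a signature over the same body, trading offline verifiability for key management.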
We opened a discussion to explore this further. The issue summarizes two governance evidence models:
• execution-receipt-centric governance
Interested to hear perspectives from other governance implementations.
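The execution-receipt-centric model could be sketched roughly as follows. This is illustrative Python only (the decision fields and function names are assumptions, not from either codebase): a receipt binds the observed execution outcome to a digest of the decision that authorized it, so an auditor holding the decision can match it to what actually ran.

```python
import hashlib
import json

def digest(obj) -> str:
    """Canonical SHA-256 digest of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_receipt(decision: dict, outcome: dict) -> dict:
    """Bind the observed execution outcome to the authorizing decision."""
    dd = digest(decision)
    return {
        "decision_digest": dd,
        "outcome": outcome,
        "receipt_digest": digest({"decision_digest": dd, "outcome": outcome}),
    }

decision = {"intent": "rotate_keys", "decision": "allow"}  # hypothetical decision
receipt = make_receipt(decision, {"status": "completed", "keys_rotated": 3})

# An auditor holding the decision can confirm the receipt refers to it:
assert receipt["decision_digest"] == digest(decision)
```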
I've been exploring governance layers for AI agents.
Many current approaches focus on:
• policy enforcement
• runtime sandboxing
• audit logs
However, I'm curious whether governance should also treat the decision itself as a first-class artifact.
For example:
Intent → Policy → Decision → Evidence → Execution
Where the decision and evidence are sealed and replayable.
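A minimal sketch of what "sealed and replayable" could mean, in illustrative Python (none of these names come from the toolkit; the policy id and fields are made up): sealing a decision record with a content hash makes any later mutation detectable on replay.

```python
import hashlib
import json

def seal(record: dict) -> dict:
    """Return the record plus a content hash over its canonical JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "seal": hashlib.sha256(payload).hexdigest()}

def verify(sealed: dict) -> bool:
    """Re-derive the hash from the record body and compare against the seal."""
    body = {k: v for k, v in sealed.items() if k != "seal"}
    return seal(body)["seal"] == sealed["seal"]

# One step of the Intent -> Policy -> Decision -> Evidence -> Execution flow:
decision = seal({
    "intent": "delete_stale_branches",
    "policy": "repo-hygiene-v2",  # hypothetical policy id
    "decision": "allow",
    "evidence": ["branch age > 90d", "no open PRs"],
})

assert verify(decision)            # replay check passes
tampered = {**decision, "decision": "deny"}
assert not verify(tampered)        # any mutation breaks the seal
```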
Curious whether the toolkit models governance decisions in this way, or focuses mainly on enforcement.
Experiment repo:
https://github.com/xsa520/guardian