Works with **12+ agent frameworks** including:
### Security Model & Boundaries

This toolkit provides **deterministic application-layer interception** — a deliberate architectural choice that enables sub-millisecond policy enforcement without the overhead of IPC or container orchestration. Every agent action passes through the governance pipeline before execution.
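Application-layer interception can be sketched in a few lines. The `governed` decorator, `ALLOWED_TOOLS` policy, and `PolicyViolation` exception below are illustrative names for this sketch, not this toolkit's actual API:

```python
import functools

# Illustrative policy: a real policy engine is richer than a static allow-list.
ALLOWED_TOOLS = {"search", "read_file"}

class PolicyViolation(Exception):
    """Raised when a tool call is denied before it executes."""

def governed(tool_name):
    """Route a tool call through a policy check before the call runs."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if tool_name not in ALLOWED_TOOLS:
                raise PolicyViolation(f"policy denied tool '{tool_name}'")
            return fn(*args, **kwargs)  # only reached if the policy allows it
        return wrapper
    return decorator

@governed("search")
def search(query):
    return f"results for {query}"

@governed("shell_exec")
def shell_exec(cmd):
    return "never reached"
```

Because the check is an in-process function call, enforcement latency is essentially the cost of a set lookup, which is where sub-millisecond figures come from.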
| Built-in mechanism | Recommended production hardening |
| --- | --- |
| Maintains append-only audit logs with hash chains | Add an external append-only sink (Azure Monitor, write-once storage) for tamper evidence |
| Terminates non-compliant agents via signal system | Add OS-level `process.kill()` for isolated agent processes |
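The hash-chain idea in the first row can be sketched as a minimal in-process log where each entry commits to its predecessor (names are illustrative; a production deployment would also mirror entries to the external sink):

```python
import hashlib
import json

class HashChainLog:
    """Append-only log: each entry's hash covers the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._prev, "hash": digest})
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any in-place edit breaks every later link."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

Tampering with any stored event invalidates `verify()`, but an attacker with in-process access could rewrite the whole chain, which is why the external write-once sink is the hardening step.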

The POSIX metaphor (kernel, signals, syscalls) is an architectural pattern — it provides a familiar, well-understood mental model for agent governance. The enforcement boundary is the Python interpreter, which is the same trust boundary used by every Python-based agent framework (LangChain, AutoGen, CrewAI, OpenAI Agents SDK).
> **Production recommendation:** For high-security deployments, run each agent in a separate container with the governance middleware inside. This gives you both application-level policy enforcement *and* OS-level isolation.
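A minimal sketch of that escalation path using only the standard library (a separate OS process stands in for a container here, and the agent body is a placeholder, not part of the toolkit):

```python
import os
import signal
import subprocess
import sys

def spawn_isolated_agent():
    """Launch a stand-in agent loop as a separate OS process.

    In a hardened deployment this would be a container with the
    governance middleware running inside it.
    """
    return subprocess.Popen(
        [sys.executable, "-c", "import time\nwhile True: time.sleep(0.1)"]
    )

def terminate_isolated_agent(proc):
    """Escalate an application-layer policy decision to OS-level termination."""
    os.kill(proc.pid, signal.SIGTERM)  # enforced outside the agent's interpreter
    proc.wait(timeout=5)
    return proc.returncode  # negative signal number on POSIX

proc = spawn_isolated_agent()
exit_code = terminate_isolated_agent(proc)
```

The point of the pattern is that termination no longer depends on the governed interpreter cooperating: even an agent that swallows exceptions cannot outlive `SIGTERM` followed by `SIGKILL` escalation.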
### Trust Score Algorithm
Policy enforcement benchmarks are measured on a **30-scenario test suite** covering
### Known Limitations & Roadmap

- **ASI-10 Behavioral Detection**: Fully implemented in Agent SRE — tool-call frequency analysis (z-score spike detection), action entropy scoring, and capability profile violation detection. See [`packages/agent-sre/src/agent_sre/anomaly/`](packages/agent-sre/src/agent_sre/anomaly/) (72 tests passing)
- **Audit Trail Integrity**: Current hash-chain is in-process; external append-only log integration is planned
- **Framework Integration Depth**: Current adapters wrap agent execution at the function level; deeper hooks into framework-native tool dispatch and sub-agent spawning are planned
- **Observability**: OpenTelemetry integration for policy decision tracing is planned
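The z-score spike detection mentioned under ASI-10 can be illustrated with a rolling-window sketch (illustrative only; the real detector lives in `packages/agent-sre/src/agent_sre/anomaly/` and its API may differ):

```python
import math
from collections import deque

class ToolCallSpikeDetector:
    """Flag a window whose tool-call count is a z-score outlier vs. rolling history."""

    def __init__(self, history: int = 20, threshold: float = 3.0, warmup: int = 5):
        self.counts = deque(maxlen=history)  # recent per-window call counts
        self.threshold = threshold           # z-score above which we flag a spike
        self.warmup = warmup                 # observations needed before flagging

    def observe(self, calls_in_window: int) -> bool:
        spike = False
        if len(self.counts) >= self.warmup:
            mean = sum(self.counts) / len(self.counts)
            var = sum((c - mean) ** 2 for c in self.counts) / len(self.counts)
            std = math.sqrt(var) or 1.0      # guard against zero variance
            spike = (calls_in_window - mean) / std > self.threshold
        self.counts.append(calls_in_window)
        return spike
```

A steady baseline of a few calls per window never trips the detector, while a sudden burst well outside the rolling mean does; entropy scoring and capability profile checks layer additional signals on the same sliding-window shape.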