Article: How Rego Policy Controls Caught a Real Resolver Issue in Our Agentic SDLC #152
Article Idea
Write a technical article about how adding Rego policy controls to our agentic SDLC build flow caught a real resolver completeness issue on its very first run.
The Story
Background
- We have an agentic SDLC pipeline where AI agents write code, review it (multi-model: Claude + OpenAI), resolve findings, and create PRs
- Every step is attested to Kosli trails with structured evidence
- The build flow runs `kosli evaluate trails` with Rego policies to verify the review evidence before merge
The Audit Gap
- An SDLC audit review of GH145 revealed that while the review flow had rich attestation evidence (ticket integrity, injection scanning, resolver decisions, cost tracking, 80 per-persona review attestations), the build flow only evaluated 7 controls
- Key review evidence was being attested but never verified at build time:
  - `diff-injection-scan` — no build-time check that no injections were found
  - `resolver-threads-resolved` — no build-time check that all findings were properly accounted for
  - `change-classified`, `moderator-debate`, `moderator-resolution` — no check that the review quality process actually ran
What We Built
Three new Rego policies + CI jobs in one commit:
- injection-scan-control — Verifies across all review loops: scan completed, 0 injection candidates, 0 encoded payloads
- resolver-completeness-control — Verifies: conservation (findings_in == resolutions_out), count match, file alignment, no extra files modified, no threads left open
- review-quality-control — Verifies: change classification ran with personas selected, cross-model severity applied, findings deduplicated, moderator debate + resolution completed with valid verdict
The Catch (First Run!)
On the very first pipeline run with the new controls (GH149), resolver-completeness-control caught two violations:
```json
"violations": [
  "Trail GH149-Loop1: 1 review threads still open after resolution",
  "Trail GH149-Loop1: resolver introduced extra files not in findings"
]
```
The resolver had:
- Left 1 review thread unresolved
- Modified files outside the scope of the review findings
This would have passed through the old pipeline silently. The new control made the build trail NON-COMPLIANT, creating a visible audit record.
What Makes This Interesting
- The control worked on its first real run — not a test scenario, real production data
- Full transparency in the attestation: the user_data includes the Rego policy source code, the evaluation result, the violations, and the policy file hash — so an auditor can see exactly what was evaluated and why it failed
- Evidence chain: review flow attestation → Kosli trail → `kosli evaluate trails` + Rego → build flow attestation with pass/fail + reasons
- Pattern: `kosli evaluate trails` + Rego + `kosli attest generic --compliant` creates a reusable control-gate pattern for any policy
Key Technical Details to Include
The Rego Policy Pattern
```rego
default allow = false
violations contains msg if { ... }
allow if { count(violations) == 0 }
```
Each policy checks specific attestation slots in the review trail data.
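To make the pattern concrete, here is a hedged sketch of what a resolver-completeness policy following this shape could look like. The input shape (`input.trails`, the `resolver-threads-resolved` attestation slot, and its `open_threads` / `extra_files` fields) is an assumption for illustration, not the real trail schema; the violation messages mirror the two GH149 violations quoted above.

```rego
package resolver_completeness

import rego.v1

default allow := false

# ASSUMPTION: slot and field names below are illustrative; real keys
# come from the review flow's attestation data.
violations contains msg if {
    some trail in input.trails
    open := trail.attestations["resolver-threads-resolved"].open_threads
    open > 0
    msg := sprintf("Trail %s: %d review threads still open after resolution",
        [trail.name, open])
}

violations contains msg if {
    some trail in input.trails
    extra := trail.attestations["resolver-threads-resolved"].extra_files
    count(extra) > 0
    msg := sprintf("Trail %s: resolver introduced extra files not in findings",
        [trail.name])
}

allow if count(violations) == 0
```

The `default allow := false` line is what makes the gate fail closed: unless every violation rule comes up empty, the trail is non-compliant.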
The CI Job Pattern
- Run `kosli evaluate trails <trail_names> --policy <rego_file> --flow <review_flow> --output json`
- Capture `allow` + `violations` from the JSON output
- Build control-payload.json with: policy_content (`--rawfile`), evaluate result (`--slurpfile`), violations, metadata
- Run `kosli attest generic --compliant=$ALLOWED --user-data control-payload.json`
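The four steps above can be sketched as a single shell job. Flow names, trail names, and file paths are illustrative, and the two `kosli` calls are shown commented out with a canned evaluation result standing in, so the jq payload-assembly steps (the `--rawfile` / `--slurpfile` trick) can be seen end to end:

```shell
set -euo pipefail

# 1. Evaluate the review trail against the Rego policy (real pipeline step):
# kosli evaluate trails GH149-Loop1 \
#   --policy resolver-completeness.rego \
#   --flow agentic-sdlc-demo-GH149-Review \
#   --output json > evaluate-result.json

# Canned result standing in for the kosli output in this sketch:
cat > evaluate-result.json <<'EOF'
{"allow": false, "violations": ["Trail GH149-Loop1: 1 review threads still open after resolution"]}
EOF
cat > resolver-completeness.rego <<'EOF'
default allow := false
EOF

# 2. Capture allow from the JSON output
ALLOWED=$(jq -r '.allow' evaluate-result.json)

# 3. Build control-payload.json: full policy source via --rawfile,
#    full evaluation result via --slurpfile, plus metadata
jq -n \
  --rawfile policy resolver-completeness.rego \
  --slurpfile result evaluate-result.json \
  --arg sha "$(git rev-parse HEAD 2>/dev/null || echo unknown)" \
  '{policy_content: $policy,
    evaluate_trails_result: $result[0],
    violations: $result[0].violations,
    commit_sha: $sha,
    evaluated_at: (now | todate)}' > control-payload.json

# 4. Attest the result to the build trail (real pipeline step):
# kosli attest generic --compliant="$ALLOWED" --user-data control-payload.json
echo "compliant=$ALLOWED"
```

Because the attestation carries `--compliant="$ALLOWED"`, a false `allow` from the policy directly marks the build trail NON-COMPLIANT rather than failing silently.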
The Evidence Structure (in Kosli attestation user_data)
- `policy_content` — full Rego source at time of evaluation
- `policy_file_hash` — SHA256 for integrity verification
- `evaluate_trails_result` — `{allow: bool, violations: [...]}`
- `violations` — human-readable list of why it failed
- `review_flow_name`, `review_trails`, `trails_evaluated` — what was evaluated
- `commit_sha`, `evaluated_at` — when and on what code
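Assembled, the user_data could look roughly like this. All values are illustrative placeholders except the violation message and commit short-SHA, which come from the GH149 run described above:

```json
{
  "policy_content": "package resolver_completeness\n\ndefault allow := false\n...",
  "policy_file_hash": "<sha256-of-policy-file>",
  "evaluate_trails_result": {
    "allow": false,
    "violations": ["Trail GH149-Loop1: 1 review threads still open after resolution"]
  },
  "violations": ["Trail GH149-Loop1: 1 review threads still open after resolution"],
  "review_flow_name": "agentic-sdlc-demo-GH149-Review",
  "review_trails": ["GH149-Loop1"],
  "trails_evaluated": 1,
  "commit_sha": "e5651b9",
  "evaluated_at": "<ISO-8601 timestamp>"
}
```

This is what lets an auditor reconstruct the decision: the exact policy text, its hash, the raw evaluation result, and the code and time it applied to, all in one attestation.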
Numbers
- Review flow: 17 trail-level + 80 artifact-level attestations per trail (97 total)
- Build flow: 1 trail-level + 11 artifact-level attestations (was 8, now 11)
- Total Rego policies: 10 (was 7)
- New OPA test cases: ~25 across 3 test files
Relevant Links
- GH149 build trail: https://staging.app.kosli.com/Agentic-SDLC-Demo/flows/agentic-sdlc-demo-GH149-Build/trails/GH149-PR151
- resolver-completeness-control attestation: https://staging.app.kosli.com/Agentic-SDLC-Demo/flows/agentic-sdlc-demo-GH149-Build/trails/GH149-PR151?attestation_id=a5c2a2bf-5894-4944-9323-90bd7825
- Commit adding the 3 controls: e5651b9
Audience
- Engineering leaders evaluating AI coding agents for regulated environments
- DevOps/platform engineers implementing SDLC governance
- Compliance/audit teams wanting to understand automated evidence chains