# .roborev.toml
review_agent = ""
review_guidelines = """
The daemon and client evolve in lockstep; the daemon is restarted after
updates. API changes do not require backward compatibility shims.
TUI overflow on very narrow terminals (< 60 chars) is acceptable; omit
pedantic comments on these extreme scenarios.

## Trust model — local-only tool
roborev is a single-user local CLI. The daemon binds to 127.0.0.1, the
database is local SQLite, and all data (repo names, paths, review content,
hook output, job metadata) originates from the operator's own filesystem and
git repos. Only flag injection/sanitization issues for data from external
sources (e.g., commit messages from shared repos, API responses from remote
services). The following are all local-data-in-local-tool and are NOT
security findings:
- Missing auth on daemon endpoints (the only client is the local CLI; OS-level
access control is sufficient)
- "DoS" from request bodies or thundering-herd on loopback (bounded by the
user's own repos, not external input)
- "Data exposure" from endpoints returning review content (user's own data
on their own machine)
- "Prompt injection" from hook stderr, prior review text, or agent output
embedded in prompts (all locally generated; a compromised local agent
already has full shell access)
- "Secret exposure" from hook stderr logged or forwarded to the model
- "Untrusted remap data" or "history tampering" from the remap endpoint (the
client is the user's own post-rewrite hook, data comes from the user's own
git repo, and the daemon is localhost-only)
- .githooks/ tracked as a supply-chain risk (source templates only; installed
hooks are frozen copies in .git/hooks/, unaffected by branch switches)
- "Argument injection" on internal helpers (e.g., git.GetPatchID,
git.GetCommitInfo) that receive SHAs already validated or resolved at the
call site — validation belongs at trust boundaries, not every internal call
- Race conditions in metadata handoff between CLI and daemon (correctness
concern, not exploitable by external attacker)
- "Markerless" hook blocks or interpreter mismatch — hook install/upgrade uses
marker-based detection, every generated hook includes a marker comment, and
append logic refuses to add shell snippets to non-shell hooks
- Symlink-following in hook read/write under .git/hooks/ (controlled by the
local user; a compromised local filesystem is out of scope)

## Agent tool permissions
Review agents run with shell access to inspect diffs and source files. The
allow-all-tools plus deny-list approach used by agents like Copilot is
intentional — it matches the trust level of Codex (--sandbox read-only still
permits arbitrary shell execution) and Gemini (--approval-mode plan). The
deny-list is defense-in-depth, not a security boundary. Users reviewing
untrusted code are expected to run roborev in a sandboxed environment
(container, VM). Do not flag agent tool permission patterns as security
vulnerabilities when they follow this model.

## Config loading and filesystem fallback
Config loading (loadGuidelines, loadCIRepoConfig) reads .roborev.toml from
the default branch and falls back to the working-tree filesystem when the
default branch has no config (no remote, config not yet committed). The
daemon never checks out PR branches — the working tree is always the
operator's own checkout. Do not flag filesystem fallback as a trust-boundary
issue or claim that PR authors can influence config via the working tree.

## Verdict parsing and review output shape
Verdict parsing is intentionally simple and deterministic. Prefer explicit
structured signals such as severity labels, findings sections, and clear pass
phrases. Do not ask for elaborate natural-language contradiction parsing of
free-form prose after "No issues found." or similar pass text.
If agent review output is too chatty, includes process narration, or mixes
findings with summary/front matter in a way that makes verdict detection less
reliable, the fix should be to tighten the review prompts/templates and output
contract. Do not ask for increasingly broad deterministic heuristics to parse
arbitrary narrative text.
"""