This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Layne is a self-hosted GitHub App that centralises security scanning across repositories. It receives pull_request webhooks, enqueues scan jobs via BullMQ/Redis, posts results back as GitHub Check Run annotations, manages PR labels, and sends chat notifications. It runs three scanners: Semgrep (SAST), Trufflehog (secret detection), and Claude (malicious intent detection).
```bash
# Run the webhook server
npm start                 # node dist/server.js

# Run the job worker
npm run worker            # node dist/worker.js

# Build TypeScript
npm run build             # tsc -p tsconfig.build.json
npm run typecheck         # tsc --noEmit

# Tests
npm test                  # vitest run (single pass)
npm run test:watch        # vitest (watch mode)
npm run test:coverage

# Run a single test file
npx vitest run src/__tests__/github.test.ts

# Lint
npm run lint
```

Required (checked at startup by `validateEnv()` in `src/env.ts`):

```
GITHUB_APP_ID
GITHUB_APP_PRIVATE_KEY    # single-line PEM with literal \n between lines
GITHUB_WEBHOOK_SECRET
```

Optional:

```
REDIS_URL                 # defaults to redis://localhost:6379
PORT                      # defaults to 3000
ANTHROPIC_API_KEY         # required when any repo has claude.enabled: true
METRICS_ENABLED           # set to "true" to enable Prometheus metrics
METRICS_PORT              # worker metrics server port, defaults to 9091
DOMAIN                    # used for Rocket.Chat icon_url and TLS
DEBUG_MODE                # set to "true" for verbose logging
```
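As a rough illustration of the fail-fast startup check (the real logic lives in `src/env.ts`; the helper name and its exact behavior here are assumptions, not the actual implementation):

```typescript
// Hypothetical sketch of a validateEnv()-style startup check.
// The real implementation lives in src/env.ts.
const REQUIRED = ["GITHUB_APP_ID", "GITHUB_APP_PRIVATE_KEY", "GITHUB_WEBHOOK_SECRET"];

function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => !env[name]);
}

// Fail fast with a single message listing everything that is missing,
// rather than erroring one variable at a time.
const missing = missingEnvVars(process.env as Record<string, string | undefined>);
if (missing.length > 0) {
  console.error(`Missing required env vars: ${missing.join(", ")}`);
  // the real server would exit here: process.exit(1)
}
```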
Two separate Node.js processes:
`src/server.ts` - Webhook receiver
- Express app with `POST /webhook`, `GET /health`, `GET /metrics` (when enabled), `GET /assets/layne-logo.png`
- Verifies GitHub HMAC signature before processing
- Handles four event types: `pull_request`, `workflow_run`, `workflow_job`, and `issue_comment`
  - `pull_request` trigger (default): on opened/synchronize/reopened, creates a Check Run in `queued` state, enqueues a BullMQ job, returns 200
  - `workflow_run` trigger: on `pull_request` events, caches PR metadata in Redis (TTL 7 days) and creates a `skipped` Check Run; on `workflow_run` `completed` events matching the configured workflow name and conclusion, looks up cached PR metadata (falls back to the GitHub API if the cache is cold), then enqueues the scan
  - `workflow_job` trigger: same two-stage pattern as `workflow_run` but gates on a single named job completing rather than the whole workflow
  - `issue_comment` trigger: parses `/layne exception-approve` commands from PR comments; validates the commenter is an authorized exception approver; stores exceptions in Redis scoped to the PR (not the commit SHA); re-enqueues the scan if the current check run is in `failure` state
- Job ID is deduplicated by `{repo}#{pr}@{sha}` - duplicate webhook deliveries are no-ops (Redis lock + queue check)
- Exports `app` and `processWebhookRequest` for use in tests
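The dedup key above can be sketched as follows (the helper name is hypothetical; the `{repo}#{pr}@{sha}` format is the documented one):

```typescript
// Sketch: build the deduplicated BullMQ job ID described above.
// buildJobId is a hypothetical name for illustration only.
function buildJobId(repo: string, prNumber: number, headSha: string): string {
  return `${repo}#${prNumber}@${headSha}`;
}

// Passing the same jobId to queue.add() makes duplicate webhook deliveries
// no-ops: BullMQ ignores an add for an ID already present in the queue.
const id = buildJobId("octo/widgets", 42, "deadbeef");
```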
`src/worker.ts` - Job processor
- BullMQ `Worker` consuming the `scans` queue with concurrency 5
- `processJob()` is exported for direct testing without Redis
- Per-job 10-minute timeout via `Promise.race`
- Graceful shutdown on SIGTERM/SIGINT - finishes in-flight jobs before exiting
- When `METRICS_ENABLED=true`: starts an HTTP metrics server on `METRICS_PORT` and polls BullMQ queue counts every 15 s
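The per-job timeout pattern might look roughly like this (an assumed shape, not the actual worker code):

```typescript
// Sketch of a Promise.race timeout wrapper: race the job against a timer
// so a hung scan rejects after the deadline instead of blocking the worker.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`job timed out after ${ms} ms`)), ms);
  });
  // Clear the timer either way so it cannot keep the process alive.
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}
```

In the real worker the deadline would be 10 minutes; the numbers below are only for illustration.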
Job lifecycle (inside `runScan`):
- Mark Check Run `in_progress`
- Authenticate as installation via `src/auth.ts` → short-lived token
- Resolve merge base SHA via `src/github.ts` → `getMergeBaseSha` (three-dot diff base)
- Create temp workspace (`src/fetcher.ts` → `createWorkspace`)
- Partial-clone head and merge-base SHAs with `--filter=blob:none` (`src/fetcher.ts` → `setupRepo`)
- Diff the two commits to get changed file paths (`getChangedFiles`) and per-file changed line ranges (`getChangedLineRanges`)
- Sparse-checkout only the changed files - blobs fetched on demand (`checkoutFiles`)
- Load per-repo config via `src/config.ts` → `loadScanConfig`
- Run scanners in parallel via `src/dispatcher.ts` → `dispatch()`
- Validate finding locations against the actual file content (`src/location-validator.ts` → `validateFindingLocations`)
- Suppress findings that have a `// SECURITY:` comment at the merge base (`src/suppressor.ts` → `suppressFindings`)
- Filter to actionable findings; stamp each with a deterministic `_findingId` (`LAYNE-xxxxxxxxxxxxxxxx`) via `src/exception-approvals.ts` → `generateFindingId`
- Convert findings to annotations via `src/reporter.ts` → `buildAnnotations()`
- If `exceptionApprovers` is configured: load stored exceptions from Redis (`loadExceptions`), remove stale ones whose flagged line changed (`filterStaleExceptions`), resolve approvals that survived a line-number shift via rebase (`resolveDriftedExceptions`), then call `buildExceptionSummary` to potentially override the conclusion to `success`
- Complete the Check Run
- Post a PR comment if `comment.enabled` via `src/commenter.ts` → `postComment`
- Apply/remove PR labels via `src/github.ts` → `ensureLabelsExist` + `setLabels`
- Notify via `src/notifiers/index.ts` → `notify()` (always fires on exception approval; otherwise only when the finding count increases)
- Clean up workspace in `finally`
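A deterministic finding ID in the documented `LAYNE-xxxxxxxxxxxxxxxx` shape could be sketched like this (the actual hash inputs chosen by `generateFindingId` in `src/exception-approvals.ts` are assumptions here):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: hash stable attributes of a finding into a fixed-width
// ID so the same finding maps to the same ID on every scan. The real
// generateFindingId may hash different fields.
function sketchFindingId(file: string, ruleId: string, message: string): string {
  const digest = createHash("sha256")
    .update(`${file}\0${ruleId}\0${message}`)
    .digest("hex");
  return `LAYNE-${digest.slice(0, 16)}`;
}
```

Determinism is what lets stored exception approvals be matched back to findings on later scans of the same PR.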
Scanners (`src/adapters/`):
- `semgrep.ts` - runs `semgrep scan --config auto --json`; exit code 1 = findings found (not an error); maps ERROR → high, WARNING → medium, INFO → low
- `trufflehog.ts` - runs `trufflehog filesystem --json --no-update`; exit code 183 = secrets found (not an error); batched at 200 files to stay under ARG_MAX; all findings are severity `high`
- `claude.ts` - calls the Anthropic API to detect malicious intent; disabled by default, opt in per repo; skips binary files; caps files at 50 KB; batches at 100 KB per API call; errors are caught and logged without failing the scan. Supports two modes (configured per-repo in `config/layne.json`):
  - Prompt mode (default): single `messages.create` call with a system prompt; use `claude.prompt` to override
  - Skill mode: uses the Anthropic API Skills beta - adds a `code_execution` tool + an uploaded skill to each batch call, enabling runtime decoding, registry lookups, and richer static analysis; set `claude.skill: { id, version }` to enable; handles `pause_turn` continuations automatically (up to 10 turns per batch)
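The "findings are not an error" exit-code convention can be captured in a small helper (a hypothetical function, not code from the adapters):

```typescript
// Sketch: interpret scanner exit codes where "findings found" is success.
// semgrep exits 1 and trufflehog exits 183 when they find issues; only other
// nonzero codes indicate a real scanner failure.
function isScannerError(tool: "semgrep" | "trufflehog", exitCode: number): boolean {
  if (exitCode === 0) return false;
  if (tool === "semgrep" && exitCode === 1) return false;      // findings found
  if (tool === "trufflehog" && exitCode === 183) return false; // secrets found
  return true; // anything else is a genuine failure
}
```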
Common finding shape:
`{ file, line, severity, message, ruleId, tool }`

Severity → GitHub annotation level:
- `critical`/`high` → `failure` (blocks merge)
- `medium` → `warning`
- `low`/`info` → `notice`
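The mapping above is a simple lookup; sketched here as a hypothetical helper (the real conversion lives in `src/reporter.ts` → `buildAnnotations()`):

```typescript
type AnnotationLevel = "failure" | "warning" | "notice";

// Sketch of the severity -> GitHub annotation level mapping described above.
function toAnnotationLevel(severity: string): AnnotationLevel {
  switch (severity) {
    case "critical":
    case "high":
      return "failure"; // blocks merge
    case "medium":
      return "warning";
    default:
      return "notice"; // low / info
  }
}
```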
GitHub Check Run annotations are chunked at 50 per API call (a GitHub API limit), with `status: completed` set only on the last chunk.
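The chunking rule can be sketched as (hypothetical helper; the 50-per-call limit is GitHub's):

```typescript
// Sketch: split annotations into batches of at most 50, since the GitHub
// Checks API accepts no more than 50 annotations per update call.
function chunkAnnotations<T>(annotations: T[], size = 50): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < annotations.length; i += size) {
    chunks.push(annotations.slice(i, i + size));
  }
  return chunks;
}
// A caller would send each chunk as its own check-run update and set
// status: "completed" only when sending the final chunk.
```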
| Module | Purpose |
|---|---|
| `src/config.ts` | Loads and merges `config/layne.json`; cached after first read |
| `src/github.ts` | Check Run CRUD + label management (`ensureLabelsExist`, `setLabels`) |
| `src/metrics.ts` | Prometheus metric definitions; exports no-op stubs when `METRICS_ENABLED` is not `true` |
| `src/notifiers/index.ts` | Notification orchestrator; iterates registered notifiers |
| `src/notifiers/rocketchat.ts` | Rocket.Chat incoming webhook notifier |
| `src/queue.ts` | Shared Redis + BullMQ queue instance |
| `src/debug.ts` | Conditional debug logging via `DEBUG_MODE` |
See docs/2-configuration.md for the full schema and examples.
Key points for code navigation:
- Read once per process startup - restart both server and worker to pick up changes
- Loaded and merged by `src/config.ts` → `loadScanConfig`
- Supports `$global` key for defaults inherited by all repos
- Scanner blocks: per-repo spread over defaults (`{ ...DEFAULT_CONFIG.semgrep, ...repoOverrides.semgrep }`)
- `trigger`: controls when scanning fires - `pull_request` (default, immediate) or `workflow_run` (deferred until a named CI workflow completes); global default → per-repo override
- `notifications` and `labels`: per-repo notifier/key wins over global; per-repo absence = inherit global entirely
- `extraArgs` fully replaces the default (not extended)
- `config/layne.json` must be present in the Docker image (`COPY config/ ./config/`)
- Notifier contract: `async function notify({ findings, owner, repo, prNumber, toolConfig })` - must never throw
- Notification dedup key: `layne:scan:count:{owner}/{repo}#{prNumber}` (Redis, 30-day TTL)
- `webhookUrl` values starting with `$` are resolved from `process.env` at runtime
- Label errors never affect the scan result or Check Run
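The spread-over-defaults merge semantics can be illustrated like this (the type and function names are hypothetical; the shallow-spread behavior, including wholesale replacement of `extraArgs`, is the documented one):

```typescript
// Sketch of per-repo config merging: repo fields win field-by-field via a
// shallow spread, so array fields like extraArgs are replaced, never merged.
type SemgrepConfig = { enabled: boolean; extraArgs?: string[] };

function mergeSemgrep(defaults: SemgrepConfig, repo?: Partial<SemgrepConfig>): SemgrepConfig {
  return { ...defaults, ...repo };
}

const merged = mergeSemgrep(
  { enabled: true, extraArgs: ["--timeout", "60"] }, // $global / built-in default
  { extraArgs: ["--severity", "ERROR"] }             // per-repo override
);
// merged.enabled is inherited; merged.extraArgs is fully replaced.
```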
- `src/metrics.ts` exports real prom-client objects when `METRICS_ENABLED=true`, silent no-op stubs otherwise
- No `if (METRICS_ENABLED)` guards needed at call sites - stubs absorb all calls
- Worker: metrics HTTP server + BullMQ queue poller (15 s interval) started only when enabled
- Server: `GET /metrics` route registered only when enabled
- `monitoring/` directory has Prometheus scrape config and a pre-built Grafana dashboard
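The no-op stub pattern might look roughly like this (a hypothetical sketch; the real definitions in `src/metrics.ts` export actual prom-client objects, and the `value` getter below exists only for illustration):

```typescript
// Sketch: export objects with the same call surface whether metrics are
// enabled or not, so call sites never need an if (METRICS_ENABLED) guard.
function makeCounter(enabled: boolean) {
  let total = 0; // stand-in for a real prom-client Counter's internal state
  return {
    inc(value = 1): void {
      if (enabled) total += value; // disabled stub silently absorbs the call
    },
    get value(): number {
      return total; // illustration only; prom-client counters differ
    },
  };
}

const scansTotal = makeCounter(process.env.METRICS_ENABLED === "true");
scansTotal.inc(); // always safe to call
```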
- Tests use Vitest with ESM (`"type": "module"` in package.json)
- All test files are TypeScript (`.ts`); imports use `.js` extensions (NodeNext resolution)
- `src/__tests__/setup.ts` sets all required env vars before each test file; `ANTHROPIC_API_KEY` and `METRICS_ENABLED` are intentionally not set - adapters and metrics are mocked
- External dependencies (`@octokit/auth-app`, `@octokit/rest`, `bullmq`, `ioredis`, `@anthropic-ai/sdk`, `prom-client`) are always mocked - no live connections in tests
- `src/metrics.ts` is mocked in worker and server tests with `vi.fn()` stubs; tested in isolation in `src/__tests__/metrics.test.ts`
- `processJob` and `dispatch` are exported specifically for unit testing without live infrastructure
- Tests import modules with `await import(...)` after `vi.mock()` calls to handle ESM module caching
- The `@anthropic-ai/sdk` mock uses a regular `function` constructor (not an arrow function) because `new Anthropic()` must be constructable
- Typed mock call access pattern: `(mockFn as ReturnType<typeof vi.fn>).mock.calls[0] as [T1, T2]`
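A minimal standalone illustration of the constructor note above (plain TypeScript, not the actual Vitest mock): arrow functions are not constructable, so the SDK mock must be a regular `function`.

```typescript
// Arrow functions have no [[Construct]] slot, so `new` throws a TypeError.
const ArrowMock = () => ({});

// A regular function is constructable, so `new Anthropic()` works in tests.
const FunctionMock = function (this: { messages: { create: () => string } }) {
  this.messages = { create: () => "mocked" };
};

let arrowThrew = false;
try {
  new (ArrowMock as any)(); // TypeError: ArrowMock is not a constructor
} catch {
  arrowThrew = true;
}

const client = new (FunctionMock as any)();
```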