
Commit cb1fd05

feat(security): add basic security reviewer agent with owasp skills (#1008)
This PR introduces a new Security Reviewer agent containing skills covering the following OWASP content:

- [OWASP Top 10 for Agentic Applications for 2026](https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/)
- [OWASP Top 10 for LLM Applications 2025](https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/)
- [OWASP Top 10:2025](https://owasp.org/Top10/2025/)

The agent provides three modes to fit a range of use cases:

Mode | Description | Scenario
--- | --- | ---
`audit` | Full audit of the target code base for common vulnerabilities | For use on an existing code base. Will take time to complete if the repository is large
`diff` | Only analyse changes on the current branch | For use during the validation stage after the code base has been modified. Could also be used within a PR
`plan` | Requires an implementation plan. Analyses the plan to highlight potential vulnerabilities to look out for, with mitigation suggestions | For use at the planning stage to guide the agent before implementation, mitigating the risk of vulnerabilities entering the code base

There are also two additional inputs you can pass:

- `targetSkill`: Run a particular skill - skips the codebase profiling step
- `scope`: Restricts the agent to a particular directory/file

Detailed agent flow:

```mermaid
flowchart TD
    Start([User Invokes Security Reviewer]) --> SetDate["Pre-req: Set report date"]
    SetDate --> DetectMode{"Detect scanning mode"}
    DetectMode -->|"explicit or keywords:<br/>changes, branch, PR"| DiffMode["Mode = diff"]
    DetectMode -->|"explicit or keywords:<br/>plan, design, RFC"| PlanMode["Mode = plan"]
    DetectMode -->|"default / explicit"| AuditMode["Mode = audit"]
    DetectMode -->|"invalid mode"| InvalidStop([Stop: Invalid mode])

    %% Step 0: Mode-specific setup
    AuditMode --> StatusSetup["Status: Starting in audit mode"]
    DiffMode --> GitDetect["Detect default branch<br/>git symbolic-ref"]
    PlanMode --> ResolvePlan["Resolve plan document<br/>from input / context / fallback"]
    GitDetect -->|"fail"| FallbackAudit["Fallback to audit mode"]
    FallbackAudit --> StatusSetup
    GitDetect -->|"ok"| MergeBase["Compute merge base<br/>git merge-base"]
    MergeBase -->|"fail"| FallbackAudit
    MergeBase -->|"ok"| ChangedFiles["Get changed files<br/>git diff --name-only"]
    ChangedFiles -->|"fail"| FallbackAudit
    ChangedFiles -->|"no files"| EmptyStop([Stop: No changed files])
    ChangedFiles -->|"files found"| FilterFiles["Filter non-assessable<br/>.md .yml .json images etc."]
    FilterFiles -->|"empty after filter"| FilterStop([Stop: No assessable code files])
    FilterFiles -->|"assessable files"| StatusSetup
    ResolvePlan -->|"no plan found"| AskUser["Ask user for plan path"]
    AskUser --> ResolvePlan
    ResolvePlan -->|"plan resolved"| ReadPlan["Read plan document"] --> StatusSetup

    %% Step 1: Profile Codebase
    StatusSetup --> TargetSkill{"targetSkill<br/>provided?"}
    TargetSkill -->|"yes"| ValidateSkill{"Skill in<br/>Available Skills?"}
    ValidateSkill -->|"no"| SkillStop([Stop: Show available skills])
    ValidateSkill -->|"yes"| StubProfile["Build minimal profile stub<br/>skip Codebase Profiler"]
    StubProfile --> SetSkills1["Applicable skills = targetSkill only"]
    TargetSkill -->|"no"| RunProfiler[/"Subagent: Codebase Profiler<br/>mode-specific prompt"/]
    RunProfiler -->|"fail"| ProfileFail([Stop: Profiling failed])
    RunProfiler -->|"ok"| IntersectSkills["Intersect profiler skills<br/>with Available Skills"]
    IntersectSkills --> SpecificOverride{"Specific skills<br/>list provided?"}
    SpecificOverride -->|"yes"| OverrideSkills["Override with provided list<br/>intersect with Available Skills"]
    SpecificOverride -->|"no"| CheckEmpty{"Any applicable<br/>skills?"}
    OverrideSkills --> CheckEmpty
    CheckEmpty -->|"none"| NoSkillStop([Stop: No applicable skills])
    CheckEmpty -->|"skills found"| SetSkills2["Set applicable skills list"]
    SetSkills1 --> StatusProfile["Status: Profiling complete"]
    SetSkills2 --> StatusProfile

    %% Step 2: Assess Skills
    StatusProfile --> AssessLoop["Status: Beginning skill assessments"]
    AssessLoop --> ForEachSkill["For each applicable skill<br/>(parallel when supported)"]
    ForEachSkill --> RunAssessor[/"Subagent: Skill Assessor<br/>mode-specific prompt per skill"/]
    RunAssessor -->|"incomplete"| RetryAssessor[/"Retry Skill Assessor<br/>(once)"/]
    RetryAssessor -->|"still fails"| ExcludeSkill["Exclude skill from results"]
    RetryAssessor -->|"ok"| CollectFindings["Collect structured findings"]
    RunAssessor -->|"ok"| CollectFindings
    ExcludeSkill --> AllDone{"All skills<br/>processed?"}
    CollectFindings --> AllDone
    AllDone -->|"no"| ForEachSkill
    AllDone -->|"yes"| CheckAllFailed{"All assessments<br/>failed?"}
    CheckAllFailed -->|"yes"| AllFailStop([Stop: All assessments failed])
    CheckAllFailed -->|"no"| StatusAssess["Status: All assessments complete"]

    %% Step 3: Verify Findings
    StatusAssess --> IsPlanMode{"Mode = plan?"}
    IsPlanMode -->|"yes"| SkipVerify["Skip verification<br/>pass findings through unchanged"]
    IsPlanMode -->|"no"| VerifyLoop["Status: Adversarial verification"]
    VerifyLoop --> ForEachSkillV["For each skill's findings<br/>(parallel when supported)"]
    ForEachSkillV --> Classify["Classify findings"]
    Classify --> PassThrough["PASS + NOT_ASSESSED<br/>verdict = UNCHANGED"]
    Classify --> Serialize["FAIL + PARTIAL<br/>serialize findings"]
    Serialize --> HasUnverified{"Any FAIL/PARTIAL<br/>findings?"}
    HasUnverified -->|"no"| MergeVerified["Merge into verified collection"]
    HasUnverified -->|"yes"| RunVerifier[/"Subagent: Finding Deep Verifier<br/>all FAIL+PARTIAL in single call"/]
    RunVerifier -->|"incomplete"| RetryVerifier[/"Retry Verifier (once)"/]
    RetryVerifier --> CaptureVerdicts["Capture deep verdicts"]
    RunVerifier -->|"ok"| CaptureVerdicts
    PassThrough --> MergeVerified
    CaptureVerdicts --> MergeVerified
    MergeVerified --> AllVerified{"All skills<br/>verified?"}
    AllVerified -->|"no"| ForEachSkillV
    AllVerified -->|"yes"| StatusVerify["Status: All findings verified"]
    SkipVerify --> StatusVerify

    %% Step 4: Generate Report
    StatusVerify --> RunReporter[/"Subagent: Report Generator<br/>mode-specific prompt + verified findings"/]
    RunReporter --> CaptureReport["Capture report path +<br/>summary counts + severity"]

    %% Step 5: Completion
    CaptureReport --> StatusReport["Status: Report generation complete"]
    StatusReport --> IsPlanReport{"Mode = plan?"}
    IsPlanReport -->|"yes"| PlanCompletion["Display plan completion format<br/>risk counts + report path"]
    IsPlanReport -->|"no"| AuditCompletion["Display audit/diff completion format<br/>severity + verification + finding counts"]
    PlanCompletion --> ExcludedNote{"Excluded skills?"}
    AuditCompletion --> ExcludedNote
    ExcludedNote -->|"yes"| AppendNote["Append excluded skills note"]
    ExcludedNote -->|"no"| Done([Scan Complete])
    AppendNote --> Done

    %% Styling
    classDef subagent fill:#4a90d9,color:#fff,stroke:#2c5f8a
    classDef stop fill:#e74c3c,color:#fff,stroke:#c0392b
    classDef decision fill:#f5c542,color:#333,stroke:#d4a017
    classDef status fill:#2ecc71,color:#fff,stroke:#27ae60
    class RunProfiler,RunAssessor,RetryAssessor,RunVerifier,RetryVerifier,RunReporter subagent
    class InvalidStop,EmptyStop,FilterStop,ProfileFail,SkillStop,NoSkillStop,AllFailStop stop
    class DetectMode,TargetSkill,ValidateSkill,SpecificOverride,CheckEmpty,AllDone,CheckAllFailed,IsPlanMode,HasUnverified,AllVerified,IsPlanReport,ExcludedNote decision
    class StatusSetup,StatusProfile,StatusAssess,StatusVerify,StatusReport status
```

## Related Issue(s)

- Closes #794
- Closes #793
- Closes #796
- Closes #795

## Type of Change

Select all that apply:

**Code & Documentation:**

* [x] Bug fix (non-breaking change fixing an issue)
* [x] New feature (non-breaking change adding functionality)
* [ ] Breaking change (fix or feature causing existing functionality to change)
* [ ] Documentation update

**Infrastructure & Configuration:**

* [ ] GitHub Actions workflow
* [ ] Linting configuration (markdown, PowerShell, etc.)
* [ ] Security configuration
* [ ] DevContainer configuration
* [ ] Dependency update

**AI Artifacts:**

* [x] Reviewed contribution with `prompt-builder` agent and addressed all feedback
* [x] Copilot instructions (`.github/instructions/*.instructions.md`)
* [ ] Copilot prompt (`.github/prompts/*.prompt.md`)
* [x] Copilot agent (`.github/agents/*.agent.md`)
* [x] Copilot skill (`.github/skills/*/SKILL.md`)

> Note for AI Artifact Contributors:
>
> * Agents: Research, indexing/referencing other project (using standard VS Code GitHub Copilot/MCP tools), planning, and general implementation agents likely already exist. Review `.github/agents/` before creating new ones.
> * Skills: Must include both bash and PowerShell scripts. See [Skills](../docs/contributing/skills.md).
> * Model Versions: Only contributions targeting the **latest Anthropic and OpenAI models** will be accepted. Older model versions (e.g., GPT-3.5, Claude 3) will be rejected.
> * See [Agents Not Accepted](../docs/contributing/custom-agents.md#agents-not-accepted) and [Model Version Requirements](../docs/contributing/ai-artifacts-common.md#model-version-requirements).

**Other:**

* [ ] Script/automation (`.ps1`, `.sh`, `.py`)
* [ ] Other (please describe):

## Sample Prompts (for AI Artifact Contributions)

<!-- If you checked any boxes under "AI Artifacts" above, provide a sample prompt showing how to use your contribution -->
<!-- Delete this section if not applicable -->

**User Request:**

<!-- What natural language request would trigger this agent/prompt/instruction? -->

> Analyse the code base and produce a detailed security report containing common vulnerabilities

**Execution Flow:**

<!-- Step-by-step: what happens when invoked? Include tool usage, decision points -->

1. The user switches to the `Security Reviewer` agent with the prompt `Analyse the code base and produce a detailed security report`. By default the agent runs in `audit` mode, which performs a full audit of the current codebase.
2.
The Security Reviewer agent then proceeds with the following execution steps via subagents: analyse the codebase and select relevant OWASP skills via the `Codebase Profiler` agent -> create subagents for each identified OWASP skill to analyse the codebase against that skill's knowledge base via the `Skill Assessor` agent -> create new subagents for each OWASP skill to verify and challenge the findings via the `Finding Deep Verifier` agent -> collate results and generate a report via the `Report Generator` agent
3. The report contains the results of the assessment, including links to offending files with detailed explanations of the findings, and remediation suggestions

**Output Artifacts:**

<!-- What files/content are created? Show first 10-20 lines as preview -->

- `audit` mode: `.copilot-tracking/security/{date}/security-report-001.md`
- `diff` mode: `.copilot-tracking/security/{date}/security-report-diff-001.md`
- `plan` mode: `.copilot-tracking/security/{date}/plan-risk-assessment-001.md`

**Success Indicators:**

<!-- How does user know it worked correctly? What validation should they perform? -->

- A detailed report is generated and saved under `.copilot-tracking/security/{date}/`
- The report should contain the following:
  - Summary count
  - Severity breakdown
  - Verification summary
  - Findings by framework
  - Detailed remediation guidance
  - Disproved findings

## Testing

<!-- Describe how you tested these changes -->

Check | Command | Status
-- | -- | --
Markdown linting | `npm run lint:md` | ✅ Pass
Spell checking | `npm run spell-check` | ✅ Pass
Frontmatter validation | `npm run lint:frontmatter` | ✅ Pass
Skill structure validation | `npm run validate:skills` | ✅ Pass
Link validation | `npm run lint:md-links` | ✅ Pass
PowerShell analysis | `npm run lint:ps` | ✅ Pass
Plugin freshness | `npm run plugin:generate` | ✅ Pass

I had to modify `CollectionHelpers.psm1` for `npm run plugin:generate` to work. My OWASP skills contain a handful of `.md` files used for reference. `CollectionHelpers.psm1` would automatically add these `.md` files to `hve-core-all.collection.yaml` with `kind: "0"`. `kind: "0"` is not a recognised `kind` and caused an error, and updating `kind` to `skill` would just get overridden the next time `npm run plugin:generate` ran. To resolve this I updated the script to ignore `.md` files under the `skills` folder.

## Checklist

### Required Checks

* [ ] Documentation is updated (if applicable)
* [x] Files follow existing naming conventions
* [ ] Changes are backwards compatible (if applicable)
* [ ] Tests added for new functionality (if applicable)

### AI Artifact Contributions

<!-- If contributing an agent, prompt, instruction, or skill, complete these checks -->

* [x] Used `/prompt-analyze` to review contribution
* [x] Addressed all feedback from `prompt-builder` review
* [x] Verified contribution follows common standards and type-specific requirements

### Required Automated Checks

The following validation commands must pass before merging:

* [x] Markdown linting: `npm run lint:md`
* [x] Spell checking: `npm run spell-check`
* [x] Frontmatter validation: `npm run lint:frontmatter`
* [x] Skill structure validation: `npm run validate:skills`
* [x] Link validation: `npm run lint:md-links`
* [x] PowerShell analysis: `npm run lint:ps`
* [x] Plugin freshness: `npm run plugin:generate`

## Security Considerations

<!-- ⚠️ WARNING: Do not commit sensitive information such as API keys, passwords, or personal data -->

* [x] This PR does not contain any sensitive or NDA information
* [ ] Any new dependencies have been reviewed for security issues
* [ ] Security-related scripts follow the principle of least privilege

## Additional Notes

<!-- Any additional information that reviewers should know -->

---------

Co-authored-by: Katrien De Graeve <katriendg@users.noreply.github.com>
1 parent 27fbd33 commit cb1fd05


80 files changed: +6054 −326 lines changed


.cspell.json

Lines changed: 2 additions & 1 deletion
```diff
@@ -68,6 +68,7 @@
     "hideable",
     "learning",
     "ˈpræksɪs",
-    "πρᾶξις"
+    "πρᾶξις",
+    "agentic"
   ]
 }
```

.github/CUSTOM-AGENTS.md

Lines changed: 25 additions & 4 deletions
```diff
@@ -68,10 +68,11 @@ The Research-Plan-Implement (RPI) workflow provides a structured approach to com
 
 ### Code and Review Agents
 
-| Agent | Purpose | Key Constraint |
-|--------------------|--------------------------------------------------|---------------------------------------|
-| **pr-review** | 4-phase PR review with tracking artifacts | Review-only; never modifies code |
-| **prompt-builder** | Engineers and validates instruction/prompt files | Dual-persona system with auto-testing |
+| Agent | Purpose | Key Constraint |
+|-----------------------|------------------------------------------------------------------|-----------------------------------------|
+| **pr-review** | 4-phase PR review with tracking artifacts | Review-only; never modifies code |
+| **prompt-builder** | Engineers and validates instruction/prompt files | Dual-persona system with auto-testing |
+| **security-reviewer** | OWASP vulnerability assessment with subagent-driven verification | Delegates all reference reading to subagents |
 
 ### Generator Agents
 
@@ -295,6 +296,26 @@ Users are responsible for verifying their repository's `.gitignore` configuratio
 
 **Critical:** Requires blueprint infrastructure (Terraform or Bicep). Maps threats to specific system components. Generates iteratively with user feedback per section.
 
+### security-reviewer
+
+**Creates:** OWASP vulnerability assessment reports:
+
+* `.copilot-tracking/security/{{YYYY-MM-DD}}/security-report-{{NNN}}.md` (audit mode report)
+* `.copilot-tracking/security/{{YYYY-MM-DD}}/security-report-diff-{{NNN}}.md` (diff mode report)
+* `.copilot-tracking/security/{{YYYY-MM-DD}}/plan-risk-assessment-{{NNN}}.md` (plan mode report)
+
+**Workflow:** Setup → Profile Codebase → Assess Applicable Skills → Verify Findings → Generate Report → Compute Summary
+
+**Modes:**
+
+* `audit` (default): Full codebase scan against applicable OWASP skills
+* `diff`: Scoped scan of changed files relative to the default branch
+* `plan`: Pre-implementation risk assessment of a plan document (skips verification)
+
+**Subagents:** Codebase Profiler, Skill Assessor, Finding Deep Verifier, Report Generator
+
+**Critical:** Orchestrator-only pattern. Delegates codebase profiling, skill assessment, adversarial finding verification, and report generation to specialized subagents. Uses OWASP skills (`owasp-agentic`, `owasp-llm`, `owasp-top-10`) for vulnerability references. Supports incremental comparison with prior scan reports.
+
 ### gen-jupyter-notebook
 
 **Creates:** Exploratory data analysis notebooks:
```

.github/agents/security/security-reviewer.agent.md

Lines changed: 258 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 166 additions & 0 deletions
---
name: Codebase Profiler
description: "Scans the repository to build a technology profile and identify which OWASP skills apply to the codebase - Brought to you by microsoft/hve-core"
tools:
  - search/changes
  - search/codebase
  - search/fileSearch
  - search/textSearch
  - read/readFile
user-invocable: false
---

# Codebase Profiler

Scan the repository to identify its technology stack and determine which OWASP skills apply to the codebase. Return a structured profile for the parent orchestrator.

## Purpose

* Discover languages, frameworks, and infrastructure patterns present in the repository.
* Match discovered technology signals against the known skill catalog.
* Produce a concise, structured codebase profile suitable for downstream skill assessment.
* Include a skill when uncertain whether its signals are present to avoid missing potential vulnerabilities.

## Inputs

* Codebase root directory to scan (defaults to the repository root).
* (Optional) Specific subdirectories or paths to focus the scan on.
* (Optional) Prior profile output to compare or update incrementally.
* (Optional) Changed files list for diff-mode scoped profiling.
* (Optional) Plan document content for plan-mode profiling.

## Constants

Skill resolution: Read the applicable OWASP skill by name (e.g., `owasp-top-10`, `owasp-llm`, `owasp-agentic`).

### Technology Signals

Each skill maps to file patterns, directory conventions, or code patterns that indicate relevance.

```yaml
owasp-agentic:
  - "Multi-agent pipelines"
  - "Tool-use loops"
  - "Memory stores for agents"
owasp-llm:
  - "Prompt templates"
  - "LLM API calls (OpenAI, Anthropic, etc.)"
  - "AI chain orchestration"
owasp-top-10:
  - "HTML/JS/CSS files"
  - "REST API endpoints"
  - "Server-side templates"
  - "Web framework config (Express, Django, Flask, Rails, Spring)"
```

## Codebase Profile Format

Return the profile using this structure. Replace each placeholder with discovered values.

```markdown
## Codebase Profile

**Repository:** <REPO_NAME>
**Mode:** <MODE>
**Primary Languages:** <LANGUAGES>
**Frameworks:** <FRAMEWORKS>

### Key Directories

<DIRECTORIES>

### Technology Summary

<TECH_SUMMARY>

### Applicable Skills

<SKILL_LIST>
```

Where:

* REPO_NAME: Repository name derived from the workspace root.
* MODE: Scanning mode used for profiling (`audit`, `diff`, or `plan`).
* LANGUAGES: Comma-separated list of programming languages found. In plan mode, languages mentioned in the plan.
* FRAMEWORKS: Comma-separated list of frameworks and tools found. In plan mode, frameworks referenced in the plan.
* DIRECTORIES: Bullet list of key directories with brief descriptions. In plan mode, directories referenced in the plan or omitted when the plan contains no directory references.
* TECH_SUMMARY: Two to four sentence overview of the technology stack. In plan mode, summarize the technology landscape described by the plan.
* SKILL_LIST: YAML-style list where each item is a skill name with a brief justification for inclusion.

## Required Steps

### Pre-requisite: Setup

1. Confirm access to file search and codebase search tools.
2. Identify the repository root and name from the workspace context.
3. If the caller provided a prior profile or specific paths, load them as starting context.
4. Determine the profiling mode from the caller prompt: `audit` when no changed files list or plan content is provided, `diff` when a changed files list is provided, `plan` when plan document content is provided.

### Step 1: Scan Repository

Discover technology signals using the approach appropriate to the profiling mode.

#### Audit Mode

Run parallel file searches to discover technology signals across the full codebase.

1. Search for infrastructure and CI/CD files:
   * `**/Dockerfile`, `**/docker-compose.yml`, `**/.github/workflows/**`, `**/Jenkinsfile`, `**/serverless.yml`, `**/terraform/**`
2. Search for dependency manifests:
   * `**/package.json`, `**/requirements.txt`, `**/go.mod`, `**/pom.xml`, `**/Cargo.toml`
3. Search for source code by language:
   * `**/*.py`, `**/*.js`, `**/*.ts`, `**/*.java`, `**/*.go`, `**/*.rb`, `**/*.cs`
4. Search for mobile platform indicators:
   * `**/AndroidManifest.xml`, `**/Info.plist`, `**/pubspec.yaml`
5. Run semantic searches for AI-specific patterns:
   * "LLM API calls OR prompt templates OR OpenAI OR Anthropic OR langchain"
   * "MCP server OR MCP client OR MCP tool definition"
   * "agent pipeline OR multi-agent OR tool-use loop OR memory store"
6. Merge all search results into a unified file inventory.
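For intuition, the audit-mode file discovery above can be approximated from a shell with `git ls-files` pathspecs. This is an illustrative sketch under assumptions, not the agent's actual tooling: the agent uses its own search tools, the helper name `audit_inventory` is hypothetical, and the pattern list is abridged.

```shell
# Hypothetical sketch approximating the audit-mode file searches.
# git's wildcard pathspecs match across directory separators, so '*.py'
# also finds files in subdirectories.
audit_inventory() {
  git ls-files -- \
    '*Dockerfile' '*docker-compose.yml' '*Jenkinsfile' '*serverless.yml' \
    '*package.json' '*requirements.txt' '*go.mod' '*pom.xml' '*Cargo.toml' \
    '*.py' '*.js' '*.ts' '*.java' '*.go' '*.rb' '*.cs' \
    '*AndroidManifest.xml' '*Info.plist' '*pubspec.yaml' \
    | sort -u
}
```

The semantic searches for AI-specific patterns have no simple shell equivalent; they rely on the agent's codebase search tool.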
#### Diff Mode

Scope technology signal detection to the changed files list while gathering full-repo context.

1. Parse the changed files list from the caller prompt.
2. Classify each changed file by extension, directory pattern, and filename against the technology signals mapping.
3. Read changed dependency manifests and configuration files to extract framework and tooling references.
4. Run targeted semantic searches scoped to changed file paths for AI-specific patterns.
5. Optionally scan the full repository tree for additional context that informs the changed files (for example, a changed route handler may indicate a web framework detected in the broader repo).
6. Merge results into a unified file inventory, annotating which signals originated from changed files versus full-repo context.
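The changed-files list that diff mode consumes is produced by the orchestrator's setup flow (merge base, `git diff --name-only`, non-assessable filter). A minimal sketch of that scoping, assuming the default branch name is passed in; the function name and extension list are illustrative, not the agent's implementation:

```shell
# Hypothetical sketch of diff-mode scoping: list files changed relative to
# the merge base with the default branch, dropping non-assessable files.
changed_assessable_files() {
  local base_branch="$1"
  local base
  # Merge base between the default branch and the current HEAD.
  base=$(git merge-base "$base_branch" HEAD) || return 1
  # Changed files only, filtered the way the setup flow describes
  # (.md, .yml, .json, images, etc. are not assessable).
  git diff --name-only "$base" HEAD \
    | grep -Ev '\.(md|markdown|yml|yaml|json|png|jpe?g|gif|svg)$' || true
}
```

If the merge base cannot be computed, the orchestrator falls back to audit mode rather than failing outright.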
#### Plan Mode

Skip file searches entirely. Extract technology signals from the plan document text.

1. Parse the plan document content from the caller prompt.
2. Scan the plan text for technology keywords, programming language names, framework references, infrastructure patterns, and tooling mentions.
3. Match extracted mentions against each entry in the technology signals mapping.
4. Record matched signals with the plan text excerpt that triggered each match.
5. Compile results into a unified signal inventory. Note that all signals are theoretical since they derive from plan text rather than observed files.
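The keyword extraction in plan mode could look roughly like this grep-based sketch. The keyword list here is a stand-in assumption for the real technology signals mapping, and `plan_signals` is a hypothetical name:

```shell
# Hypothetical sketch: pull technology mentions out of a plan document.
# The keyword list is abridged and illustrative only.
plan_signals() {
  local plan_file="$1"
  grep -oiE 'openai|anthropic|langchain|express|django|flask|rails|spring|terraform|docker|kubernetes' \
    "$plan_file" \
    | tr '[:upper:]' '[:lower:]' \
    | sort -u
}
```

In practice the agent also records the surrounding plan text excerpt for each match, which plain `grep -o` discards.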
### Step 2: Identify Applicable Skills

1. Compare the unified file inventory against each entry in the technology signals list.
2. Mark a skill as applicable when one or more of its signals are detected.
3. Include a skill when uncertain whether its signals are present; err on the side of inclusion.
4. Record the matching evidence for each applicable skill:
   * Audit mode: file paths or search hits from the full repository scan.
   * Diff mode: file paths from the changed files list. Note which skills are relevant to the diff scope specifically versus derived from full-repo context.
   * Plan mode: plan text excerpts containing the technology mention. Note that signals are theoretical and derived from plan content rather than observed code.
5. Compile the final profile using the codebase profile format, filling in all placeholders with discovered values and setting the Mode field to the active profiling mode.
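The inclusion rule above amounts to a signal-to-skill match over the inventory. A minimal sketch, with an abridged, hypothetical pattern set standing in for the technology signals mapping:

```shell
# Hypothetical sketch: emit a skill name when any of its (abridged) signal
# patterns match a line of the file inventory read from stdin.
applicable_skills() {
  local inventory
  inventory=$(cat)
  grep -qiE '\.(html|js|ts)$|package\.json|requirements\.txt' <<<"$inventory" \
    && echo 'owasp-top-10'
  grep -qiE 'prompt|openai|anthropic|langchain' <<<"$inventory" \
    && echo 'owasp-llm'
  grep -qiE 'agent|tool[-_]use|memory' <<<"$inventory" \
    && echo 'owasp-agentic'
  return 0  # an empty match set is a valid outcome, not an error
}
```

Note this sketch cannot express the "include when uncertain" rule; that judgment belongs to the profiler itself.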
## Response Format

Return the completed codebase profile in the format defined above. Include all sections: repository name, mode, languages, frameworks, key directories, technology summary, and applicable skills with justifications.

Mode-specific response guidance:

* Audit mode: report all sections with evidence from the full repository scan.
* Diff mode: report all sections with evidence prioritized from changed files. Indicate which signals came from the diff scope versus full-repo context.
* Plan mode: report all sections with evidence extracted from plan text. Label signals as theoretical. Omit the key directories section when the plan contains no directory references.

When any input is ambiguous or the scan reveals patterns that do not clearly map to a known skill, include a **Clarifying Questions** section at the end of the response listing specific questions for the parent agent to resolve before proceeding.

Do not modify any files in the repository. Do not include secrets, credentials, or sensitive values in the profile. Keep the profile concise enough to fit in a subagent context window.
