Commit 5504b15

fix: /debate command now manages debate workflow directly (#231) (#237)
* fix: /debate command now manages debate workflow directly instead of delegating to orchestrator agent

  Move debate orchestration logic (skill invocation, round execution, verdict synthesis, state saving) from the debate-orchestrator agent into the /debate command itself, following the /consult pattern. The debate-orchestrator agent is now the programmatic entry point for other agents/workflows that need to spawn debates via Task(). Direct user invocations use the command path. Syncs OpenCode and Codex adapters.

  Fixes #231

* test: update debate tests to match new inline orchestration architecture

  The /debate command was refactored from delegating to debate-orchestrator to executing the debate workflow inline (Skill: debate + Skill: consult per round). Update 4 tests that checked for the old spawn pattern:
  - command spawns debate:debate-orchestrator -> invokes skills inline
  - command invokes skill via Task tool in Phase 3 -> Skill blocks in Phase 3
  - command handles orchestrator failure -> handles tool failure during debate
  - orchestrator description mentions proposer/challenger -> programmatic entry point

* fix: address review feedback - inline failure handling, test coverage, least-privilege tools

  - Add inline failure-handling directives in the Phase 3b round loop across all 3 command files (plugin, opencode adapter, codex adapter) for both proposer and challenger consult invocations
  - Remove Task from allowed-tools in the plugin command (least-privilege; the command uses Skill: directly, not Task)
  - Strengthen the test assertion for the command/agent skill reference
  - Add a negative regression test ensuring the command does not spawn debate-orchestrator via Task
  - Add a test for Skill in allowed-tools, add a negative test for Task removal
  - Add a "Challenger fails round 1" check to the error handling test
  - Remove the duplicate skill version test from the alignment block (kept in the cross-file consistency block)

* fix: unify warn message, add sanitization constraint to commands, add sanitization test

  - Update the Error Handling table in all 3 command files to match the inline Phase 3b challenger failure message exactly: `[WARN] Challenger ({tool}) failed on round 1. Proceeding with uncontested proposer position.`
  - Add the "MUST sanitize all tool output before displaying" constraint to all 3 command files (already present in the orchestrator agent, now ported to commands)
  - Add a test verifying the command has an Output Sanitization section

* docs: update CHANGELOG for #231 debate command fix

  Add an unreleased entry for the /debate command inline orchestration refactor.

* enhance: improve agent descriptions, frontmatter completeness, trigger phrases

  - debate-orchestrator: capabilities-focused description replacing the "Programmatic entry point..." wording
  - debate-orchestrator: simplify the Role section triad paragraph to a two-line summary
  - adapters/codex/skills/debate/SKILL.md: add missing version and argument-hint fields
  - adapters/opencode/commands/debate.md: add missing name field to frontmatter
  - debate command (plugin + opencode): add trigger phrases to the description for better command routing

* chore: regenerate adapters and README after debate command changes

* fix: add 'programmatic' to debate-orchestrator description to satisfy CI test

  The test 'orchestrator description describes programmatic entry point' asserts the agent description contains /programmatic/i and /Task\(\)/. The description had 'Task()' but was missing 'programmatic'. Updated to: 'Programmatic entry point for other agents or workflows that need to spawn a structured debate via Task().'

* fix: sync opencode adapter description to match plugin agent

  Update adapters/opencode/agents/debate-orchestrator.md description to match the updated plugins/debate/agents/debate-orchestrator.md description, keeping both files in sync as required by the adapter-freshness check.

* fix: add AI provider Bash tool permissions to /debate command allowed-tools

  Without Bash(claude:*), Bash(gemini:*), etc., all Skill: consult calls in the debate command would fail at runtime, since skills execute within the command's tool permission scope.

* fix: address PR review comments - placeholder consistency, sk-proj attribution, process.env wording, stale agent references

  - Replace {N} with {round} in round display headers (proposer and challenger) for consistency
  - Fix context assembly rule: "rounds 1 through N-2" -> "rounds 1 through {round}-2" to correctly refer to the current round
  - Correct sk-proj-* attribution: it is an OpenAI project key, not an Anthropic key
  - Replace process.env.AI_STATE_DIR with plain English "AI_STATE_DIR environment variable" in Phase 3d
  - Update debate-orchestrator agent: replace "pre-resolved by the /debate command" with "pre-resolved by the caller" and fix the error message to match
  - Regenerate OpenCode and Codex adapter files via gen-adapters.js
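The inline failure-handling policy the commits above describe (proposer failure on the opening round aborts; any other failure stops early and synthesizes from completed rounds) can be sketched as plain JavaScript. This is a minimal illustration, not the actual implementation: the real workflow runs as prompt instructions inside the /debate command, and `runDebate`/`consult` here are hypothetical stand-ins.

```javascript
// Sketch of the Phase 3b failure-handling policy. `consult(role, round)` is a
// hypothetical stand-in for a `Skill: consult` invocation; it returns
// { ok, response }.
function runDebate(consult, rounds) {
  const exchanges = [];
  for (let round = 1; round <= rounds; round++) {
    const proposer = consult('proposer', round);
    if (!proposer.ok) {
      if (round === 1) {
        // Proposer failure on the opening round aborts the whole debate.
        throw new Error('[ERROR] Debate aborted: proposer failed on opening round.');
      }
      // Mid-debate proposer failure: stop early, synthesize from what completed.
      return { exchanges, note: 'early stop: proposer failed mid-debate' };
    }
    exchanges.push({ round, role: 'proposer', response: proposer.response });

    const challenger = consult('challenger', round);
    if (!challenger.ok) {
      // Challenger failure never aborts: warn on round 1, then synthesize
      // from whatever exchanges completed.
      const note = round === 1
        ? '[WARN] Challenger failed on round 1. Proceeding with uncontested proposer position.'
        : 'early stop: challenger failed mid-debate';
      return { exchanges, note };
    }
    exchanges.push({ round, role: 'challenger', response: challenger.response });
  }
  return { exchanges, note: null };
}
```

The asymmetry is deliberate: without an opening proposer position there is nothing to debate, but a missing challenger still leaves a usable (if uncontested) position to synthesize from.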
1 parent 88e8c86 commit 5504b15

File tree

9 files changed: +391 additions, -128 deletions


CHANGELOG.md

Lines changed: 4 additions & 0 deletions

@@ -9,6 +9,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+### Fixed
+
+- **`/debate` command inline orchestration** — The `/debate` command now manages the full debate workflow directly (parse → resolve → execute → verdict), following the `/consult` pattern. The `debate-orchestrator` agent is now the programmatic entry point for other agents/workflows that need to spawn a debate via `Task()`. Fixes issue #231.
+
 ## [5.1.0] - 2026-02-18
 
 ### Added

README.md

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ This came from testing on 1,000+ repositories.
 | [`/sync-docs`](#sync-docs) | Finds outdated references, stale examples, missing CHANGELOG entries |
 | [`/learn`](#learn) | Research any topic, gather online sources, create learning guide with RAG index |
 | [`/consult`](#consult) | Consult another AI CLI tool for a second opinion. Use when you want to cross-check ideas, get alternative approaches, or validate decisions with Gemini, Codex, Claude, OpenCode, or Copilot. |
-| [`/debate`](#debate) | Structured debate between two AI tools to stress-test ideas. Proposer/Challenger format with a verdict. |
+| [`/debate`](#debate) | Use when user asks to "debate", "argue about", "compare perspectives", "stress test idea", "devil advocate", or "tool vs tool". Structured debate between two AI tools with proposer/challenger roles and a verdict. |
 <!-- GEN:END:readme-commands -->
 
 Each command works standalone. Together, they compose into end-to-end pipelines.

__tests__/debate-command.test.js

Lines changed: 34 additions & 19 deletions

@@ -161,8 +161,9 @@ describe('provider configuration - prompt templates', () => {
 
 // ─── 3. Command / Skill / Agent Alignment ───────────────────────────
 describe('command/skill/agent alignment', () => {
-  test('command spawns debate:debate-orchestrator', () => {
-    expect(commandContent).toMatch(/debate:debate-orchestrator/);
+  test('command invokes debate and consult skills inline', () => {
+    expect(commandContent).toMatch(/Skill:\s*debate/);
+    expect(commandContent).toMatch(/Skill:\s*consult/);
   });
 
   test('agent invokes debate skill', () => {
@@ -173,21 +174,23 @@ describe('command/skill/agent alignment', () => {
     expect(agentContent).toMatch(/Skill:\s*consult/);
   });
 
-  test('skill version matches plugin.json version', () => {
-    const fm = parseFrontmatter(skillContent);
-    expect(fm.version).toBe(pluginJson.version);
-  });
-
-  test('command invokes skill via Task tool in Phase 3', () => {
-    expect(commandContent).toMatch(/Task:/);
-    expect(commandContent).toMatch(/debate:debate-orchestrator/);
+  test('command invokes skills via Skill blocks in Phase 3', () => {
+    const phase3Match = commandContent.match(/### Phase 3[\s\S]*$/);
+    expect(phase3Match).not.toBeNull();
+    const phase3 = phase3Match[0];
+    expect(phase3).toMatch(/Skill:\s*debate/);
+    expect(phase3).toMatch(/Skill:\s*consult/);
   });
 
   test('agent has Skill tool for invoking skills', () => {
     const fm = parseFrontmatter(agentContent);
     const toolsStr = Array.isArray(fm.tools) ? fm.tools.join(', ') : fm.tools;
     expect(toolsStr).toContain('Skill');
   });
+
+  test('command does not spawn debate-orchestrator via Task', () => {
+    expect(commandContent).not.toMatch(/subagent_type.*debate-orchestrator|debate:debate-orchestrator/);
+  });
 });
 
 // ─── 4. Security Constraints ────────────────────────────────────────
@@ -210,6 +213,10 @@ describe('security constraints', () => {
     expect(agentContent).toMatch(/Output Sanitization/);
   });
 
+  test('command has output sanitization section', () => {
+    expect(commandContent).toMatch(/## Output Sanitization/);
+  });
+
   test('orchestrator mentions 120s timeout', () => {
     expect(agentContent).toMatch(/120s?\s*timeout/i);
   });
@@ -385,8 +392,10 @@ describe('error handling coverage', () => {
     expect(commandContent).toMatch(/context.*file=PATH|--context=.*file/i);
   });
 
-  test('command handles orchestrator failure', () => {
-    expect(commandContent).toMatch(/Orchestrator fails|Debate failed/i);
+  test('command handles tool failure during debate', () => {
+    expect(commandContent).toMatch(/Proposer fails round 1/i);
+    expect(commandContent).toMatch(/Challenger fails round 1/i);
+    expect(commandContent).toMatch(/Any tool fails mid-debate/i);
   });
 });
 
@@ -436,10 +445,16 @@ describe('cross-file consistency', () => {
     expect(fm.model).toBe('opus');
   });
 
-  test('command allowed-tools includes Task', () => {
+  test('command allowed-tools includes Skill', () => {
+    const fm = parseFrontmatter(commandContent);
+    const tools = fm['allowed-tools'] || '';
+    expect(tools).toContain('Skill');
+  });
+
+  test('command allowed-tools does not include Task (least-privilege)', () => {
     const fm = parseFrontmatter(commandContent);
     const tools = fm['allowed-tools'] || '';
-    expect(tools).toContain('Task');
+    expect(tools).not.toContain('Task');
   });
 
   test('command allowed-tools includes AskUserQuestion', () => {
@@ -453,10 +468,10 @@ describe('cross-file consistency', () => {
     expect(fm.version).toBe(pluginJson.version);
   });
 
-  test('orchestrator description mentions proposer/challenger', () => {
+  test('orchestrator description describes programmatic entry point', () => {
    const fm = parseFrontmatter(agentContent);
-    expect(fm.description).toMatch(/proposer/i);
-    expect(fm.description).toMatch(/challenger/i);
+    expect(fm.description).toMatch(/programmatic/i);
+    expect(fm.description).toMatch(/Task\(\)/);
   });
 
   test('agent tools list includes all 5 provider CLI tools', () => {
@@ -470,8 +485,8 @@ describe('cross-file consistency', () => {
   });
 
   test('command and agent both reference debate skill', () => {
-    // Command spawns orchestrator which invokes skill
-    expect(commandContent).toMatch(/debate/i);
+    // Command executes debate inline via Skill:debate and Skill:consult. Agent is the programmatic entry point for Task() callers.
+    expect(commandContent).toMatch(/Skill:\s*debate/);
     expect(agentContent).toMatch(/Skill:\s*debate/);
   });
 });
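Several of the tests above call a `parseFrontmatter` helper on the command and agent markdown files. The repository's actual helper is not shown in this diff; the following is a minimal sketch of what such a helper could look like, assuming flat `key: value` YAML between `---` fences.

```javascript
// Hypothetical sketch of the parseFrontmatter helper the tests rely on.
// Assumes flat `key: value` pairs between `---` fences; the repository's
// real helper may handle richer YAML (lists, nesting, multiline values).
function parseFrontmatter(content) {
  const match = content.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fm = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    // Strip surrounding double quotes from the value if present.
    const value = line.slice(idx + 1).trim().replace(/^"(.*)"$/, '$1');
    fm[key] = value;
  }
  return fm;
}
```

With a helper of this shape, the least-privilege assertions reduce to simple substring checks on the `allowed-tools` string.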

adapters/codex/skills/debate/SKILL.md

Lines changed: 111 additions & 27 deletions

@@ -5,7 +5,7 @@ description: "Use when user asks to \"debate\", \"argue about\", \"compare persp
 
 # /debate - Structured AI Dialectic
 
-You are executing the /debate command. Your job is to parse the user's request, resolve missing parameters interactively, and spawn the debate orchestrator.
+You are executing the /debate command. Your job is to parse the user's request, resolve missing parameters interactively, and execute the debate directly.
 
 ## Constraints
 
@@ -14,6 +14,7 @@ You are executing the /debate command. Your job is to parse the user's request,
 - MUST validate tool names against allow-list: gemini, codex, claude, opencode, copilot
 - Proposer and challenger MUST be different tools
 - Rounds MUST be 1-5 (default: 2)
+- MUST sanitize all tool output before displaying (see Output Sanitization section below)
 
 ## Execution
 
@@ -151,38 +152,119 @@ If context resolved to "file":
 
 If proposer and challenger resolve to the same tool after selection, show error and re-ask for challenger.
 
-### Phase 3: Spawn Debate Orchestrator
+### Phase 3: Execute Debate
 
-With all parameters resolved, spawn the debate orchestrator agent:
+With all parameters resolved (topic, proposer, challenger, effort, rounds, optional model_proposer, model_challenger, context), execute the debate directly.
 
+#### Phase 3a: Load Debate Templates
+
+Invoke the `debate` skill to load prompt templates, context assembly rules, and synthesis format:
+
+```
+Skill: debate
+Args: "[topic]" --proposer=[proposer] --challenger=[challenger] --rounds=[rounds] --effort=[effort]
+```
+
+The skill returns the prompt templates and rules. Use them for all subsequent steps.
+
+#### Phase 3b: Execute Debate Rounds
+
+For each round (1 through N):
+
+**Build Proposer Prompt:**
+
+- **Round 1**: Use the "Round 1: Proposer Opening" template from the skill. Substitute {topic}.
+- **Round 2+**: Use the "Round 2+: Proposer Defense" template. Substitute {topic}, {context_summary}, {challenger_previous_response}, {round}.
+
+**Context assembly rules:**
+- **Rounds 1-2**: Include full text of all prior exchanges per the skill's context format.
+- **Round 3+**: Summarize rounds 1 through {round}-2 (target 500-800 tokens, preserving core positions, key evidence, all concessions as verbatim quotes, points of disagreement, and any contradictions between rounds). Include only the most recent round's responses in full.
+
+**Invoke Proposer via Consult Skill:**
+
+Only include `--model=[model_proposer]` if the user provided a specific model. If model is "omit", empty, or "auto", do NOT pass --model to the consult skill.
+
+```
+Skill: consult
+Args: "{proposer_prompt}" --tool=[proposer] --effort=[effort] [--model=[model_proposer]] [--context=[context]]
 ```
-Task:
-  subagent_type: "debate:debate-orchestrator"
-  model: opus
-  prompt: |
-    Execute a structured debate with these pre-resolved parameters:
-    - topic: [topic]
-    - proposer: [proposer tool]
-    - challenger: [challenger tool]
-    - effort: [effort]
-    - rounds: [rounds]
-    - model_proposer: [model or "omit"]
-    - model_challenger: [model or "omit"]
-
-    If model is "omit" or empty, do NOT include --model in consult skill invocations. The consult skill will use effort-based defaults.
-    - context: [context or "none"]
-
-    Follow the debate skill templates. Display each round progressively.
-    Deliver a verdict that picks a winner.
+
+Parse the JSON result. Extract the response text. Record: round, role="proposer", tool, response, duration_ms.
+
+If the proposer call fails on round 1, abort: `[ERROR] Debate aborted: proposer ({tool}) failed on opening round. {error}`
+If the proposer call fails on round 2+, skip remaining rounds and proceed to Phase 3c (synthesize from completed rounds, note the early stop).
+
+Display to user immediately:
 ```
+--- Round {round}: {proposer_tool} (Proposer) ---
+
+{proposer_response}
+```
+
+**Build Challenger Prompt:**
+
+- **Round 1**: Use the "Round 1: Challenger Response" template from the skill. Substitute {topic}, {proposer_tool}, {proposer_round1_response}.
+- **Round 2+**: Use the "Round 2+: Challenger Follow-up" template. Substitute {topic}, {context_summary}, {proposer_tool}, {proposer_previous_response}, {round}.
+
+**Invoke Challenger via Consult Skill:**
+
+Only include `--model=[model_challenger]` if the user provided a specific model. If model is "omit", empty, or "auto", do NOT pass --model to the consult skill.
+
+```
+Skill: consult
+Args: "{challenger_prompt}" --tool=[challenger] --effort=[effort] [--model=[model_challenger]] [--context=[context]]
+```
+
+Parse the JSON result. Record: round, role="challenger", tool, response, duration_ms.
+
+If the challenger call fails on round 1, emit `[WARN] Challenger ({tool}) failed on round 1. Proceeding with uncontested proposer position.` then proceed to Phase 3c.
+If the challenger call fails on round 2+, skip remaining rounds and proceed to Phase 3c.
+
+Display to user immediately:
+```
+--- Round {round}: {challenger_tool} (Challenger) ---
+
+{challenger_response}
+```
+
+Assemble context for the next round using the context assembly rules above.
+
+#### Phase 3c: Synthesize and Deliver Verdict
+
+After all rounds complete (or after a partial failure), YOU are the JUDGE. Read all exchanges carefully. Use the synthesis format from the debate skill:
+
+1. **Pick a winner.** Which tool made the stronger argument overall? Why? Cite 2-3 specific arguments that were decisive.
+2. **List agreements.** What did both tools agree on? Include evidence that supports each agreement.
+3. **List disagreements.** Where do they still diverge? What's each side's position?
+4. **List unresolved questions.** What did neither side address adequately?
+5. **Make a recommendation.** What should the user DO? Be specific and actionable.
+
+**Verdict rules (from the debate skill):**
+- You MUST pick a side. "Both approaches have merit" is NOT acceptable.
+- Cite specific arguments from the debate as evidence.
+- The recommendation must be actionable.
+- Be honest about what wasn't resolved.
+
+Display the full synthesis using the format from the debate skill's Synthesis Format section.
+
+#### Phase 3d: Save State
+
+Write the debate state to `{AI_STATE_DIR}/debate/last-debate.json` using the schema from the debate skill.
+
+Platform state directory: use the AI_STATE_DIR environment variable if set. Otherwise:
+- Claude Code: `.claude/`
+- OpenCode: `.opencode/`
+- Codex CLI: `.codex/`
+
+Create the `debate/` subdirectory if it doesn't exist.
+
+## Output Sanitization
 
-### Phase 4: Present Results
+Apply the FULL redaction pattern table from the consult skill (`plugins/consult/skills/consult/SKILL.md`, Output Sanitization section). The skill is the canonical source with all 14 patterns. Do NOT maintain a separate subset here.
 
-Display the orchestrator's output directly. It includes:
-- Progressive round-by-round output (displayed as each round completes)
-- Final synthesis with verdict, agreements, disagreements, and recommendation
+The consult skill's table covers: Anthropic keys (`sk-*`, `sk-ant-*`), OpenAI project keys (`sk-proj-*`), Google keys (`AIza*`), GitHub tokens (`ghp_*`, `gho_*`, `github_pat_*`), AWS keys (`AKIA*`, `ASIA*`), env assignments (`ANTHROPIC_API_KEY=*`, `OPENAI_API_KEY=*`, `GOOGLE_API_KEY=*`, `GEMINI_API_KEY=*`), and auth headers (`Bearer *`).
 
-On failure: `[ERROR] Debate Failed: {specific error message}`
+Read the consult skill file to get the exact patterns and replacements.
 
 ## Error Handling
 
@@ -194,7 +276,9 @@ On failure: `[ERROR] Debate Failed: {specific error message}`
 | Same tool for both | `[ERROR] Proposer and challenger must be different tools.` |
 | Rounds out of range | `[ERROR] Rounds must be 1-5. Got: {rounds}` |
 | Context file not found | `[ERROR] Context file not found: {PATH}` |
-| Orchestrator fails | `[ERROR] Debate failed: {error}` |
+| Proposer fails round 1 | `[ERROR] Debate aborted: proposer ({tool}) failed on opening round. {error}` |
+| Challenger fails round 1 | `[WARN] Challenger ({tool}) failed on round 1. Proceeding with uncontested proposer position.` Then synthesize from available exchanges. |
+| Any tool fails mid-debate | Synthesize from completed rounds. Note the incomplete round in output. |
 
 ## Example Usage
 
adapters/opencode/agents/debate-orchestrator.md

Lines changed: 8 additions & 8 deletions

@@ -1,6 +1,6 @@
 ---
 name: debate-orchestrator
-description: "Orchestrate multi-round debates between AI tools. Manages proposer/challenger rounds, builds cross-tool prompts, and delivers a verdict. Use when the /debate command dispatches a structured debate."
+description: "Orchestrate multi-round debates between AI tools. Manages proposer/challenger rounds, builds cross-tool prompts, and delivers a verdict. Programmatic entry point for other agents or workflows that need to spawn a structured debate via Task()."
 mode: subagent
 ---
 
@@ -16,7 +16,7 @@ mode: subagent
 
 You are the judge and orchestrator of a structured debate between two AI tools. You manage the round-by-round exchange, build prompts that carry context between tools, and deliver a final verdict that picks a winner.
 
-You are spawned by the /debate command with all parameters pre-resolved.
+You are spawned programmatically by other agents or workflows that need a structured debate. All parameters are pre-resolved by the caller.
 
 ## Why Opus Model
 
@@ -26,7 +26,7 @@ This is the most judgment-intensive agent in agentsys. You must: evaluate argume
 
 ### 1. Parse Input
 
-Extract from prompt (ALL pre-resolved by the /debate command):
+Extract from prompt (ALL pre-resolved by the caller):
 
 **Required:**
 - **topic**: The debate question
@@ -42,7 +42,7 @@ Extract from prompt (ALL pre-resolved by the /debate command):
 
 If any required param is missing, return:
 ```json
-{"error": "Missing required parameter: [param]. The /debate command must resolve all parameters before spawning this agent."}
+{"error": "Missing required parameter: [param]. The caller must resolve all parameters before spawning this agent."}
 ```
 
 ### 2. Invoke Debate Skill
@@ -67,7 +67,7 @@ For each round (1 through N):
 
 For context assembly:
 - **Rounds 1-2**: Include full text of all prior exchanges per the skill's context format.
-- **Round 3+**: Summarize rounds 1 through N-2 yourself (you have the full exchange history). Include only the most recent round's responses in full.
+- **Round 3+**: Summarize rounds 1 through {round}-2 yourself (you have the full exchange history). Include only the most recent round's responses in full.
 
 #### 3b. Invoke Proposer via Consult Skill
 
@@ -82,7 +82,7 @@ Parse the JSON result. Extract the response text. Record: round, role="proposer"
 
 Display to user immediately:
 ```
---- Round {N}: {proposer_tool} (Proposer) ---
+--- Round {round}: {proposer_tool} (Proposer) ---
 
 {proposer_response}
 ```
@@ -109,7 +109,7 @@ Parse the JSON result. Record: round, role="challenger", tool, response, duratio
 
 Display to user immediately:
 ```
---- Round {N}: {challenger_tool} (Challenger) ---
+--- Round {round}: {challenger_tool} (Challenger) ---
 
 {challenger_response}
 ```
@@ -153,7 +153,7 @@ Create the `debate/` subdirectory if it doesn't exist.
 
 Apply the FULL redaction pattern table from the consult skill (`plugins/consult/skills/consult/SKILL.md`, Output Sanitization section). The skill is the canonical source with all 14 patterns. Do NOT maintain a separate subset here.
 
-The consult skill's table covers: Anthropic keys (`sk-*`, `sk-ant-*`, `sk-proj-*`), Google keys (`AIza*`), GitHub tokens (`ghp_*`, `gho_*`, `github_pat_*`), AWS keys (`AKIA*`, `ASIA*`), env assignments (`ANTHROPIC_API_KEY=*`, `OPENAI_API_KEY=*`, `GOOGLE_API_KEY=*`, `GEMINI_API_KEY=*`), and auth headers (`Bearer *`).
+The consult skill's table covers: Anthropic keys (`sk-*`, `sk-ant-*`), OpenAI project keys (`sk-proj-*`), Google keys (`AIza*`), GitHub tokens (`ghp_*`, `gho_*`, `github_pat_*`), AWS keys (`AKIA*`, `ASIA*`), env assignments (`ANTHROPIC_API_KEY=*`, `OPENAI_API_KEY=*`, `GOOGLE_API_KEY=*`, `GEMINI_API_KEY=*`), and auth headers (`Bearer *`).
 
 Read the consult skill file to get the exact patterns and replacements.
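The context assembly rule that this commit corrects ("rounds 1 through {round}-2", not "N-2") can be sketched in JavaScript. This is an illustration of the windowing logic only: `summarize` stands in for the model's own 500-800 token summary, and the real rule lives as prompt instructions in the agent and command files.

```javascript
// Sketch of the context assembly rule: rounds 1-2 get full history; from
// round 3 on, rounds 1 through round-2 are summarized and only the most
// recent completed round is included in full. `summarize` is a placeholder
// for the model's own summarization.
function assembleContext(exchanges, round, summarize) {
  if (round <= 2) {
    // Rounds 1-2: full text of all prior exchanges.
    return exchanges.map(e => `Round ${e.round} ${e.role}: ${e.response}`).join('\n');
  }
  // Round 3+: summarize older rounds, keep the latest completed round in full.
  const older = exchanges.filter(e => e.round <= round - 2);
  const latest = exchanges.filter(e => e.round === round - 1);
  return [
    `Summary of rounds 1-${round - 2}: ${summarize(older)}`,
    ...latest.map(e => `Round ${e.round} ${e.role}: ${e.response}`),
  ].join('\n');
}
```

The `{round}-2` fix matters because the summary window must slide with the current round: when building round 4's prompt, rounds 1-2 are summarized and round 3 is quoted in full, regardless of the total round count N.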