feat: natural language parsing and multi-instance for /consult#223

Merged
avifenesh merged 3 commits into main from feat/consult-nlp-multi-instance
Feb 17, 2026

Conversation

@avifenesh
Collaborator

Summary

  • /consult now accepts free-form natural language: /consult with codex about my auth approach
  • NLP parser extracts tool, effort, count, and question from natural text
  • Flags still work for power users (backward compatible)
  • Multi-instance support: /consult ask 3 codex about this design runs N parallel consultations
  • Command handles NLP + interactive resolution, agent extended for multi-instance, skill unchanged

NLP Parsing Examples

| Input | Parsed |
| --- | --- |
| `with codex about my auth approach` | tool=codex, question="my auth approach" |
| `ask 3 codex about this design` | tool=codex, count=3, question="this design" |
| `gemini should I use redis or postgres` | tool=gemini, question="should I use redis or postgres" |
| `thoroughly ask claude about error handling` | tool=claude, effort=high, question="error handling" |
| `"Is this right?" --tool=gemini --effort=high` | tool=gemini, effort=high, question="Is this right?" |
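As a rough illustration, the extraction behind these examples might look like the following sketch. This is hypothetical code, not the actual command implementation (the command is specified in prose); the tool allow-list and patterns are inferred from the examples above.

```javascript
// Hypothetical sketch of the NLP extraction shown in the table above.
// Explicit flags take precedence over free-form patterns.
const TOOLS = ["codex", "gemini", "claude", "opencode", "copilot"];

function parseConsult(input) {
  const result = { tool: null, effort: null, count: 1, question: null };

  // 1. Strip explicit flags first; they always win over NLP.
  input = input.replace(/--tool=(\S+)/, (_, t) => ((result.tool = t), ""));
  input = input.replace(/--effort=(\S+)/, (_, e) => ((result.effort = e), ""));
  input = input.replace(/--count=(\d+)/, (_, n) => ((result.count = Number(n)), ""));

  // 2. "thoroughly" hints at high effort.
  if (!result.effort && /\bthoroughly\b/.test(input)) result.effort = "high";

  // 3. "ask {N} {tool}" -> count + tool, only for a known tool name.
  const m = input.match(/\bask\s+(\d+)\s+(\w+)/);
  if (m && TOOLS.includes(m[2])) {
    result.count = Number(m[1]);
    result.tool = m[2];
  } else if (!result.tool) {
    result.tool = TOOLS.find((t) => input.includes(t)) ?? null;
  }

  // 4. Question: text after "about", else quoted text.
  const about = input.match(/\babout\s+(.+)$/);
  const quoted = input.match(/"([^"]+)"/);
  result.question = about ? about[1].trim() : quoted ? quoted[1] : null;

  return result;
}
```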

Architecture

Command (NLP + pickers) ─── count=1 ──→ Skill directly
                        └── count>1 ──→ Agent (parallel execution)
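In sketch form, the routing above is a single branch on the resolved count (illustrative only; `runSkill` and `spawnAgent` are hypothetical stand-ins for the skill and agent invocations):

```javascript
// Illustrative routing: count=1 invokes the skill directly,
// count>1 hands off to the agent for parallel execution.
function route(params, { runSkill, spawnAgent }) {
  return params.count > 1 ? spawnAgent(params) : runSkill(params);
}
```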

Files Changed

  • consult.md (command): Rewritten with Phase 1 NLP + Phase 2 interactive + Phase 3 routing
  • consult-agent.md: Extended with multi-instance Steps 4a-4e (indexed temp files, parallel Bash, synthesis)
  • consult-command.test.js: Updated test section markers for new step numbering
  • Adapters: Regenerated

Test plan

  • All 68 consult-command tests pass
  • Full suite: 3,939 tests pass (97 suites)
  • All validators pass
  • Preflight --all passes
  • Backward compatibility: flag-based invocations unchanged

…sult

The /consult command now accepts free-form natural language input in
addition to explicit flags:

- "with codex about my auth approach" → tool=codex, question extracted
- "ask 3 gemini about this design" → tool=gemini, count=3
- "thoroughly ask claude about errors" → tool=claude, effort=high

Multi-instance support (--count=N or "ask N tool") runs N parallel
consultations with the same tool, presenting numbered responses with
a brief synthesis of agreements and differences.

Architecture: command handles NLP parsing + interactive resolution,
agent extended with multi-instance parallel execution, skill unchanged.
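The numbered-responses-plus-synthesis output described above could be assembled along these lines (a hypothetical formatting sketch; the heading style is an assumption, not taken from the spec):

```javascript
// Sketch: number each parallel response, then append the synthesis section.
function formatResponses(responses, synthesis) {
  const numbered = responses
    .map((text, i) => `### Response ${i + 1}\n${text}`)
    .join("\n\n");
  return `${numbered}\n\n### Synthesis\n${synthesis}`;
}
```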
Copilot AI review requested due to automatic review settings February 17, 2026 14:23
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
To continue using code reviews, add credits to your account and enable them for code reviews in your settings.

@gemini-code-assist

Summary of Changes

Hello @avifenesh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the /consult command by integrating natural language processing for more intuitive user input and introducing multi-instance consultation capabilities. These changes aim to make the consultation process more flexible and powerful, allowing users to easily query multiple AI tools in parallel or with natural language, while maintaining backward compatibility for existing flag-based commands.

Highlights

  • Natural Language Parsing: The /consult command now supports free-form natural language input, allowing users to phrase requests like "/consult with codex about my auth approach". The system extracts parameters such as tool, effort, count, and question from the natural text.
  • Multi-Instance Consultations: Introduced support for running multiple parallel consultations using a single command, e.g., "/consult ask 3 codex about this design". This leverages a new consult-agent to manage parallel execution and synthesize results.
  • Backward Compatibility: Existing flag-based invocations of /consult remain fully functional, ensuring power users can continue to use explicit arguments.
  • Enhanced Interactive Resolution: The interactive parameter resolution phase has been refined to ensure all missing parameters are explicitly resolved, preventing silent defaults and improving user guidance.
  • Agent Extension for Parallelism: The consult-agent has been extended to orchestrate multi-instance consultations, including managing temporary files, executing parallel Bash commands, and synthesizing responses.
Changelog
  • tests/consult-command.test.js
    • Updated regex for --tool validation to be more robust.
    • Modified test descriptions and regex for Phase 2 interactive parameter enforcement.
    • Adjusted test section markers to reflect new step numbering in command documentation.
  • adapters/codex/skills/consult/SKILL.md
    • Updated description to include multi-consultation use cases.
    • Refactored argument parsing into explicit flags and natural language extraction phases.
    • Added --count=N flag for specifying parallel instances.
    • Detailed natural language extraction patterns for tool, count, effort, and question.
    • Renamed and updated interactive parameter resolution steps, including a new 'Instances' question.
    • Introduced a new 'Multi-instance' execution path that spawns the consult-agent.
    • Revised error handling to include multi-instance specific errors.
    • Expanded example usage with natural language and mixed flag/NLP inputs.
  • adapters/opencode/agents/consult-agent.md
    • Updated description to explicitly state support for multi-instance parallel consultations.
    • Clarified the agent's role in the Command -> Agent -> Skill pattern, specifically for multi-instance scenarios.
    • Added count as a required parameter for the agent.
    • Introduced a routing mechanism for single vs. multi-instance execution.
    • Detailed new steps for multi-instance execution, including invoking the skill once, writing indexed temp files, executing parallel Bash commands, parsing/formatting results, and cleaning up state.
  • adapters/opencode/commands/consult.md
    • Updated description to include multi-consultation use cases.
    • Refactored argument parsing into explicit flags and natural language extraction phases.
    • Added --count=N flag for specifying parallel instances.
    • Detailed natural language extraction patterns for tool, count, effort, and question.
    • Renamed and updated interactive parameter resolution steps, including a new 'Instances' question.
    • Introduced a new 'Multi-instance' execution path that spawns the consult-agent.
    • Revised error handling to include multi-instance specific errors.
    • Expanded example usage with natural language and mixed flag/NLP inputs.
  • plugins/consult/agents/consult-agent.md
    • Updated description to explicitly state support for multi-instance parallel consultations.
    • Clarified the agent's role in the Command -> Agent -> Skill pattern, specifically for multi-instance scenarios.
    • Added count as a required parameter for the agent.
    • Introduced a routing mechanism for single vs. multi-instance execution.
    • Detailed new steps for multi-instance execution, including invoking the skill once, writing indexed temp files, executing parallel Bash commands, parsing/formatting results, and cleaning up state.
  • plugins/consult/commands/consult.md
    • Updated description to include multi-consultation use cases.
    • Refactored argument parsing into explicit flags and natural language extraction phases.
    • Added --count=N flag for specifying parallel instances.
    • Detailed natural language extraction patterns for tool, count, effort, and question.
    • Renamed and updated interactive parameter resolution steps, including a new 'Instances' question.
    • Introduced a new 'Multi-instance' execution path that spawns the consult-agent.
    • Revised error handling to include multi-instance specific errors.
    • Expanded example usage with natural language and mixed flag/NLP inputs.
Activity
  • The pull request introduces significant new features for the /consult command, including natural language parsing and multi-instance execution capabilities.
  • The command's internal logic has been restructured to accommodate both flag-based and natural language inputs, with flags taking precedence.
  • The consult-agent has been updated to handle the orchestration of parallel consultations, including temporary file management and result synthesis.
  • Test cases for consult-command were adjusted to align with the updated command structure and interactive resolution phases.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces significant and valuable new features to the /consult command, including natural language parsing and multi-instance support. The changes are well-structured, with clear separation of concerns between the command for user interaction and the agent for parallel execution. The NLP parsing rules are thoughtfully designed, and the multi-instance workflow in the agent is robust, covering parallel execution, result synthesis, and state management. The updates to the tests are also thorough. I have one minor suggestion to improve the clarity of an example in the agent's documentation.

"model": "gpt-5.3-codex",
"effort": "high",
"count": 3,
"timestamp": "2026-02-17T12:00:00Z",

Severity: medium

The example JSON for the multi-session state uses a hardcoded future timestamp. To avoid confusion and ensure the implementing agent uses a dynamic value, it would be better to use a placeholder, similar to how [tool] and [count] are used elsewhere in the documentation. This makes it clear that the current timestamp should be inserted here.

Suggested change
"timestamp": "2026-02-17T12:00:00Z",
"timestamp": "{CURRENT_ISO_TIMESTAMP}",

Contributor

Copilot AI left a comment


Pull request overview

This PR adds natural language parsing (NLP) and multi-instance consultation support to the /consult command. The command now accepts free-form input like "/consult with codex about my auth approach" or "/consult ask 3 codex about this design" in addition to the existing flag-based syntax. Multi-instance consultations (count > 1) enable running N parallel consultations with the same tool/parameters.

Changes:

  • Natural language parsing extracts tool, effort, count, and question from user input, with flags taking precedence over NLP
  • Multi-instance architecture: commands route count=1 to skill directly, count>1 to agent for parallel execution
  • Interactive picker extended with optional "Instances" question for ambiguous multi-instance requests

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 17 comments.

| File | Description |
| --- | --- |
| plugins/consult/commands/consult.md | Rewritten with Phase 1 NLP parsing (Steps 1a-1b), Phase 2 interactive resolution (Steps 2a-2d), Phase 3 routing (single/multi), Phase 4 results display |
| plugins/consult/agents/consult-agent.md | Extended with Step 2 routing logic and Steps 4a-4e for multi-instance execution (temp files, parallel Bash, synthesis, cleanup) |
| adapters/opencode/commands/consult.md | Auto-generated adapter with platform-specific transforms (Task -> @agent syntax) |
| adapters/opencode/agents/consult-agent.md | Auto-generated adapter with platform state dir changes and agent invocation syntax |
| adapters/codex/skills/consult/SKILL.md | Auto-generated adapter as skill format for Codex platform |
| __tests__/consult-command.test.js | Updated test expectations for new step numbering (2b→2c, 2c→2d) |


- "{N} {tool}" (e.g., "3 codex") -> count=N, tool
- "{N} instances" -> count=N
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)


Copilot AI Feb 17, 2026


Similar issue - NLP count extraction patterns can parse any number N, but there's no validation at the point of extraction to ensure 1 ≤ N ≤ 5. Add validation instructions after the count extraction patterns to reject invalid counts immediately.

Suggested change
After extracting any numeric count N from natural language, immediately validate that 1 ≤ N ≤ 5:
- If 1 ≤ N ≤ 5, accept N as the count.
- If N < 1 or N > 5, treat the NLP count as invalid: do NOT set `count` from NLP and instead ask the user in Phase 2 to choose an integer between 1 and 5.
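The suggested validation is small enough to sketch directly (illustrative only; the real command is specified in prose, not JavaScript):

```javascript
// Sketch of the 1-5 range check: out-of-range or non-integer values
// leave count unresolved so Phase 2 can ask the user.
function validateCount(n) {
  const count = Number(n);
  if (Number.isInteger(count) && count >= 1 && count <= 5) {
    return { count, resolved: true };
  }
  return { count: null, resolved: false };
}
```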

Copilot uses AI. Check for mistakes.
Comment on lines +129 to 136
- header: "Instances" # SKIP if count resolved or not hinted
question: "How many parallel consultations?"
multiSelect: false
options:
- label: "1 (Single)" description: "Standard single consultation"
- label: "2 (Compare)" description: "Two responses to compare"
- label: "3 (Panel)" description: "Three perspectives"
```

Copilot AI Feb 17, 2026


The Instances picker only offers options 1, 2, and 3, but the specification allows counts from 1-5. Users who want 4 or 5 parallel consultations would have no way to select these values through the interactive picker. Either add options for "4 (Extensive)" and "5 (Maximum)" to the picker, or add an "Other" option that allows custom input (with validation).

Comment on lines +54 to +56
**Count extraction**:
- "ask {N} {tool}" (e.g., "ask 3 codex") -> count=N, tool
- "{N} {tool}" (e.g., "3 codex") -> count=N, tool

Copilot AI Feb 17, 2026


The NLP pattern "ask {N} {tool}" could have an ambiguous parse with "ask {tool}". For example, if a user says "ask 2 gemini questions about authentication", the count extraction pattern might incorrectly extract count=2 and tool=gemini, treating "questions" as part of the question. The order of pattern matching matters here. Consider adding clarification that count extraction should validate that what follows N is actually a valid tool name, not just any word.

Suggested change
**Count extraction**:
- "ask {N} {tool}" (e.g., "ask 3 codex") -> count=N, tool
- "{N} {tool}" (e.g., "3 codex") -> count=N, tool
**Count extraction** (apply only after the tool extraction rules above, and only when `{tool}` is a valid tool name from the allow-list):
- "ask {N} {tool}" (e.g., "ask 3 codex") -> count=N, tool, but only if the word after `{N}` is exactly one of: gemini, codex, claude, opencode, copilot
- "{N} {tool}" (e.g., "3 codex") -> count=N, tool, with the same tool-name validation as above

Comment on lines 80 to 90
@@ -66,9 +89,9 @@ If `--continue` is present:
2. If the file exists, restore the saved tool, session_id, and model from it
3. If the file does not exist, show `[WARN] No previous session found` and proceed as a fresh consultation

Copilot AI Feb 17, 2026


The interaction between --continue and --count is not documented. What should happen if a user runs "/consult --continue --count=3"? Should it continue the previous single session 3 times, continue from a multi-session (if the previous was multi), or should this combination be rejected? The spec should clarify this edge case.

- "ask {N} {tool}" (e.g., "ask 3 codex") -> count=N, tool
- "{N} {tool}" (e.g., "3 codex") -> count=N, tool
- "{N} instances" -> count=N
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)

Copilot AI Feb 17, 2026


Similar issue - NLP count extraction patterns can parse any number N, but there's no validation at the point of extraction to ensure 1 ≤ N ≤ 5. Add validation instructions after the count extraction patterns to reject invalid counts immediately.

Suggested change
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)
- **Validation after extraction**: For any numeric `{N}` extracted by the patterns above, immediately validate that `N` is an integer between 1 and 5 inclusive. If `N` is missing, non-numeric, `< 1`, or `> 5`, do **not** set `count`; instead, treat the count as unresolved and ask the user for a valid value in Phase 2 (or fail with a clear validation error).

Comment on lines +126 to +131
- header: "Instances" # SKIP if count resolved or not hinted
question: "How many parallel consultations?"
options:
- label: "1 (Single)" description: "Standard single consultation"
- label: "2 (Compare)" description: "Two responses to compare"
- label: "3 (Panel)" description: "Three perspectives"

Copilot AI Feb 17, 2026


The Instances picker only offers options 1, 2, and 3, but the specification allows counts from 1-5. Users who want 4 or 5 parallel consultations would have no way to select these values through the interactive picker. Either add options for "4 (Extensive)" and "5 (Maximum)" to the picker, or add an "Other" option that allows custom input (with validation).

3. `--model=VALUE` or `--model VALUE` (any string, including quoted)
4. `--context=VALUE` where VALUE is: diff, file=PATH, or none
5. `--continue` (optionally `--continue=SESSION_ID`)
6. `--count=N` where N is 1-5

Copilot AI Feb 17, 2026


The count parameter validation is mentioned in Step 1a but there's no actual validation logic in the parsing phase. The command should validate that N is between 1-5 when extracting the --count flag or parsing NLP count patterns, and immediately reject out-of-range values with the error message from the error table. Currently, the validation would only happen later when the error is displayed, but it's unclear where the actual check occurs.

Comment on lines +127 to +129
Bash: codex exec "$(cat "{AI_STATE_DIR}/consult/question-1.tmp")" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec "$(cat "{AI_STATE_DIR}/consult/question-2.tmp")" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec "$(cat "{AI_STATE_DIR}/consult/question-3.tmp")" --json -m "gpt-5.3-codex" -a suggest

Copilot AI Feb 17, 2026


The Bash command example uses command substitution with cat to read the temp file content, but this could be problematic if the question contains shell metacharacters that aren't properly escaped. While the skill is responsible for building the command template and should handle quoting, the example shows direct command substitution which could lead to injection if not handled carefully. Consider clarifying that the skill's command template must already include proper quoting around the question parameter, or use a safer approach like passing the file path directly to tools that support file input rather than using $(cat ...).

Suggested change
Bash: codex exec "$(cat "{AI_STATE_DIR}/consult/question-1.tmp")" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec "$(cat "{AI_STATE_DIR}/consult/question-2.tmp")" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec "$(cat "{AI_STATE_DIR}/consult/question-3.tmp")" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec --input-file "{AI_STATE_DIR}/consult/question-1.tmp" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec --input-file "{AI_STATE_DIR}/consult/question-2.tmp" --json -m "gpt-5.3-codex" -a suggest
Bash: codex exec --input-file "{AI_STATE_DIR}/consult/question-3.tmp" --json -m "gpt-5.3-codex" -a suggest

Comment on lines +133 to 140
- header: "Instances" # SKIP if count resolved or not hinted
question: "How many parallel consultations?"
multiSelect: false
options:
- label: "1 (Single)" description: "Standard single consultation"
- label: "2 (Compare)" description: "Two responses to compare"
- label: "3 (Panel)" description: "Three perspectives"
```

Copilot AI Feb 17, 2026


The Instances picker only offers options 1, 2, and 3, but the specification allows counts from 1-5. Users who want 4 or 5 parallel consultations would have no way to select these values through the interactive picker. Either add options for "4 (Extensive)" and "5 (Maximum)" to the picker, or add an "Other" option that allows custom input (with validation).

- "ask {N} {tool}" (e.g., "ask 3 codex") -> count=N, tool
- "{N} {tool}" (e.g., "3 codex") -> count=N, tool
- "{N} instances" -> count=N
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)

Copilot AI Feb 17, 2026


Similar issue - NLP count extraction patterns can parse any number N, but there's no validation at the point of extraction to ensure 1 ≤ N ≤ 5. Add validation instructions after the count extraction patterns to reject invalid counts immediately.

Suggested change
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)
- "few instances" / "multiple" / "several" -> count=ambiguous (ask user in Phase 2)
- After extracting any numeric `{N}` above, immediately validate that `N` is an integer between 1 and 5 (inclusive).
- If `N < 1` or `N > 5`, do **not** treat it as a valid count: clear the count value, keep any other extracted parameters (like tool and question), and in Phase 2 ask the user to choose a count between 1 and 5 (or apply a safe default like 3 if the user declines to choose).

Gemini review: hardcoded timestamp in JSON example could mislead
the implementing agent. Use dynamic placeholder instead.
- Add count validation (1-5) in Phase 1 and agent Step 2
- Document --continue + --count mutual exclusivity
- Add 5th option to Instances picker (was missing 4-5 range)
Copilot AI review requested due to automatic review settings February 17, 2026 14:44
@avifenesh avifenesh merged commit e74c2b7 into main Feb 17, 2026
5 checks passed
@avifenesh avifenesh deleted the feat/consult-nlp-multi-instance branch February 17, 2026 14:44
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.



- ... through `question-{count}.tmp`

Platform state directory:
- Claude Code: `.opencode/`

Copilot AI Feb 17, 2026


The state directory for Claude Code is incorrectly documented as .opencode/ but should be .claude/. The adapter transform incorrectly replaced the Claude Code state directory reference. The source plugin (plugins/consult/agents/consult-agent.md:117) has the correct value .claude/.

Suggested change
- Claude Code: `.opencode/`
- Claude Code: `.claude/`

Comment on lines +136 to +140
options:
- label: "1 (Single)" description: "Standard single consultation"
- label: "2 (Compare)" description: "Two responses to compare"
- label: "3 (Panel)" description: "Three perspectives"
- label: "5 (Full spread)" description: "Five diverse perspectives"

Copilot AI Feb 17, 2026


The interactive "Instances" picker offers options for 1, 2, 3, and 5 parallel consultations, but count=4 is missing despite being valid (line 39 says "--count=N where N is 1-5", and line 60 validates "1 <= count <= 5"). Users can still use --count=4 as a flag, but cannot select it interactively. Consider adding a "4 (Quad)" option for completeness, or update the documentation to explain that the picker offers recommended values while any valid count (1-5) can be specified via flags.

IMPORTANT: Do NOT skip any missing parameter. Do NOT silently default --effort to "medium" or --tool to any value. Present pickers for ALL unresolved parameters.
Map tool choice to lowercase: "Claude" -> "claude", "Codex" -> "codex", etc.
Map effort choice: "Medium (Recommended)" -> "medium", "Low" -> "low", etc.
Map count choice: "1 (Single)" -> 1, "2 (Compare)" -> 2, "3 (Panel)" -> 3.

Copilot AI Feb 17, 2026


The count choice mapping only shows mappings for options 1, 2, and 3, but doesn't include the mapping for "5 (Full spread)" -> 5. Add this mapping for completeness: Map count choice: "1 (Single)" -> 1, "2 (Compare)" -> 2, "3 (Panel)" -> 3, "5 (Full spread)" -> 5.

Suggested change
Map count choice: "1 (Single)" -> 1, "2 (Compare)" -> 2, "3 (Panel)" -> 3.
Map count choice: "1 (Single)" -> 1, "2 (Compare)" -> 2, "3 (Panel)" -> 3, "5 (Full spread)" -> 5.

Comment on lines 111 to 137
@@ -90,23 +119,35 @@ request_user_input:
- label: "OpenCode" description: "Flexible model choice"
- label: "Copilot" description: "GitHub-integrated AI"

- header: "Effort" # SKIP if --effort provided
- header: "Effort" # SKIP if effort resolved
question: "What thinking effort level?"
options:
- label: "Medium (Recommended)" description: "Balanced speed and quality"
- label: "Low" description: "Fast, minimal reasoning"
- label: "High" description: "Thorough analysis"
- label: "Max" description: "Maximum reasoning depth"

- header: "Instances" # SKIP if count resolved or not hinted
question: "How many parallel consultations?"
options:
- label: "1 (Single)" description: "Standard single consultation"
- label: "2 (Compare)" description: "Two responses to compare"
- label: "3 (Panel)" description: "Three perspectives"
- label: "5 (Full spread)" description: "Five diverse perspectives"
```

Copilot AI Feb 17, 2026


The Codex adapter documentation notes that "Each question MUST include a unique id field (e.g., id: \"q1\")," but the example questions below don't include any id fields. Add id fields to each question in the request_user_input examples (e.g., "id: "tool"", "id: "effort"", "id: "instances"") to match the stated requirement.
