Description
What problem is this feature trying to solve?
Currently, the clink tool delegates requests to external AI CLIs (Gemini CLI, Claude Code, OpenCode, Codex, etc.) but provides no way to specify which model the target CLI should use. Users are locked into whatever model each CLI has configured as its default.
This creates friction when:
- Different tasks need different models - A quick lookup might work fine with a fast model, but complex code review benefits from a more capable one
- Cost management - Users may want to use cheaper models for routine tasks via clink
- Model experimentation - Testing how different models handle the same prompt through a CLI requires reconfiguring the CLI itself
- Workflow consistency - Other PAL tools (`chat`, `debug`, `thinkdeep`, etc.) all support a `model` parameter, but clink does not
Related: Issue #366 shows user interest in accessing specific models ("grok or other models") through CLI integrations.
Describe the solution you'd like
Add an optional model parameter to the clink tool that passes the model selection to the target CLI:
```python
class CLinkRequest(BaseModel):
    # ... existing fields ...
    model: str | None = Field(
        default=None,
        description="Optional model override passed to the target CLI. Format depends on CLI (e.g., 'gemini-2.5-pro' for Gemini CLI, 'claude-sonnet-4' for Claude Code).",
    )
```

The implementation would pass this to each CLI's model selection mechanism:
- Gemini CLI: `gemini -m <model>` or `--model <model>`
- Claude Code: `claude --model <model>`
- OpenCode: may require config or environment variable
- Codex CLI: uses config-based model selection

If `model` is not provided, the CLI uses its configured default (current behavior).
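In a runner, this could translate into appending the flag when the subprocess command is built. The sketch below is illustrative only: `build_cli_command` is a hypothetical helper, not existing clink code, and it covers just the CLIs with direct flag support listed above.

```python
# Illustrative sketch only; build_cli_command is a hypothetical helper,
# not part of the current clink codebase.
def build_cli_command(base_cmd: list[str], model: str | None) -> list[str]:
    """Return the CLI invocation, adding a model flag only when one is requested."""
    if model is None:
        return list(base_cmd)  # keep the CLI's configured default (current behavior)
    return [*base_cmd, "--model", model]  # Gemini CLI and Claude Code both accept --model


# e.g. ['gemini', '--model', 'gemini-2.5-pro']
print(build_cli_command(["gemini"], "gemini-2.5-pro"))
```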
Describe alternatives you've considered
- Use PAL's native tools instead of clink - Works but loses access to CLI-specific capabilities (Gemini's web search, Claude's computer use, etc.)
- Reconfigure each CLI's default model - Tedious and affects all uses of that CLI, not just PAL-triggered ones
- Create multiple CLI client configs per model - Could define `gemini-pro`, `gemini-flash` as separate clients in `conf/cli_clients/`, but this pollutes the config and doesn't scale
- Per-role model configuration - Add `model` to role definitions in `conf/cli_clients/`. This could complement but not replace a runtime parameter (a precedence sketch follows below).
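If both mechanisms existed, the natural precedence is for the runtime parameter to win over the role default. A minimal sketch of that rule, assuming a hypothetical `resolve_model` helper (neither mechanism is implemented today):

```python
# Hypothetical precedence: request-level override > per-role default > None,
# where None means the target CLI keeps its own configured model.
def resolve_model(request_model: str | None, role_model: str | None) -> str | None:
    return request_model or role_model
```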
Implementation Notes
Each CLI has different model selection mechanisms that would need to be handled:
| CLI | Model Flag | Notes |
|---|---|---|
| Gemini CLI | -m / --model | Direct flag support |
| Claude Code | --model | Direct flag support |
| OpenCode | Config-based | May need OPENCODE_MODEL env var |
| Codex CLI | Config-based | Uses OpenAI config |
The `clink/agents/` module would need updates to pass the `model` parameter through to each CLI runner.
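One possible shape for that plumbing, sketched under the assumption of a simple dispatch on the client name; the function, its signature, and the `OPENCODE_MODEL` variable are illustrative, and the OpenCode/Codex branches would need to be verified against those CLIs' actual configuration mechanisms:

```python
# Illustrative sketch of per-CLI handling in clink/agents/; names are hypothetical.
def apply_model_override(
    cli_name: str,
    cmd: list[str],
    env: dict[str, str],
    model: str | None,
) -> tuple[list[str], dict[str, str]]:
    """Thread an optional model override into the command/env for a given CLI."""
    if model is None:
        return cmd, env  # no override: each CLI keeps its configured default

    if cli_name in ("gemini", "claude"):
        return [*cmd, "--model", model], env  # direct flag support (see table above)
    if cli_name == "opencode":
        # Assumption: an environment-variable override; exact mechanism unverified.
        return cmd, {**env, "OPENCODE_MODEL": model}
    if cli_name == "codex":
        # Config-based selection; would likely require adjusting its config rather
        # than passing a flag, so this sketch leaves the command untouched.
        return cmd, env
    return cmd, env
```

Keeping the `model is None` path identical to today's behavior would preserve backward compatibility for existing client configurations.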