What problem is this feature trying to solve?
pal-mcp-server can currently integrate with Claude Code CLI, Gemini CLI, and Codex CLI. GitHub Copilot CLI provides access to a large set of models at a very low subscription cost ($10/mo) with reasonable token allocations, and would be a great addition to the list of agents that can collaborate using pal!
Describe the solution you'd like
I'd like to see Copilot CLI added as a supported CLI interface that clink can connect to, with the ability to select any of the supported models for various uses:
Claude Sonnet 4.6 (default): 1x
Claude Sonnet 4.5: 1x
Claude Haiku 4.5: 0.33x
Claude Opus 4.6: 3x
Claude Opus 4.6 (fast mode, Preview, requires enablement): 30x
Claude Opus 4.5: 3x
Claude Sonnet 4: 1x
Gemini 3 Pro (Preview): 1x
GPT-5.3-Codex: 1x
GPT-5.2-Codex: 1x
GPT-5.2: 1x
GPT-5.1-Codex-Max: 1x
GPT-5.1-Codex: 1x
GPT-5.1: 1x
GPT-5.1-Codex-Mini (Preview): 0.33x
GPT-5 mini: 0x
GPT-4.1: 0x
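To make the request concrete, here is a rough sketch of what a Copilot CLI client entry might look like in a pal-mcp-server configuration. This is purely illustrative: the schema, key names, and model identifier strings below are assumptions on my part, not pal-mcp-server's actual config format or GitHub Copilot's actual model IDs.

```json
{
  "_note": "hypothetical sketch only; pal-mcp-server's real schema may differ",
  "clients": {
    "copilot": {
      "command": "copilot",
      "models": [
        "claude-sonnet-4.6",
        "claude-haiku-4.5",
        "gpt-5.1-codex",
        "gemini-3-pro"
      ],
      "default_model": "claude-sonnet-4.6"
    }
  }
}
```

The key point is simply that, like the existing Claude Code / Gemini / Codex integrations, the Copilot entry would let the user pick a model per task so cheap models (0x/0.33x multipliers) can handle lightweight work.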
Describe alternatives you've considered
I can point pal at the Copilot API directly, but it appears to make much less efficient use of the provided tokens: each individual request counts as a "premium request", and tokens run out within about 15 minutes, whereas Copilot CLI allows chains of requests that consume fewer tokens.
Feature Category
Integration enhancement
Contribution