# copilot-scripts

CLI tools for AI code assistance via the GitHub Copilot API. Terminal-native, Unix-composable.

## Features
- Terminal-Native: No GUI, runs in SSH/tmux/headless environments
- Stateful Conversations: Context maintained across interactions
- Streaming Responses: Real-time SSE output
- Model Selection: Aliases (`g`/`c`/`o`) + specific model IDs
- Auto Token Caching: OAuth device flow, local persistence
## Installation

```bash
bun install -g copilot-scripts
```

Or run the tools directly:

```bash
bun src/tools/chatsh.ts
bun src/tools/holefill.ts myfile.ts
bun src/tools/refactor.ts myfile.ts
```

### Authentication

On first run, the tools initiate the GitHub OAuth device flow:
- Visit the displayed GitHub URL
- Enter the code shown in the terminal
- Token cached at `~/.config/copilot-scripts/tokens.json`
- Auto-refresh on expiration
### Wrapper Scripts

Create `~/bin` wrappers:

```bash
mkdir -p ~/bin

# Create chatsh wrapper
cat > ~/bin/chatsh << 'EOF'
#!/bin/bash
exec bun ~/copilot-scripts/src/tools/chatsh.ts "$@"
EOF

# Create holefill wrapper
cat > ~/bin/holefill << 'EOF'
#!/bin/bash
exec bun ~/copilot-scripts/src/tools/holefill.ts "$@"
EOF

# Create refactor wrapper
cat > ~/bin/refactor << 'EOF'
#!/bin/bash
exec bun ~/copilot-scripts/src/tools/refactor.ts "$@"
EOF

chmod +x ~/bin/chatsh ~/bin/holefill ~/bin/refactor
```

Add to PATH:

```bash
# ~/.bashrc or ~/.zshrc
export PATH="$HOME/bin:$PATH"
```

## ChatSH

ChatGPT-like experience in the terminal, with shell command execution.
Features:
- Interactive REPL conversations
- AI suggests/executes bash commands (with confirmation)
- User executes commands directly with `!command`
- Conversation history logged to `~/.copilot-scripts/chatsh_history/`
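ChatSH's confirmation step works because the model wraps proposed commands in `<RUN>` tags, as the example interaction below shows. A minimal sketch of pulling such a command out of a response with `sed`; the response text and the parsing here are illustrative, not the tool's actual implementation:

```shell
response='I will check the directory contents.
<RUN>
ls -la
</RUN>'

# Keep only the lines between <RUN> and </RUN> (sketch, not the tool's parser)
cmd=$(printf '%s\n' "$response" | sed -n '/<RUN>/,/<\/RUN>/{/<RUN>/d;/<\/RUN>/d;p;}')
printf '%s\n' "$cmd"
```

The extracted command would then be shown to the user for y/n confirmation before execution.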
Usage:

```bash
chatsh [model]

# Examples
chatsh      # Default model (gpt-4o)
chatsh c    # Claude 3.5 Sonnet
chatsh o    # GPT-4o
```

Example Interaction:
```
$ chatsh
GPT-4o (gpt-4o)
> What files are in this directory?
I'll check the directory contents.
<RUN>
ls -la
</RUN>
Run this command? (y/n): y
[command output shown]
> Create a hello world script in TypeScript
```

## HoleFill

Fill code placeholders (`.?.`) using AI context.
Features:
- Preserves indentation and style
- Inline imports via `//./path//`, `{-./path-}`, or `#./path#` syntax
- Logs to `~/.copilot-scripts/holefill_history/`
- Hole must be at column 0
Usage:

```bash
holefill <file> [model]

# Examples
holefill app.ts             # Default model
holefill app.ts c           # Claude 3.5 Sonnet
holefill component.tsx o    # GPT-4o
```

Example:
`app.ts`:

```typescript
function fibonacci(n: number): number {
.?.
}
```

After `holefill app.ts`:

```typescript
function fibonacci(n: number): number {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
```

Inline Imports:
`app.mini.ts`:

```typescript
//./src/types.ts//
function processUser(user: User) {
.?.
}
```

The `//./src/types.ts//` line is replaced with the file's contents when the prompt is sent to the AI.
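The inlining step can be approximated in a few lines of shell. This is a sketch that assumes each marker occupies a whole line; the tool's real logic lives in `src/tools/holefill.ts` and may differ:

```shell
mkdir -p demo/src
printf 'interface User { id: string }\n' > demo/src/types.ts
printf '//./src/types.ts//\nfunction processUser(user: User) {\n.?.\n}\n' > demo/app.mini.ts

# Replace each whole-line //path// marker with the referenced file's contents
cd demo
out=$(while IFS= read -r line; do
  case "$line" in
    //*//) path=${line#//}; path=${path%//}; cat "$path" ;;
    *) printf '%s\n' "$line" ;;
  esac
done < app.mini.ts)
printf '%s\n' "$out"
```

The expanded text is what the model sees, so type definitions from imported files are available as completion context.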
## Refactor

AI-powered refactoring with context compaction.
Features:
- Two-phase: compacting (identify relevant code) + editing
- Splits files into numbered blocks
- Token budget management
- Supports write/patch/delete operations
- Multi-file transformation support
Usage:

```bash
refactor <file> [model]

# Examples
refactor src/app.ts         # Default model
refactor src/app.ts c       # Claude 3.5 Sonnet
refactor "src/**/*.ts" o    # All TS files with GPT-4o
```

How it works:
- Context Collection: Recursively crawls relative imports in the target file.
- Reverse Dependency Search: Uses `ripgrep` (if installed) to find other files that import the target file and adds them to the context. This lets the AI fix call-sites in other files when you change a function signature.
- Compacting Phase: The AI identifies blocks relevant to your task.
- Editing Phase: The AI edits only the necessary blocks.
- Output: Structured patches or full rewrites.
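The numbered-blocks idea can be illustrated with `awk` in paragraph mode; this is only a sketch of the concept (the tool's actual block-splitting rules are not documented here):

```shell
printf 'function a() {}\n\nfunction b() {}\n' > demo_split.ts

# Paragraph mode (RS=""): each blank-line-separated chunk becomes a numbered block
out=$(awk 'BEGIN { RS=""; n=0 } { printf "block %d:\n%s\n", n++, $0 }' demo_split.ts)
printf '%s\n' "$out"
```

Numbering blocks this way is what allows the editing phase to address changes to specific blocks (e.g. `<patch block="12">`) instead of rewriting the whole file.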
Interactive Prompts:

```
$ refactor src/app.ts
Model: GPT-4o
Files: src/app.ts (1234 tokens)
Task: Rename function getUserData to fetchUserProfile

[Compacting phase...]
Omitted 45 irrelevant blocks

[Editing phase...]
<patch block="12">
export async function fetchUserProfile(id: string) {
</patch>
<patch block="34">
const profile = await fetchUserProfile(userId);
</patch>

Apply changes? (y/n):
```

## Models

Format: alias or `model_id`
Aliases:
- `c` - Claude 3.5 Sonnet
- `g` - GPT-4
- `i` - Gemini 1.5 Pro
- `o` - GPT-4o (default)

Examples (full model IDs):
- `gpt-4o`
- `claude-3-5-sonnet-20241022`
- `gemini-1.5-pro-002`
## Neovim Integration

Integrate the tools into your Neovim workflow.

`~/.config/nvim/lua/copilot-scripts.lua`:
```lua
local M = {}

-- HoleFill: Complete code at cursor placeholder
function M.hole_fill()
  local filepath = vim.api.nvim_buf_get_name(0)
  if filepath == "" then
    vim.notify("Buffer has no file", vim.log.levels.ERROR)
    return
  end
  vim.cmd('write')
  local cmd = string.format('holefill "%s"', filepath)
  vim.notify("Running HoleFill...", vim.log.levels.INFO)
  vim.fn.jobstart(cmd, {
    on_exit = function(_, exit_code)
      if exit_code == 0 then
        vim.cmd('edit!')
        vim.notify("HoleFill completed!", vim.log.levels.INFO)
      else
        vim.notify("HoleFill failed", vim.log.levels.ERROR)
      end
    end,
  })
end

-- ChatSH: Open terminal with AI chat
function M.chat(model)
  model = model or "o"
  local cmd = string.format('chatsh %s', model)
  vim.cmd('split | terminal ' .. cmd)
  vim.cmd('startinsert')
end

-- Refactor: AI-powered refactoring
function M.refactor(model)
  local filepath = vim.api.nvim_buf_get_name(0)
  if filepath == "" then
    vim.notify("Buffer has no file", vim.log.levels.ERROR)
    return
  end
  vim.cmd('write')
  model = model or "o"
  local cmd = string.format('refactor "%s" %s', filepath, model)
  vim.cmd('split | terminal ' .. cmd)
  vim.cmd('startinsert')
end

return M
```

`~/.config/nvim/init.lua`:
```lua
local copilot = require('copilot-scripts')

-- Key mappings
vim.keymap.set('n', '<leader>af', copilot.hole_fill, { desc = 'AI: Fill hole' })
vim.keymap.set('n', '<leader>ac', function() copilot.chat('c') end, { desc = 'AI: Chat (Claude)' })
vim.keymap.set('n', '<leader>ao', function() copilot.chat('o') end, { desc = 'AI: Chat (GPT-4o)' })

-- Refactor mappings (one per model)
vim.keymap.set('n', '<leader>arc', function() copilot.refactor('c') end, { desc = 'Refactor: Claude' })
vim.keymap.set('n', '<leader>arg', function() copilot.refactor('g') end, { desc = 'Refactor: GPT-4' })
vim.keymap.set('n', '<leader>aro', function() copilot.refactor('o') end, { desc = 'Refactor: GPT-4o' })
vim.keymap.set('n', '<leader>ari', function() copilot.refactor('i') end, { desc = 'Refactor: Gemini' })
```

Code Completion:
- Type `.?.` where code completion is needed
- Press `<leader>af`
- Buffer reloads with the completed code

Chat:
- `<leader>ac` - Claude 3.5 Sonnet chat
- `<leader>ao` - GPT-4o chat
Refactoring:
- Open the file to refactor
- Press a refactor mapping (e.g. `<leader>arc`)
- Enter the task in the terminal
- Review and apply changes
## Why No Thinking Traces?

You might notice that, unlike the original AI-scripts (which uses direct vendor APIs), copilot-scripts does not currently display the dim-colored "thinking" or "reasoning" traces for models like Gemini or Claude.

Why?
- copilot-scripts uses the GitHub Copilot API as a proxy.
- Currently, the GitHub Copilot API hides the raw reasoning tokens from the response stream for most models.
- While models like `gpt-5-mini` or `claude-sonnet-4.5` might perform reasoning internally (and even report `reasoning_tokens` usage), the actual text of that thought process is not streamed back to the client by the API.
- The VS Code Copilot Chat extension displays thinking traces using internal/privileged protocols ("Agent Mode") that are not yet fully exposed in the standard public API endpoint used by this tool.

I will try to enable thinking traces as soon as the Copilot API exposes them for standard consumers.
## Architecture

```
Tools     (CLI entry points: chatsh, holefill, refactor)
    ↓
Core      (CopilotChatInstance, ModelResolver)
    ↓
Services  (Auth, Copilot, FileSystem, Log)
    ↓
API/Utils (streaming, tokenizer)
```
## Development

Requirements:
- Bun 1.0+
- TypeScript 5.9+
- GitHub account (for Copilot API)

Install:

```bash
bun install
```

Type-check:

```bash
bun run typecheck
```

Test:

```bash
bun test
```

## Security

Token Storage:
- Location: `~/.config/copilot-scripts/tokens.json`
- Permissions: 0600 (user read/write only)
- Never logged, only sent to the GitHub API
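You can reproduce the 0600 expectation on a throwaway file; this snippet is illustrative and not part of the tool:

```shell
f=$(mktemp)
chmod 600 "$f"

# The mode string for a 0600 regular file: owner read/write, nothing else
mode=$(ls -l "$f" | cut -c1-10)
printf '%s\n' "$mode"   # -rw-------
rm -f "$f"
```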
Input Validation:
- All external inputs validated
- No shell injection in command execution
- API responses validated against schemas
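The no-shell-injection claim comes down to never re-parsing untrusted text as shell. A minimal illustration of the difference (the variable and commands here are hypothetical, not from the tool's source):

```shell
userinput='; echo INJECTED'

# Unsafe: eval re-parses the string, so the ';' would start a second command
# eval "echo $userinput"

# Safe: the value is passed as data and never re-parsed by the shell
out=$(printf '%s\n' "$userinput")
printf '%s\n' "$out"   # ; echo INJECTED
```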
## Credits

Inspired by Taelin AI Scripts, adapted for the GitHub Copilot API.

## License

MIT - see the LICENSE file.