feat: add GitHub Copilot provider support and related configurations #458

d0uub wants to merge 3 commits into glowingjade:main from
Conversation
Walkthrough

Adds a new "copilot" LLM provider (GitHub Copilot): registers the provider and default chat models, implements CopilotProvider with session token management and chat completion calls, wires it into the LLM manager, and extends the type schemas to include the copilot variant.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant App
    participant Manager as LLM Manager
    participant Provider as CopilotProvider
    participant Token as Copilot Token Endpoint
    participant API as Copilot Chat API
    App->>Manager: createProvider(type='copilot')
    Manager->>Provider: new CopilotProvider(config)
    App->>Provider: generateResponse(request)
    rect rgba(220,235,255,0.4)
        note right of Provider: Ensure session token
        Provider->>Provider: read cached token
        alt no token or expired
            Provider->>Token: POST get session (using API key)
            Token-->>Provider: session token + expiry
            Provider->>Provider: cache session
        end
    end
    Provider->>API: POST chat completions (model, messages)
    alt 200 OK
        API-->>Provider: response JSON
        Provider-->>App: normalized LLMResponse
    else 401/403
        note over Provider,API: Invalidate and retry once
        Provider->>Provider: clear session
        Provider->>Token: POST get session
        Token-->>Provider: new session token
        Provider->>API: POST chat completions
        alt success
            API-->>Provider: response JSON
            Provider-->>App: normalized LLMResponse
        else failure
            API-->>Provider: error
            Provider-->>App: throw error
        end
    else other error
        API-->>Provider: error
        Provider-->>App: throw error
    end
```
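The token-caching and single-retry flow in the diagram can be sketched as a small, self-contained TypeScript model. This is a rough illustration, not the PR's actual code: the names `SessionCache`, `chatWithRetry`, `fetchSessionToken`, and `postChat` are hypothetical, and the transport is injected so the logic stands alone.

```typescript
// Hypothetical sketch of the diagram's flow; names are illustrative,
// not the PR's actual identifiers.
type Session = { token: string; exp: number }

class SessionCache {
  private session: Session | null = null

  constructor(
    private fetchSessionToken: () => Promise<Session>,
    private bufferSec = 1000, // expiry safety buffer, per the review discussion
  ) {}

  async getToken(nowSec: number): Promise<string> {
    // Reuse the cached token only while it comfortably outlives "now".
    if (this.session && this.session.exp > nowSec + this.bufferSec) {
      return this.session.token
    }
    this.session = await this.fetchSessionToken()
    return this.session.token
  }

  invalidate(): void {
    this.session = null
  }
}

// On 401, invalidate the cached session, refresh once, and resend;
// a second 401/403 is treated as an invalid key.
async function chatWithRetry(
  cache: SessionCache,
  postChat: (token: string) => Promise<{ status: number; body?: unknown }>,
  nowSec: number,
): Promise<unknown> {
  let resp = await postChat(await cache.getToken(nowSec))
  if (resp.status === 401) {
    cache.invalidate()
    resp = await postChat(await cache.getToken(nowSec))
  }
  if (resp.status === 401 || resp.status === 403) {
    cache.invalidate()
    throw new Error('Invalid token')
  }
  return resp.body
}
```

Injecting `postChat` keeps the retry policy testable without hitting the real endpoints; in the plugin itself the transport would be Obsidian's `requestUrl`.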
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests
Tip

👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs. Please see the documentation for more information.

Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```

Please share your feedback with us on this Discord post. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
🧹 Nitpick comments (4)
src/constants.ts (1)
430-441: Add Copilot models to recommendation lists (optional)

Consider whether you want any Copilot chat models in RECOMMENDED_MODELS_FOR_CHAT / RECOMMENDED_MODELS_FOR_APPLY. If yes, add the corresponding IDs (e.g., copilot-gpt-4.1) to those arrays. If not, ignore.

src/core/llm/copilotProvider.ts (3)
85-94: Use typed exception for invalid/expired session and centralize session reset

Prefer emitting LLMAPIKeyInvalidException for consistency with other providers, and ensure the cached session is cleared before throwing.

Apply this diff:

```diff
-    if (resp.status === 401) {
-      window?.localStorage?.removeItem?.('copilot.session');
-      sessionToken = await this.getCopilotToken();
-      headers.authorization = `Bearer ${sessionToken}`
-      resp = await requestUrl({url: 'https://api.githubcopilot.com/chat/completions',method: 'POST',headers,body: JSON.stringify(payload)});
-    }
-    if (resp.status === 401 || resp.status === 403) {
-      window?.localStorage?.removeItem?.('copilot.session');
-      throw new Error('Invalid token')
-    }
+    if (resp.status === 401) {
+      window?.localStorage?.removeItem?.('copilot.session')
+      sessionToken = await this.getCopilotToken()
+      headers.authorization = `Bearer ${sessionToken}`
+      resp = await requestUrl({
+        url: 'https://api.githubcopilot.com/chat/completions',
+        method: 'POST',
+        headers,
+        body: JSON.stringify(payload),
+      })
+    }
+    if (resp.status === 401 || resp.status === 403) {
+      window?.localStorage?.removeItem?.('copilot.session')
+      throw new LLMAPIKeyInvalidException('Invalid token')
+    }
```
54-64: Minimize spoofed telemetry headers (optional)

Unless the Copilot API requires these specific editor/version headers, consider trimming to the essentials (accept, content-type, authorization). Spoofed headers can change server behavior or become brittle.
120-151: Streaming implementation is non-incremental; usage should be emitted only at the final chunk

You currently synthesize streaming by yielding one chunk per choice with full content. If downstream logic expects OpenAI-like semantics (usage stats only in the final chunk), consider emitting usage only once on the last choice to avoid double-counting.

Apply this diff:

```diff
-    async function* streamGenerator(): AsyncIterable<LLMResponseStreaming> {
-      for (const choice of response.choices) {
+    async function* streamGenerator(): AsyncIterable<LLMResponseStreaming> {
+      for (let i = 0; i < response.choices.length; i++) {
+        const choice = response.choices[i]
+        const isLast = i === response.choices.length - 1
         yield {
           id: response.id ?? '',
           model: response.model ?? model.model,
           choices: [
             {
               finish_reason: choice.finish_reason ?? null,
               delta: {
                 content: choice.message?.content ?? '',
                 role: choice.message?.role ?? 'assistant',
               },
             },
           ],
           object: 'chat.completion.chunk',
-          usage: response.usage,
+          usage: isLast ? response.usage : undefined,
         } as LLMResponseStreaming;
       }
     }
```
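Stripped of the provider-specific types, the "usage only on the last chunk" idea this comment suggests can be shown as a standalone async generator. The types `Choice` and `Chunk` below are simplified stand-ins, not the plugin's actual response types.

```typescript
// Minimal standalone sketch (simplified, hypothetical types) of emitting
// usage stats only on the final simulated-streaming chunk.
type Choice = { content: string }
type Chunk = { content: string; usage?: { total_tokens: number } }

async function* simulateStream(
  choices: Choice[],
  usage: { total_tokens: number },
): AsyncIterable<Chunk> {
  for (let i = 0; i < choices.length; i++) {
    const isLast = i === choices.length - 1
    yield {
      content: choices[i].content,
      // Attach usage only once, on the final chunk, so consumers that
      // aggregate usage across chunks do not double-count tokens.
      usage: isLast ? usage : undefined,
    }
  }
}
```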
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
- src/constants.ts (3 hunks)
- src/core/llm/copilotProvider.ts (1 hunks)
- src/core/llm/manager.ts (2 hunks)
- src/types/chat-model.types.ts (1 hunks)
- src/types/provider.types.ts (1 hunks)
🔇 Additional comments (5)
src/types/provider.types.ts (2)
18-21: Add copilot provider variant – looks consistent with the existing schema

The new discriminated union member for type: 'copilot' reuses the base provider shape, matching how other non-specialized providers are modeled.

10-16: Double-check embedding model types note

The header comment mentions updating src/types/embedding-model.types.ts when adding a provider. Since Copilot doesn't support embeddings (as per constants), it is expected not to add a copilot variant to the embedding schema. Please confirm this is intentional to avoid unnecessary schema variants.

src/core/llm/manager.ts (1)
20-21: Factory wiring for Copilot is correct

Import and switch-case addition are straightforward and align with existing provider patterns.
Also applies to: 80-82
src/types/chat-model.types.ts (1)
28-32: Chat model union extended for Copilot – consistent

Adding providerType: 'copilot' with the base shape mirrors other providers without special fields. This keeps typing coherent with the new defaults in constants.

src/constants.ts (1)

150-157: Provider type info for Copilot – consistent with capabilities (no embeddings, API key required)

The entry aligns with how other providers are defined and matches the Copilot implementation's expectations.
Actionable comments posted: 0
🧹 Nitpick comments (5)
src/core/llm/copilotProvider.ts (5)
93-96: Use the typed exception for invalid tokens

Keep error semantics consistent with the token fetch path.

Apply this diff:

```diff
-    if (resp.status === 401 || resp.status === 403) {
-      window?.localStorage?.removeItem?.('copilot.session');
-      throw new Error('Invalid token')
-    }
+    if (resp.status === 401 || resp.status === 403) {
+      window?.localStorage?.removeItem?.('copilot.session')
+      throw new LLMAPIKeyInvalidException('Invalid token')
+    }
```
1-1: Avoid top-level side effects; move session cleanup into constructor

Clearing storage at module load is brittle and surprises importers. Run it once on provider init instead.

Apply this diff:

```diff
-window?.localStorage?.removeItem?.('copilot.session');
```

And add inside the constructor (see next comment). Based on learnings
24-27: Initialize by clearing stale session once in constructor (not on every import)

Aligns with the agreed startup behavior and preserves caching thereafter.

Apply this diff:

```diff
   constructor(provider: Extract<LLMProvider, { type: 'copilot' }>) {
     super(provider)
     this.apiKey = provider.apiKey ?? ''
+    // Clear stale session once on provider init to avoid 401s after IP changes
+    window?.localStorage?.removeItem?.('copilot.session')
   }
```

Based on learnings
56-66: Revisit hard-coded "editor/user-agent" identifiers

Impersonating other editors can be brittle. Prefer a plugin-specific UA unless Copilot requires these exact values.
107-110: Ensure message content is a string, not null

Prevents downstream type/UX issues when providers return non-string/empty payloads.

Apply this diff:

```diff
-        message: { content: c.message?.content ?? c.message ?? c.output ?? null, role: c.message?.role ?? 'assistant' },
+        message: { content: String(c.message?.content ?? c.message ?? c.output ?? ''), role: c.message?.role ?? 'assistant' },
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/core/llm/copilotProvider.ts(1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-20T05:05:40.144Z
Learnt from: d0uub
PR: glowingjade/obsidian-smart-composer#458
File: src/core/llm/copilotProvider.ts:31-36
Timestamp: 2025-08-20T05:05:40.144Z
Learning: In GitHub Copilot integration for obsidian-smart-composer, clear the localStorage session token ('copilot.session') once in the CopilotProvider constructor to prevent 401 errors from IP changes, then allow normal token caching and reuse in getCopilotToken() method until the token expires.
Applied to files:
src/core/llm/copilotProvider.ts
📚 Learning: 2025-08-20T04:23:04.198Z
Learnt from: d0uub
PR: glowingjade/obsidian-smart-composer#458
File: src/core/llm/copilotProvider.ts:31-36
Timestamp: 2025-08-20T04:23:04.198Z
Learning: For GitHub Copilot token validation in obsidian-smart-composer, use a 1000 second buffer (now + 1000) when checking token expiration. Testing showed that even 600 seconds wasn't sufficient to prevent token expiry issues.
Applied to files:
src/core/llm/copilotProvider.ts
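The 1000-second expiry buffer from the learnings above is easy to isolate as a pure predicate. This is an illustrative helper (the name `isSessionValid` is hypothetical, not the PR's actual identifier): a cached session counts as valid only if it outlives "now" by at least the buffer.

```typescript
// Sketch of the expiry check described in the learnings above.
// Empirical testing in this PR found 600 s insufficient; 1000 s is used.
const EXP_BUFFER_SEC = 1000

function isSessionValid(
  token: string | null,
  exp: number | null,
  nowSec: number,
): boolean {
  return (
    typeof token === 'string' &&
    token.length > 0 &&
    typeof exp === 'number' &&
    Number.isFinite(exp) &&
    exp > nowSec + EXP_BUFFER_SEC
  )
}
```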
🧬 Code graph analysis (1)
src/core/llm/copilotProvider.ts (5)
src/types/provider.types.ts (1)
  LLMProvider (98-98)
src/core/llm/exception.ts (2)
  LLMAPIKeyNotSetException (1-9)
  LLMAPIKeyInvalidException (11-19)
src/types/chat-model.types.ts (1)
  ChatModel (104-104)
src/types/llm/request.ts (3)
  LLMRequestNonStreaming (36-38)
  LLMOptions (84-86)
  LLMRequestStreaming (40-42)
src/types/llm/response.ts (2)
  LLMResponseNonStreaming (12-15)
  LLMResponseStreaming (17-20)
🔇 Additional comments (3)
src/core/llm/copilotProvider.ts (3)
46-53: Handle non-2xx token fetch responses and parse JSON deterministically

Add generic non-2xx handling, avoid await on a property, and derive expiry from common fields.

Apply this diff:

```diff
-    const resp = await requestUrl({url: 'https://api.github.com/copilot_internal/v2/token',method: 'GET',headers,})
-    if (resp.status === 401 || resp.status === 403) throw new LLMAPIKeyInvalidException('Invalid token')
-    const data = await resp.json
-    const newToken = data.token ?? data.access_token ?? undefined
-    const newExp = (data.exp && Number.isFinite(data.exp)) ? data.exp : now + 3600
+    const resp = await requestUrl({
+      url: 'https://api.github.com/copilot_internal/v2/token',
+      method: 'GET',
+      headers,
+    })
+    if (resp.status === 401 || resp.status === 403) {
+      throw new LLMAPIKeyInvalidException('Invalid token')
+    }
+    if (resp.status < 200 || resp.status >= 300) {
+      throw new Error(`Copilot token endpoint error: HTTP ${resp.status} - ${resp.text}`)
+    }
+    const data: any = resp.json ?? JSON.parse(resp.text)
+    const newToken = data.token ?? data.access_token
+    const expFromApi =
+      (typeof data.exp === 'number' && Number.isFinite(data.exp)) ? data.exp :
+      (typeof data.expires_at === 'string' ? Math.floor(new Date(data.expires_at).getTime() / 1000) : undefined) ??
+      (typeof data.expires_in === 'number' && Number.isFinite(data.expires_in) ? now + data.expires_in : undefined)
+    const newExp = expFromApi ?? now + 3600
```
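The expiry-derivation fallback chain suggested in this comment can be factored into a pure helper, which makes the precedence easy to verify. The name `deriveExp` is hypothetical; the field names (`exp`, `expires_at`, `expires_in`) are common token-endpoint conventions assumed here, not confirmed Copilot response fields.

```typescript
// Pure-helper sketch of the expiry fallback: prefer an absolute epoch `exp`,
// then an ISO `expires_at`, then a relative `expires_in`, else default to 1 h.
function deriveExp(
  data: { exp?: unknown; expires_at?: unknown; expires_in?: unknown },
  nowSec: number,
): number {
  if (typeof data.exp === 'number' && Number.isFinite(data.exp)) {
    return data.exp
  }
  if (typeof data.expires_at === 'string') {
    const t = Math.floor(new Date(data.expires_at).getTime() / 1000)
    if (Number.isFinite(t)) return t
  }
  if (typeof data.expires_in === 'number' && Number.isFinite(data.expires_in)) {
    return nowSec + data.expires_in
  }
  return nowSec + 3600 // conservative default: one hour from now
}
```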
122-153: "Streaming" is simulated by chunking a non-streaming response. Verify UI expectations.

If the UI expects token-level deltas or early tokens before completion, this will not feel like real streaming.
Would you like me to switch this to true streaming (SSE) if Copilot supports it, or is simulated streaming sufficient for now?
31-37: Fix token validity logic: timezone offset used as buffer prevents caching; also harden session parsing and return a boolean

Using 8 hours (HK_TIMEZONE_OFFSET) as the validity buffer makes typical 1h Copilot tokens look "always expired," forcing a new fetch every call (risking 429/latency). Replace with a 1000-second buffer, wrap JSON.parse, and make isValid explicitly boolean.

Apply this diff:

```diff
-    const HK_TIMEZONE_OFFSET = 28800; // 8 hours in seconds
-    const getSession = () => {
-      const raw = window?.localStorage?.getItem?.('copilot.session');
-      return raw ? JSON.parse(raw) : { token: null, exp: null };
-    };
-    const isValid = (token: string | null, exp: number | null) => token && exp && exp > now + HK_TIMEZONE_OFFSET;
+    const EXP_BUFFER_SEC = 1000 // safety buffer per empirical testing
+    const getSession = (): { token: string | null; exp: number | null } => {
+      const raw = window?.localStorage?.getItem?.('copilot.session')
+      if (!raw) return { token: null, exp: null }
+      try {
+        const parsed = JSON.parse(raw)
+        const token = typeof parsed?.token === 'string' ? parsed.token : null
+        const exp = Number.isFinite(parsed?.exp) ? parsed.exp : null
+        return { token, exp }
+      } catch {
+        window?.localStorage?.removeItem?.('copilot.session')
+        return { token: null, exp: null }
+      }
+    }
+    const isValid = (token: string | null, exp: number | null): boolean =>
+      Boolean(token) && typeof exp === 'number' && Number.isFinite(exp) && exp > now + EXP_BUFFER_SEC
```

Based on learnings
Description
Added support for GitHub Copilot. Sorry, I don't know how to apply the migration correctly (e.g., v11_v12), so I didn't do it.
Note: If your changes include UI modifications, please include screenshots to help reviewers visualize the changes.
Checklist before requesting a review
- npm run lint:check and npm run type:check
- npm run test

Summary by CodeRabbit