
feat: add GitHub Copilot provider support and related configurations#458

Open
d0uub wants to merge 3 commits into glowingjade:main from d0uub:main

Conversation


@d0uub d0uub commented Aug 20, 2025

Description

Added support for GitHub Copilot. Sorry, I don't know how to apply a migration correctly (e.g. v11_v12), so I didn't add one.


Checklist before requesting a review

  • I have reviewed the guidelines for contributing to this repository.
  • I have performed a self-review of my code
  • I have performed a code linting check and type check (by running npm run lint:check and npm run type:check)
  • I have run the test suite (by running npm run test)
  • I have tested the functionality manually

Summary by CodeRabbit

  • New Features
    • Added GitHub Copilot as a selectable AI provider in settings.
    • Includes two chat models: GPT-4.1 and GPT-5 Mini.
    • Requires an API key (no custom base URL needed).
    • Included by default in the provider list.
    • Supports chat completions; embeddings are not supported.


coderabbitai bot commented Aug 20, 2025

Walkthrough

Adds a new "copilot" LLM provider (GitHub Copilot): registers provider and default chat models, implements CopilotProvider with session token management and chat completion calls, wires it into the LLM manager, and extends type schemas to include the copilot variant.

Changes

Provider constants and defaults — src/constants.ts
Adds copilot to PROVIDER_TYPES_INFO, includes Copilot in DEFAULT_PROVIDERS, and adds two Copilot chat models to DEFAULT_CHAT_MODELS (copilot-gpt-4.1, copilot-gpt-5-mini).
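As a rough illustration of what these constants entries imply, here is a hedged sketch; the field names (requireApiKey, supportEmbedding, etc.) are assumptions based on the summary above, not the repository's actual shapes:

```typescript
// Hypothetical sketch of the Copilot additions described above.
// Field names are assumptions, not the actual shapes in src/constants.ts.
const COPILOT_PROVIDER_INFO = {
  type: 'copilot',
  label: 'GitHub Copilot',
  requireApiKey: true,     // an API key is required
  requireBaseUrl: false,   // no custom base URL
  supportEmbedding: false, // chat completions only, no embeddings
} as const

const COPILOT_CHAT_MODELS = [
  { providerType: 'copilot', providerId: 'copilot', id: 'copilot-gpt-4.1', model: 'gpt-4.1' },
  { providerType: 'copilot', providerId: 'copilot', id: 'copilot-gpt-5-mini', model: 'gpt-5-mini' },
] as const
```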
Copilot provider implementation — src/core/llm/copilotProvider.ts
New CopilotProvider class: session token cache (localStorage key copilot.session), API key validation, token fetch/refresh, request header building, non-streaming chat completions, 401/403 handling with retry, a streaming adapter that emits chunks, and unsupported embeddings (throws).
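The session-token caching described here can be pictured with a minimal in-memory sketch. This is not the provider's actual code: the real implementation persists the session in localStorage and calls GitHub's token endpoint, both of which are replaced below by an injected fetchToken callback to keep the example self-contained. The 1000-second expiry buffer follows the value discussed in the review comments.

```typescript
// Simplified sketch of session-token caching: reuse a cached token until it
// comes within EXP_BUFFER_SEC of expiring, then refresh. An in-memory
// variable stands in for localStorage ('copilot.session').
type Session = { token: string; exp: number } // exp: unix time in seconds

const EXP_BUFFER_SEC = 1000 // safety margin before the real expiry

let cachedSession: Session | null = null

async function getSessionToken(
  fetchToken: () => Promise<Session>, // stands in for the GitHub token endpoint
): Promise<string> {
  const now = Math.floor(Date.now() / 1000)
  if (cachedSession && cachedSession.exp > now + EXP_BUFFER_SEC) {
    return cachedSession.token // cached token is still comfortably valid
  }
  cachedSession = await fetchToken() // missing or near expiry: refresh and cache
  return cachedSession.token
}
```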
Provider factory wiring — src/core/llm/manager.ts
Imports and constructs CopilotProvider when provider.type === 'copilot' in the provider factory.
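The wiring follows the familiar switch-on-provider-type factory pattern; a simplified sketch (the names and config shapes here are illustrative, not the actual manager.ts code, which handles many more provider types):

```typescript
// Illustrative provider factory: dispatch on the config's discriminant
// and construct the matching provider class.
type LLMProviderConfig = { type: 'copilot' | 'openai'; apiKey?: string }

interface BaseLLMProvider {
  readonly type: string
}

class CopilotProvider implements BaseLLMProvider {
  readonly type = 'copilot'
  constructor(public readonly config: LLMProviderConfig) {}
}

class OpenAIProvider implements BaseLLMProvider {
  readonly type = 'openai'
  constructor(public readonly config: LLMProviderConfig) {}
}

function createProvider(config: LLMProviderConfig): BaseLLMProvider {
  switch (config.type) {
    case 'copilot':
      return new CopilotProvider(config)
    case 'openai':
      return new OpenAIProvider(config)
  }
}
```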
Type schemas — src/types/chat-model.types.ts, src/types/provider.types.ts
Adds a providerType: 'copilot' variant to chatModelSchema and a type: 'copilot' variant to llmProviderSchema using the base shapes; exported types now include the Copilot variants.
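At the type level, the schema change amounts to adding one more member to a discriminated union. A plain-TypeScript sketch of the effect (the repo derives these types from runtime schemas such as chatModelSchema, which are omitted here):

```typescript
// Type-level sketch only: the project validates these shapes with runtime
// schemas and derives the exported types from them.
type BaseChatModel = { providerId: string; id: string; model: string }

type ChatModel =
  | (BaseChatModel & { providerType: 'openai' })
  | (BaseChatModel & { providerType: 'copilot' }) // new variant

function isCopilotModel(m: ChatModel): boolean {
  return m.providerType === 'copilot'
}
```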

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant App
  participant Manager as LLM Manager
  participant Provider as CopilotProvider
  participant Token as Copilot Token Endpoint
  participant API as Copilot Chat API

  App->>Manager: createProvider(type='copilot')
  Manager->>Provider: new CopilotProvider(config)
  App->>Provider: generateResponse(request)

  rect rgba(220,235,255,0.4)
    note right of Provider: Ensure session token
    Provider->>Provider: read cached token
    alt no token or expired
      Provider->>Token: POST get session (using API key)
      Token-->>Provider: session token + expiry
      Provider->>Provider: cache session
    end
  end

  Provider->>API: POST chat completions (model, messages)
  alt 200 OK
    API-->>Provider: response JSON
    Provider-->>App: normalized LLMResponse
  else 401/403
    note over Provider,API: Invalidate and retry once
    Provider->>Provider: clear session
    Provider->>Token: POST get session
    Token-->>Provider: new session token
    Provider->>API: POST chat completions
    alt success
      API-->>Provider: response JSON
      Provider-->>App: normalized LLMResponse
    else failure
      API-->>Provider: error
      Provider-->>App: throw error
    end
  else other error
    API-->>Provider: error
    Provider-->>App: throw error
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • kevin-on

Poem

A nibble of tokens, a hop to the sky,
I fetched a fresh session—oh my, oh my! 🥕
Two models to chat, with whispers so bright,
I cache what I can, I retry on bite.
Thump goes the paw: Copilot’s in flight.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description Check — ⚠️ Warning: The description contains only a brief note that GitHub Copilot support was added and omits required context such as detailed motivation, behavior changes, related issue links, and proper migration steps, so it does not fully adhere to the repository's description template. Resolution: Please expand the Description section to include a clear summary of what was changed with details on the new provider's behavior, the reasoning or context behind adding Copilot support, any related issue or feature request links, and instructions or notes about applying the necessary migrations.
✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: The title succinctly and accurately summarizes the main change by highlighting the addition of GitHub Copilot provider support along with related configurations, follows conventional commit format, and clearly conveys the intent to reviewers scanning the history.
  • Docstring Coverage — ✅ Passed: No functions found in the changes. Docstring coverage check skipped.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
src/constants.ts (1)

430-441: Add Copilot models to recommendation lists (optional)

Consider whether you want any Copilot chat models in RECOMMENDED_MODELS_FOR_CHAT/RECOMMENDED_MODELS_FOR_APPLY. If yes, add the corresponding IDs (e.g., copilot-gpt-4.1) to those arrays. If not, ignore.

src/core/llm/copilotProvider.ts (3)

85-94: Use typed exception for invalid/expired session and centralize session reset

Prefer emitting LLMAPIKeyInvalidException for consistency with other providers, and ensure the cached session is cleared before throwing.

Apply this diff:

-    if (resp.status === 401) {
-      window?.localStorage?.removeItem?.('copilot.session');
-      sessionToken = await this.getCopilotToken();
-      headers.authorization = `Bearer ${sessionToken}`
-      resp = await requestUrl({url: 'https://api.githubcopilot.com/chat/completions',method: 'POST',headers,body: JSON.stringify(payload)});
-    }
-    if (resp.status === 401 || resp.status === 403) {
-      window?.localStorage?.removeItem?.('copilot.session');
-      throw new Error('Invalid token')
-    }
+    if (resp.status === 401) {
+      window?.localStorage?.removeItem?.('copilot.session')
+      sessionToken = await this.getCopilotToken()
+      headers.authorization = `Bearer ${sessionToken}`
+      resp = await requestUrl({
+        url: 'https://api.githubcopilot.com/chat/completions',
+        method: 'POST',
+        headers,
+        body: JSON.stringify(payload),
+      })
+    }
+    if (resp.status === 401 || resp.status === 403) {
+      window?.localStorage?.removeItem?.('copilot.session')
+      throw new LLMAPIKeyInvalidException('Invalid token')
+    }

54-64: Minimize spoofed telemetry headers (optional)

Unless the Copilot API requires these specific editor/version headers, consider trimming to essentials (accept, content-type, authorization). Spoofed headers can change server behavior or become brittle.


120-151: Streaming implementation is non-incremental; usage should be emitted only at the final chunk

You currently synthesize streaming by yielding one chunk per choice with full content. If downstream logic expects OpenAI-like semantics (usage stats only in the final chunk), consider emitting usage only once on the last choice to avoid double-counting.

Apply this diff:

-    async function* streamGenerator(): AsyncIterable<LLMResponseStreaming> {
-      for (const choice of response.choices) {
+    async function* streamGenerator(): AsyncIterable<LLMResponseStreaming> {
+      for (let i = 0; i < response.choices.length; i++) {
+        const choice = response.choices[i]
+        const isLast = i === response.choices.length - 1
         yield {
           id: response.id ?? '',
           model: response.model ?? model.model,
           choices: [
             {
               finish_reason: choice.finish_reason ?? null,
               delta: {
                 content: choice.message?.content ?? '',
                 role: choice.message?.role ?? 'assistant',
               },
             },
           ],
           object: 'chat.completion.chunk',
-          usage: response.usage,
+          usage: isLast ? response.usage : undefined,
         } as LLMResponseStreaming;
       }
     }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d89c059 and 032d83c.

📒 Files selected for processing (5)
  • src/constants.ts (3 hunks)
  • src/core/llm/copilotProvider.ts (1 hunks)
  • src/core/llm/manager.ts (2 hunks)
  • src/types/chat-model.types.ts (1 hunks)
  • src/types/provider.types.ts (1 hunks)
🔇 Additional comments (5)
src/types/provider.types.ts (2)

18-21: Add copilot provider variant – looks consistent with the existing schema

The new discriminated union member for type: 'copilot' reuses the base provider shape, matching how other non-specialized providers are modeled.


10-16: Double-check embedding model types note

The header comment mentions updating src/types/embedding-model.types.ts when adding a provider. Since Copilot doesn’t support embeddings (as per constants), it’s expected to not add a copilot variant to the embedding schema. Please confirm this is intentional to avoid unnecessary schema variants.

src/core/llm/manager.ts (1)

20-21: Factory wiring for Copilot is correct

Import and switch-case addition are straightforward and align with existing provider patterns.

Also applies to: 80-82

src/types/chat-model.types.ts (1)

28-32: Chat model union extended for Copilot – consistent

Adding providerType: 'copilot' with the base shape mirrors other providers without special fields. This keeps typing coherent with the new defaults in constants.

src/constants.ts (1)

150-157: Provider type info for Copilot – consistent with capabilities (no embeddings, API key required)

The entry aligns with how other providers are defined and matches the Copilot implementation’s expectations.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
src/core/llm/copilotProvider.ts (5)

93-96: Use the typed exception for invalid tokens.

Keep error semantics consistent with token fetch path.

Apply this diff:

-    if (resp.status === 401 || resp.status === 403) {
-      window?.localStorage?.removeItem?.('copilot.session');
-      throw new Error('Invalid token')
-    }
+    if (resp.status === 401 || resp.status === 403) {
+      window?.localStorage?.removeItem?.('copilot.session')
+      throw new LLMAPIKeyInvalidException('Invalid token')
+    }

1-1: Avoid top‑level side effects; move session cleanup into constructor.

Clearing storage at module load is brittle and surprises importers. Run it once on provider init instead.

Apply this diff:

-window?.localStorage?.removeItem?.('copilot.session');

And add inside the constructor (see next comment). Based on learnings


24-27: Initialize by clearing stale session once in constructor (not on every import).

Aligns with the agreed startup behavior and preserves caching thereafter.

Apply this diff:

   constructor(provider: Extract<LLMProvider, { type: 'copilot' }>) {
     super(provider)
     this.apiKey = provider.apiKey ?? ''
+    // Clear stale session once on provider init to avoid 401s after IP changes
+    window?.localStorage?.removeItem?.('copilot.session')
   }

Based on learnings


56-66: Revisit hard‑coded “editor/user‑agent” identifiers.

Impersonating other editors can be brittle. Prefer a plugin‑specific UA unless Copilot requires these exact values.


107-110: Ensure message content is a string, not null.

Prevents downstream type/UX issues when providers return non‑string/empty payloads.

Apply this diff:

-      message: { content: c.message?.content ?? c.message ?? c.output ?? null, role: c.message?.role ?? 'assistant' },
+      message: { content: String(c.message?.content ?? c.message ?? c.output ?? ''), role: c.message?.role ?? 'assistant' },
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 032d83c and bfe07c7.

📒 Files selected for processing (1)
  • src/core/llm/copilotProvider.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-20T05:05:40.144Z
Learnt from: d0uub
PR: glowingjade/obsidian-smart-composer#458
File: src/core/llm/copilotProvider.ts:31-36
Timestamp: 2025-08-20T05:05:40.144Z
Learning: In GitHub Copilot integration for obsidian-smart-composer, clear the localStorage session token ('copilot.session') once in the CopilotProvider constructor to prevent 401 errors from IP changes, then allow normal token caching and reuse in getCopilotToken() method until the token expires.

Applied to files:

  • src/core/llm/copilotProvider.ts
📚 Learning: 2025-08-20T04:23:04.198Z
Learnt from: d0uub
PR: glowingjade/obsidian-smart-composer#458
File: src/core/llm/copilotProvider.ts:31-36
Timestamp: 2025-08-20T04:23:04.198Z
Learning: For GitHub Copilot token validation in obsidian-smart-composer, use a 1000 second buffer (now + 1000) when checking token expiration. Testing showed that even 600 seconds wasn't sufficient to prevent token expiry issues.

Applied to files:

  • src/core/llm/copilotProvider.ts
🧬 Code graph analysis (1)
src/core/llm/copilotProvider.ts (5)
src/types/provider.types.ts (1)
  • LLMProvider (98-98)
src/core/llm/exception.ts (2)
  • LLMAPIKeyNotSetException (1-9)
  • LLMAPIKeyInvalidException (11-19)
src/types/chat-model.types.ts (1)
  • ChatModel (104-104)
src/types/llm/request.ts (3)
  • LLMRequestNonStreaming (36-38)
  • LLMOptions (84-86)
  • LLMRequestStreaming (40-42)
src/types/llm/response.ts (2)
  • LLMResponseNonStreaming (12-15)
  • LLMResponseStreaming (17-20)
🔇 Additional comments (3)
src/core/llm/copilotProvider.ts (3)

46-53: Handle non‑2xx token fetch responses and parse JSON deterministically.

Add generic non‑2xx handling, avoid await on a property, and derive expiry from common fields.

Apply this diff:

-    const resp = await requestUrl({url: 'https://api.github.com/copilot_internal/v2/token',method: 'GET',headers,})
-    if (resp.status === 401 || resp.status === 403) throw new LLMAPIKeyInvalidException('Invalid token')
-    const data = await resp.json
-    const newToken = data.token ?? data.access_token ?? undefined
-    const newExp = (data.exp && Number.isFinite(data.exp)) ? data.exp : now + 3600
+    const resp = await requestUrl({
+      url: 'https://api.github.com/copilot_internal/v2/token',
+      method: 'GET',
+      headers,
+    })
+    if (resp.status === 401 || resp.status === 403) {
+      throw new LLMAPIKeyInvalidException('Invalid token')
+    }
+    if (resp.status < 200 || resp.status >= 300) {
+      throw new Error(`Copilot token endpoint error: HTTP ${resp.status} - ${resp.text}`)
+    }
+    const data: any = resp.json ?? JSON.parse(resp.text)
+    const newToken = data.token ?? data.access_token
+    const expFromApi =
+      (typeof data.exp === 'number' && Number.isFinite(data.exp)) ? data.exp :
+      (typeof data.expires_at === 'string' ? Math.floor(new Date(data.expires_at).getTime() / 1000) : undefined) ??
+      (typeof data.expires_in === 'number' && Number.isFinite(data.expires_in) ? now + data.expires_in : undefined)
+    const newExp = expFromApi ?? now + 3600

122-153: “Streaming” is simulated by chunking a non‑streaming response. Verify UI expectations.

If the UI expects token‑level deltas or early tokens before completion, this will not feel like real streaming.

Would you like me to switch this to true streaming (SSE) if Copilot supports it, or is simulated streaming sufficient for now?


31-37: Fix token validity logic: timezone offset used as buffer prevents caching; also harden session parsing and return a boolean.

Using 8 hours (HK_TIMEZONE_OFFSET) as the validity buffer makes typical 1h Copilot tokens look “always expired,” forcing a new fetch every call (risking 429/latency). Replace with a 1000‑second buffer, wrap JSON.parse, and make isValid explicitly boolean.

Apply this diff:

-    const HK_TIMEZONE_OFFSET = 28800; // 8 hours in seconds
-    const getSession = () => {
-      const raw = window?.localStorage?.getItem?.('copilot.session');
-      return raw ? JSON.parse(raw) : { token: null, exp: null };
-    };
-    const isValid = (token: string | null, exp: number | null) => token && exp && exp > now + HK_TIMEZONE_OFFSET;
+    const EXP_BUFFER_SEC = 1000 // safety buffer per empirical testing
+    const getSession = (): { token: string | null; exp: number | null } => {
+      const raw = window?.localStorage?.getItem?.('copilot.session')
+      if (!raw) return { token: null, exp: null }
+      try {
+        const parsed = JSON.parse(raw)
+        const token = typeof parsed?.token === 'string' ? parsed.token : null
+        const exp = Number.isFinite(parsed?.exp) ? parsed.exp : null
+        return { token, exp }
+      } catch {
+        window?.localStorage?.removeItem?.('copilot.session')
+        return { token: null, exp: null }
+      }
+    }
+    const isValid = (token: string | null, exp: number | null): boolean =>
+      Boolean(token) && typeof exp === 'number' && Number.isFinite(exp) && exp > now + EXP_BUFFER_SEC

Based on learnings
