## Summary
`lossless-claw-enhanced` (fork at v0.5.2) is missing the GitHub Copilot token-exchange fix that has already landed upstream in Martian-Engineering/lossless-claw PR #322 (released in v0.8.0+). As a result, all LCM compactions using `github-copilot/*` summary models fail with `Connection error.` and silently fall back to truncation, losing summaries.
## Repro
- Configure `lossless-claw-enhanced` with a `github-copilot/*` model as the LCM summary model.
- Run any session long enough to trigger compaction.
- Observe `gateway.err.log`:

```
[lcm] empty normalized summary on first attempt; provider=github-copilot;
model=gpt-5.4-mini; ... finish=error; error_message=Connection error.;
retrying with conservative settings
[lcm] retry also returned empty summary; ... falling back to truncation
```
This happens for every Copilot-backed compaction, including with `gpt-5.4-mini`, `gemini-3.1-pro-preview`, and Claude models routed through `github-copilot`.
## Root cause
In `src/plugin/index.ts`, the `complete()` bridge calls `runtime.modelAuth.getApiKeyForModel()` and forwards the resulting key directly to `completeSimple()`. For the `github-copilot` provider this returns the raw GitHub user token (`ghu_...`), which the downstream pi-ai SDK can't use:

- Copilot endpoints require an exchanged Copilot API token (`tid=...`)
- The correct runtime `baseUrl` must be derived from the exchanged token's `proxy-ep` field (e.g. `api.enterprise.githubcopilot.com` vs `api.individual.githubcopilot.com`)
OpenClaw core does this in its agent-runtime layer (`setRuntimeApiKeyForCompletion` → `resolveCopilotApiToken`), but that layer isn't reachable from the plugin path, so LCM's `complete()` skips the exchange entirely.
This is the exact same root cause as upstream issue Martian-Engineering/lossless-claw#269, fixed by PR #322 ("fix: use runtime-ready model auth for summarization", merged 2026-04-08).
## Suggested fix
Easiest path: rebase or cherry-pick upstream PR #322 (and ideally sync to the latest upstream `v0.9.1`, which also contains compatibility fixes for newer OpenClaw releases).
Local workaround currently in production on my side (in case it helps as a reference, but the real fix should be the upstream sync):
```ts
// in src/plugin/index.ts, after `completeOptions` is built,
// before requestMetadata / completeSimple:
if (providerId === "github-copilot" && resolvedApiKey) {
  try {
    const mod = await import(
      "/opt/homebrew/lib/node_modules/openclaw/dist/github-copilot-token-4lekVV0W.js"
    );
    const exchanged = await mod.resolveCopilotApiToken({
      githubToken: resolvedApiKey,
    });
    if (exchanged?.token) {
      resolvedApiKey = exchanged.token;
      (completeOptions as any).apiKey = exchanged.token;
    }
    if (exchanged?.baseUrl) {
      (resolvedModel as any).baseUrl = exchanged.baseUrl;
    }
  } catch (err) {
    console.error("[lcm-patch] copilot token exchange failed:", err);
  }
}
```
After this patch (and a gateway restart): the `Connection error.` count in LCM logs drops to 0.
## Environment

- `lossless-claw-enhanced` 0.5.2 (installed via clawhub)
- OpenClaw 2026.4.15
- Provider: `github-copilot` (Individual plan)
- Models tried: `gpt-5.4-mini`, `gemini-3.1-pro-preview`, `claude-opus-4.7`
## Why it matters
Compaction silently falling back to truncation means users on Copilot-backed setups quietly lose summary fidelity — the symptom shows up not as an error to the user, but as worse long-conversation memory and degraded recall. Worth syncing upstream sooner rather than later.
Happy to test once a fix lands. Thanks for maintaining the fork! 🙏