Upstream Claude Code bugs affecting PAI session context — compiled evidence from anthropics/claude-code #792
Replies: 2 comments
---
### PostCompactRecovery hook — working solution for compaction context loss

Built and validated a hook that addresses Bug 1 from this discussion (compaction destroys hook-injected context). Sharing findings in case useful.

**Architecture discovery**
**Validation**

Tested with a canary phrase.
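The canary approach can be sketched as a small check. This is a hypothetical illustration of the technique, not the hook's actual code; the marker string and function name are invented:

```typescript
// Hypothetical sketch of the canary-phrase validation idea: inject a
// unique marker at session start, then test whether it is still present
// after compaction. The marker string here is invented, not the one
// actually used by the hook.
const CANARY = "PAI-CANARY-7f3a";

// True when the injected context survived; false means compaction
// summarized or dropped it.
function canarySurvived(contextText: string, canary: string = CANARY): boolean {
  return contextText.includes(canary);
}

// Example: context before vs. after a compaction that kept only a summary.
const before = `<system-reminder>${CANARY} identity rules</system-reminder>`;
const after = "summary: user greeted the assistant";

console.log(canarySurvived(before)); // true
console.log(canarySurvived(after));  // false
```

If the canary disappears from the post-compaction context, the injected instructions are gone too, which is exactly the failure mode Bug 1 describes.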
**Implementation**

PR #799 adds the hook. It is ~130 lines and uses the existing Settings.json registration:

```json
"SessionStart": [
  { "hooks": [/* existing startup hooks */] },
  {
    "matcher": "compact",
    "hooks": [
      { "type": "command", "command": "${PAI_DIR}/hooks/PostCompactRecovery.hook.ts" }
    ]
  }
]
```
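For context, here is a minimal sketch of what the recovery core of such a hook could look like. The JSON shapes follow my reading of the documented Claude Code hook protocol (SessionStart hooks receive JSON on stdin with a `source` field that is `"compact"` after compaction, and can emit `additionalContext` to inject text); the function name, interface, and file paths are my own illustrations, not PR #799's actual code:

```typescript
// Minimal sketch of a PostCompactRecovery-style hook core. The JSON
// shapes follow the documented Claude Code hook protocol as I read it;
// names and paths are illustrative, not PR #799's actual code.

interface SessionStartInput {
  hook_event_name: string;
  source: string; // "startup" | "resume" | "clear" | "compact"
}

// Pure core: decide what (if anything) to re-inject after compaction.
// Returns the JSON string to print on stdout, or null to stay silent.
function buildRecoveryOutput(
  input: SessionStartInput,
  criticalContext: string,
): string | null {
  if (input.source !== "compact") return null; // only act after compaction
  return JSON.stringify({
    hookSpecificOutput: {
      hookEventName: "SessionStart",
      additionalContext: criticalContext,
    },
  });
}

// Entry-point sketch (Bun), wiring the core to stdin/stdout:
//   const input = JSON.parse(await Bun.stdin.text());
//   const critical = await Bun.file(`${process.env.PAI_DIR}/context/critical.md`).text();
//   const out = buildRecoveryOutput(input, critical);
//   if (out) console.log(out);

console.log(
  buildRecoveryOutput(
    { hook_event_name: "SessionStart", source: "compact" },
    "identity + response-format rules",
  ),
);
```

Keeping the decision logic pure (input in, JSON string out) makes it easy to test without a live Claude Code session.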
---

Massively fixed in 4.0.x.
---
I noticed session memory degrading in my PAI instance over the last couple of weeks and went looking for root causes. I found a cluster of open issues in anthropics/claude-code that directly affect how PAI's hook-injected context behaves — specifically around compaction and the Opus 4.6 model. Several existing PAI issues (#678, #690, #677, #787, #785) describe symptoms that trace back to these upstream bugs.

Sharing what I found in case it's useful for anyone debugging similar issues.
## Upstream Bug 1: Compaction Destroys Hook-Injected Context
Claude Code's automatic context compaction treats `<system-reminder>` content from SessionStart hooks as regular conversation history. During compaction, this content is summarized or dropped.

For PAI, this means the context loaded by `LoadContext` (identity, response format, security rules, workflow routing) can be lost mid-session when compaction fires.

Relevant upstream issues:

- No `postCompact` hook — currently no hook event fires after compaction, so there's no way to re-inject lost context.

## Upstream Bug 2: Opus 4.6 Instruction-Following Regression
The Opus 4.6 model (`claude-opus-4-6`) has a documented regression in instruction adherence. This appears separate from compaction — the model ignores instructions it demonstrably has in context.

Relevant upstream issues:
## How These Compound for PAI
PAI's SessionStart hook injects a large context block. Per #690, v3.0 injects ~83KB (~25K tokens), starting sessions at 12–15% context consumed. This creates a specific failure chain:
This explains the pattern in #678 (context stays at 78% after /clear), #677 (session continuity breaks), and #787 (stale state after compaction).
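The starting-consumption figure checks out arithmetically. A quick back-of-the-envelope sketch, assuming Claude's 200K-token context window and the ~25K-token injection reported in #690:

```typescript
// Back-of-the-envelope check on #690's numbers: ~25K tokens of
// hook-injected context against a 200K-token context window.
const contextWindow = 200_000; // tokens (Claude's documented window size)
const injectedTokens = 25_000; // ~83KB of SessionStart context per #690

const consumedPct = (injectedTokens / contextWindow) * 100;
console.log(`${consumedPct}% consumed before the first user turn`); // prints "12.5% ..."
```

12.5% sits right inside the 12–15% range reported above, so the injection alone accounts for the elevated starting consumption.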
## Upstream Response Status
As of Feb 24, 2026: Anthropic has not responded to any of the referenced claude-code issues. No acknowledgments, no timelines, no official workarounds.
## Possible PAI-Level Mitigations
These don't depend on upstream fixes:
- Claude Code has no `postCompact` hook yet (#3537), but a periodic context health-check that detects degradation and re-injects critical instructions could approximate it.

## References
claude-code upstream:
PAI: