0.2.3
Fix
Calibration using wrong message count after compressed turns
After a gradient-compressed turn (layers 1-4), calibrate() was called with the total DB message count instead of the compressed window size. On the next turn newMsgCount ≈ 0 (dbCount − dbCount), so expectedInput ≈ lastKnownInput (the size of the last compressed prompt, e.g. 114K tokens). Since 114K < maxInput (168K), layer 0 fired and sent all messages uncompressed → 405K overflow on a 200K-limit model.
Fix: transform() now records the compressed message count in lastTransformedCount. The calibration call uses getLastTransformedCount() so the delta on the next turn is computed relative to the actual compressed window, not the DB snapshot.
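A minimal sketch of the delta computation described above. Only calibrate(), transform(), and getLastTransformedCount() come from the actual code; CalibrationState, expectedInput, and avgTokensPerMsg are illustrative assumptions:

```typescript
// Illustrative sketch only; CalibrationState, expectedInput, and
// avgTokensPerMsg are assumed names, not the real implementation.

interface CalibrationState {
  lastKnownInput: number; // tokens of the last prompt actually sent
  lastKnownCount: number; // message count recorded by calibrate()
}

function expectedInput(
  s: CalibrationState,
  dbCount: number,
  avgTokensPerMsg: number,
): number {
  const newMsgCount = dbCount - s.lastKnownCount;
  return s.lastKnownInput + newMsgCount * avgTokensPerMsg;
}

// Bug: after a compressed turn, calibrate() recorded the DB count (say 900),
// so on the next turn newMsgCount ≈ 0 and expectedInput stays near the
// compressed prompt size, under the maxInput threshold.
const buggy: CalibrationState = { lastKnownInput: 114_000, lastKnownCount: 900 };

// Fix: record the compressed window size from getLastTransformedCount()
// (say 40 messages), so the delta covers everything outside that window.
const fixed: CalibrationState = { lastKnownInput: 114_000, lastKnownCount: 40 };

console.log(expectedInput(buggy, 901, 350)); // 114350: under 168K, layer 0 fires
console.log(expectedInput(fixed, 901, 350)); // 415350: compression triggers
```

With the fixed state, the estimate blows past maxInput and the gradient layers run again, instead of layer 0 dumping the full history.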