# Consider high RAM usage of history implementation
Claude analysis:
The transaction history is held entirely in memory as a Vue reactive ref (`walletHistory` in `store.ts`). Each `TransactionHistoryItem` contains full `inputs[]` and `outputs[]` arrays with cash address strings and token data, plus balance/fee/size metadata.

## Memory cost estimate

A typical 2-in/2-out transaction is roughly 0.5–1 KB as a JS object. For a wallet with 10k+ transactions, that's 5–10 MB of live heap just for the array.

The real cost is during `getHistory()` construction — mainnet-js fetches and decodes every raw transaction plus every prevout transaction in memory simultaneously. For 10k txs that means decoding ~20k+ raw transactions, which can spike to 100–300+ MB during the call before GC reclaims the intermediate buffers.
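The steady-state estimate above is just multiplication; a back-of-envelope sketch, where the per-item sizes are the assumed figures from the text, not measurements:

```typescript
// Back-of-envelope heap estimate for the steady-state history array.
// 512–1024 bytes per item is the assumed size of a 2-in/2-out
// TransactionHistoryItem object, not a measured value.
const BYTES_PER_ITEM_LOW = 512;
const BYTES_PER_ITEM_HIGH = 1024;
const MB = 1024 * 1024;

function estimateHistoryHeapMb(txCount: number): { lowMb: number; highMb: number } {
  return {
    lowMb: (txCount * BYTES_PER_ITEM_LOW) / MB,
    highMb: (txCount * BYTES_PER_ITEM_HIGH) / MB,
  };
}

// 10k transactions → roughly 5–10 MB of resident heap for the array alone;
// the transient decode cost during getHistory() is far larger.
console.log(estimateHistoryHeapMb(10_000)); // { lowMb: 4.8828125, highMb: 9.765625 }
```

Note this is only the resident array; the 100–300+ MB construction spike comes from the intermediate decoded transactions, which this sketch doesn't model.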
## Current behavior

- `updateWalletHistory()` with `count: -1` fetches everything in one shot
- The initial load fetches 100 items, then `requestIdleCallback` immediately schedules a full unbounded load (`store.ts` lines 642-645)
- On every wallet event (new tx, send, etc.), the full history is re-fetched from scratch
- No protection when RAM is low — the browser tab crashes or Electron becomes unresponsive
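One way to add the missing low-RAM protection could be sketched like this — `fetchHistoryPage` is a hypothetical stand-in for the store's loader (not Cashonize's actual API), and `navigator.deviceMemory` is a Chrome-only hint, so the fallback is conservative:

```typescript
// Sketch: gate the unbounded history load on a device-memory hint.
const FULL_LOAD_MIN_DEVICE_GB = 4;

// Pure decision helper: unknown memory (Safari/Firefox) counts as "low".
function shouldScheduleFullLoad(deviceMemoryGb: number | undefined): boolean {
  return (deviceMemoryGb ?? 0) >= FULL_LOAD_MIN_DEVICE_GB;
}

async function loadHistory(
  fetchHistoryPage: (start: number, count: number) => Promise<unknown[]>,
): Promise<unknown[]> {
  const firstPage = await fetchHistoryPage(0, 100); // bounded initial load
  const g = globalThis as {
    navigator?: { deviceMemory?: number };
    requestIdleCallback?: (cb: () => void) => number;
  };
  // Only escalate to "fetch everything" on devices that report enough RAM.
  if (shouldScheduleFullLoad(g.navigator?.deviceMemory) && g.requestIdleCallback) {
    g.requestIdleCallback(() => void fetchHistoryPage(100, -1)); // -1 = the rest
  }
  return firstPage;
}
```

The key design point is that the unbounded load becomes opt-in based on device capability instead of unconditional.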
The UI already paginates rendering (100 items per page in `txHistory.vue`), so the DOM isn't the problem — it's the data layer holding everything in memory.

## Platform impact
- Desktop browsers / Electron: V8 allows heaps up to ~2–4 GB, so the spike alone won't crash. But on low-RAM machines (4 GB laptops, common in developing countries where BCH adoption is strong), the OS starts paging to disk, causing freezes, or the OOM killer steps in.
- Mobile (Capacitor): Mobile Safari and Chrome on Android impose much lower per-tab limits (~300–600 MB total). A 200 MB spike during history construction on a phone already holding wallet state, token metadata, BCMR registries, etc. could cause a tab kill (iOS) or a crash (Android). This is the highest-risk platform.

## Possible approaches (in order of complexity)
- Cap the in-memory history — remove the `requestIdleCallback` auto-escalation to a full load. Keep `isHistoryPartial` true and show "Load more" on demand. Simplest change; eliminates the unbounded spike for the common case.
- Paginated fetching — `getHistory` already supports `start`/`count`. Fetch pages as the user scrolls instead of loading all 10k at once.
- Persist to IndexedDB, load on demand — cache history items in IndexedDB keyed by tx hash. Only keep the visible page in the reactive ref. Survives page reloads and speeds up re-initialization.