
consider high RAM usage of new history implementation #206

@mr-zwets

Description


Claude analysis:

The transaction history is held entirely in memory as a Vue reactive ref (walletHistory in store.ts). Each TransactionHistoryItem contains full inputs[] and outputs[] arrays with cashaddress strings and token data, plus balance/fee/size metadata.
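To make the per-item cost concrete, the shape described above looks roughly like the sketch below. The field and type names here are illustrative assumptions, not the exact mainnet-js TransactionHistoryItem definition:

```typescript
// Illustrative sketch of the per-item shape described above; field
// names are assumptions for illustration, not the exact mainnet-js types.
interface TokenData {
  category: string;
  amount: bigint;
}

interface TxSide {
  cashaddress: string; // full cashaddr string, ~50+ chars each
  value: number;       // satoshis
  token?: TokenData;
}

interface HistoryItemSketch {
  txid: string;
  inputs: TxSide[];    // every input, retained for every historical tx
  outputs: TxSide[];   // every output, likewise
  balance: number;     // wallet balance after this tx
  fee: number;
  size: number;        // raw tx size in bytes
}
```

Because every item permanently retains its full inputs/outputs arrays, the per-item footprint is dominated by the address strings and token objects rather than the scalar metadata.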

Memory cost estimate

A typical 2-in/2-out transaction is roughly 0.5–1 KB as a JS object. For a wallet with 10k+ transactions that's 5–10 MB of live heap just for the
array.

The real cost is incurred during getHistory() construction: mainnet-js fetches and decodes every raw transaction plus every prevout transaction in memory simultaneously. For 10k txs that means decoding roughly 20k+ raw transactions, which can spike to 100–300+ MB during the call before GC reclaims the intermediate buffers.
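The figures above can be checked with back-of-envelope arithmetic. This is an estimate, not a measurement; the 512–1024 bytes per stored item and ~5–15 KB per decoded in-flight transaction are assumed:

```typescript
// Rough memory estimate, not a measurement. Per-object sizes are assumed.
const toMB = (count: number, bytesEach: number) =>
  (count * bytesEach) / (1024 * 1024);

// Steady state: 10k history items held in the reactive ref,
// at an assumed 512–1024 bytes per item.
const steadyLow = toMB(10_000, 512);    // ~4.9 MB
const steadyHigh = toMB(10_000, 1024);  // ~9.8 MB

// Construction spike: each tx plus its prevout txs decoded at once,
// so ~20k raw transactions are live simultaneously. At an assumed
// 5–15 KB per decoded intermediate, that lands in the 100–300 MB range.
const spikeLow = toMB(20_000, 5 * 1024);   // ~98 MB
const spikeHigh = toMB(20_000, 15 * 1024); // ~293 MB
```

The steady-state array is cheap; the transient decode spike is what threatens low-memory environments.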

Current behavior

  1. updateWalletHistory() with count: -1 fetches everything in one shot.
  2. The initial load fetches 100 items, then requestIdleCallback immediately schedules a full unbounded load (store.ts lines 642-645).
  3. On every wallet event (new tx, send, etc.), the full history is re-fetched from scratch.
  4. There is no protection when RAM is low; the browser tab crashes or Electron becomes unresponsive.

The UI already paginates rendering (100 items per page in txHistory.vue), so the DOM isn't the problem; the bottleneck is the data layer holding everything in memory.

Platform impact

  • Desktop browsers / Electron: V8 allows heaps up to ~2–4 GB, so the spike alone won't crash. But on low-RAM machines (4 GB laptops, common in developing countries where BCH adoption is strong), the OS starts paging to disk, causing freezes, or the OOM killer steps in.
  • Mobile (Capacitor): Mobile Safari and Chrome on Android impose much lower per-tab limits (~300–600 MB total). A 200 MB spike during history construction on a phone already holding wallet state, token metadata, BCMR registries, etc. could cause a tab kill (iOS) or a crash (Android). This is the highest-risk platform.

Possible approaches (in order of complexity)

  1. Cap the in-memory history — remove the requestIdleCallback auto-escalation to a full load. Keep isHistoryPartial set to true and show "Load more" on demand. This is the simplest change and eliminates the unbounded spike for the common case.
  2. Paginated fetching — getHistory already supports start/count. Fetch pages as the user scrolls instead of loading all 10k at once.
  3. Persist to IndexedDB, load on demand — cache history items in IndexedDB keyed by tx hash, and only keep the visible page in the reactive ref. This survives page reloads and speeds up re-initialization.
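As a minimal sketch of the paginated-fetching approach: it assumes only that getHistory accepts { start, count }, as noted above; PAGE_SIZE, HistoryItem, and loadNextPage are illustrative names, not existing code in the repo:

```typescript
// Sketch of on-demand paging (approach 2). Assumes a getHistory that
// accepts { start, count }; all names here are illustrative.
const PAGE_SIZE = 100;

interface HistoryItem { txid: string }
type GetHistory = (opts: { start: number; count: number }) => Promise<HistoryItem[]>;

async function loadNextPage(
  getHistory: GetHistory,
  loaded: HistoryItem[],
): Promise<{ items: HistoryItem[]; isPartial: boolean }> {
  // Fetch only the next page instead of count: -1 (everything at once).
  const page = await getHistory({ start: loaded.length, count: PAGE_SIZE });
  return {
    items: loaded.concat(page),
    // A short page means we've reached the end of the history.
    isPartial: page.length === PAGE_SIZE,
  };
}
```

Wired to a scroll handler or a "Load more" button, each call touches at most PAGE_SIZE transactions, so the decode spike stays bounded regardless of total history size.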
