
Conversation

@jasagredo
Contributor

A partial backport of #1714.

The forker was always being opened, even when revalidation was not actually needed, and in the following situation it would be leaked and block the V1 LedgerDB lock forever:

  • 2 blocks,
  • which are adopted in sequence (in increasing select-view order), possibly because they were part of a slot battle and the losing block arrived first,
  • so quickly that the mempool tries to revalidate exactly in between the two blocks;
  • the mempool then opens a forker and loses all references to it, so the forker is held open forever (a leaked resource);
  • a subsequent flush then blocks forever, because flushing requests a write lock while the leaked (reading) forker from the previous step is still open (see the sketch after this list).
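To make the failure mode concrete, here is a minimal, self-contained sketch of the deadlock shape; `RAWLock`, `acquireRead`, `acquireWrite`, and the forker/flush framing are illustrative stand-ins, not the actual V1 LedgerDB lock API:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM

-- A toy reader/writer lock: just a count of open readers (forkers).
newtype RAWLock = RAWLock (TVar Int)

newRAWLock :: IO RAWLock
newRAWLock = RAWLock <$> newTVarIO 0

-- Opening a forker takes read access; it must be released later.
acquireRead :: RAWLock -> IO ()
acquireRead (RAWLock readers) = atomically $ modifyTVar' readers (+ 1)

releaseRead :: RAWLock -> IO ()
releaseRead (RAWLock readers) = atomically $ modifyTVar' readers (subtract 1)

-- A writer (the flush) blocks until no readers are open.
acquireWrite :: RAWLock -> IO ()
acquireWrite (RAWLock readers) = atomically $ readTVar readers >>= check . (== 0)

main :: IO ()
main = do
  lock <- newRAWLock
  -- The mempool revalidation opens a forker (read access) ...
  acquireRead lock
  -- ... and then loses every reference to it, so releaseRead never runs.
  _ <- forkIO $ do
    acquireWrite lock
    putStrLn "flush acquired the write lock (never printed)"
  threadDelay 1000000
  putStrLn "flush is still blocked after 1s: the leaked forker holds the lock"
```

Running this prints the final line while the forked "flush" thread stays blocked indefinitely, which is exactly the hang described above.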

@amesgen
Member

LGTM, apart from updating the PR title/commit message to mention that the core improvement here is that we now actually always close the forker. The fact that we don't open forkers when not necessary is nice, and a great way to ensure this, but it is not the headline.
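For reference, the "always close" shape the new title headlines could look like the following sketch; `Forker`, `openForker`, `closeForker`, and `withForker` are hypothetical names for illustration, not the actual mempool code:

```haskell
import Control.Exception (bracket)

data Forker = Forker  -- stand-in for the real LedgerDB forker handle

openForker :: IO Forker
openForker = Forker <$ putStrLn "forker opened"

closeForker :: Forker -> IO ()
closeForker _ = putStrLn "forker closed"

-- bracket guarantees closeForker runs on every exit path, including
-- exceptions and the early-exit cases that previously leaked the forker.
withForker :: (Forker -> IO a) -> IO a
withForker = bracket openForker closeForker

main :: IO ()
main = withForker $ \_forker ->
  putStrLn "revalidating the mempool against the forker"
```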

@jasagredo force-pushed the js/fix-mempool-dangling-forker branch from a9b2d6b to a996bf1 on October 29, 2025 at 10:24
@jasagredo changed the title from "Ensure mempool only open forkers when actually revalidating" to "Ensure mempool always closes stale forkers" on Oct 29, 2025
@jasagredo
Contributor Author

I'm actually going to leave out the change to how the thread is allocated, because it needs resource-registry 0.2 and I don't want to update that here as well.

@jasagredo force-pushed the js/fix-mempool-dangling-forker branch from 6a92cb5 to 1010e94 on October 29, 2025 at 12:09
@jasagredo jasagredo self-assigned this Oct 29, 2025
@jasagredo jasagredo moved this to 👀 In review in Consensus Team Backlog Oct 29, 2025