Conversation

@vlad-lesin (Contributor)
In one of the practical cloud MariaDB setups, a server node accesses its
datadir over the network, but also has fast local SSD storage for
temporary data. The content of such temporary storage is lost when the
server container is destroyed.

The commit uses this ephemeral fast local storage (SSD) as an extension
of the portion of the InnoDB buffer pool (DRAM) that caches persistent
data pages. This cache is separate from the persistent storage of data
files and ib_logfile0 and is ignored during backup.

The following system variables were introduced:

innodb_extended_buffer_pool_size - the size of the external buffer pool
file; if it is 0, the external buffer pool is not used;

innodb_extended_buffer_pool_path - the path to the external buffer pool
file.

If innodb_extended_buffer_pool_size is not 0, the external buffer pool
file is (re)created on startup.
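
A configuration sketch; the size and the path used here are illustrative
assumptions, not defaults taken from the patch:

  [mysqld]
  # 0 disables the external buffer pool.
  innodb_extended_buffer_pool_size = 64G
  # Place the cache file on the ephemeral local SSD.
  innodb_extended_buffer_pool_path = /mnt/local-ssd/ib_ext_buf_pool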

Only clean pages are flushed to the external buffer pool file. There is
no need to flush dirty pages: such pages become clean after being
flushed, and are then evicted when they reach the tail of the LRU list.

The general idea of this commit is to flush clean pages to the external
buffer pool file when they are evicted.

A page can be evicted either by a transaction thread or by the
background page cleaner thread. In some cases a transaction thread
waits for the page cleaner thread to finish its job. We can't flush to
the external buffer pool file while transaction threads are waiting for
eviction; that would hurt performance. That's why the only case for
flushing is when the page cleaner thread evicts pages in the background
and there are no waiters. For this purpose the
buf_pool_t::done_flush_list_waiters_count variable was introduced; we
flush evicted clean pages only if the variable is zero.

Clean pages are evicted in buf_flush_LRU_list_batch() to keep some
number of pages in the buffer pool's free list. That's why we flush
only every second page to the external buffer pool file; otherwise
there could be too few pages in the free list for transaction threads
to allocate buffer pool pages without waiting for the page cleaner.
This may not be a good solution, but it is enough for prototyping. A
minimal model of this decision is sketched below.
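
A compilable C++ model of the eviction-time decision. Only
buf_pool_t::done_flush_list_waiters_count and buf_flush_LRU_list_batch()
are names from the patch; everything else is an invented stand-in:

  #include <atomic>
  #include <cstddef>

  struct buf_page_t {};                    // stand-in block descriptor

  struct buf_pool_stub
  {
    // Incremented while transaction threads wait for the page cleaner.
    std::atomic<size_t> done_flush_list_waiters_count{0};

    void flush_to_external(buf_page_t*) {} // no-op stubs for the sketch
    void evict(buf_page_t*) {}
  };

  // Models the clean-page eviction step of buf_flush_LRU_list_batch():
  // flush to the external file only when nobody is waiting, and only
  // every second evicted page, so the free list is still refilled fast.
  void evict_clean_page(buf_pool_stub &buf_pool, buf_page_t *page)
  {
    static bool flush_this_one= false;
    flush_this_one= !flush_this_one;

    if (flush_this_one &&
        !buf_pool.done_flush_list_waiters_count
           .load(std::memory_order_relaxed))
      buf_pool.flush_to_external(page);    // the page stays clean anyway
    buf_pool.evict(page);
  }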

An external buffer pool page is introduced to record in the buffer pool
page hash that a certain page can be read from the external buffer pool
file. The first several members of such a page must be the same as the
members of an internal page. The frame of an external page must be equal
to a certain marker value to distinguish an external page from an
internal one. External buffer pages are preallocated on startup in an
external pages array. We could get rid of the frame in the external page
and instead check whether the page's address belongs to that array to
distinguish external from internal pages.
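
A rough model of that layout; the shared-prefix and marker-frame ideas
come from the description above, but all names here are invented for
illustration (the later sketches reuse these types):

  #include <cstdint>

  typedef uint64_t page_id_t;

  // The members an external page shares with an internal buf_page_t,
  // so that page hash lookups can treat both kinds uniformly.
  struct page_common_t
  {
    page_id_t id;
    page_common_t *hash_next;  // page hash chain link
    uint8_t *frame;            // data frame, or the marker below
  };

  // Object whose address marks a descriptor as an external page.
  static uint8_t ext_frame_marker;

  struct buf_page_ext_t : public page_common_t
  {
    uint64_t ext_file_offset;  // where the page image lives on the SSD
    buf_page_ext_t *list_next; // external free/LRU list link
  };

  inline bool is_external(const page_common_t *p)
  { return p->frame == &ext_frame_marker; }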

There are also external free and LRU lists of pages. When an internal
page is chosen to be flushed to the external buffer pool file, a new
external page is allocated either from the head of the external free
list or from the tail of the external LRU list. Both lists are
protected by buf_pool.mutex. This makes sense, because a page is
removed from the internal LRU list during eviction under buf_pool.mutex.

Then the internal page is locked, the allocated external page is
attached to an I/O request for the external buffer pool file, and when
the write request completes, the internal page is replaced with the
external one in the page hash, the external page is pushed to the head
of the external LRU list, and the internal page is unlocked. Between
its removal from the external free list and the write completion, the
external page is in neither list, so it can't be used by other threads
until the write is completed. A sketch of this flow follows.
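
This sketch reuses buf_page_ext_t from the layout model above; the list
handling follows the description, while the I/O and page hash
manipulation are only hinted at in comments:

  #include <list>
  #include <mutex>

  struct ext_lists
  {
    std::mutex mtx;                       // stands in for buf_pool.mutex
    std::list<buf_page_ext_t*> free_list;
    std::list<buf_page_ext_t*> lru;

    // Allocation at eviction time: prefer the free list, otherwise
    // reuse the coldest page from the tail of the external LRU list.
    buf_page_ext_t *allocate()
    {
      std::lock_guard<std::mutex> g(mtx);
      buf_page_ext_t *p= nullptr;
      if (!free_list.empty())
      { p= free_list.front(); free_list.pop_front(); }
      else if (!lru.empty())
      { p= lru.back(); lru.pop_back(); }
      return p;                          // in neither list until published
    }

    // Write-completion callback: only now does the external page become
    // reachable. Replacing the internal page in the page hash and
    // unlocking it would also happen here.
    void publish(buf_page_ext_t *p)
    {
      std::lock_guard<std::mutex> g(mtx);
      lru.push_front(p);                 // head of the external LRU list
    }
  };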

The page hash chain element lookup function has an additional template
parameter that tells the function whether external pages must be
ignored. We don't ignore external pages in the page hash in two cases:
when a page is initialized for read and when one is reinitialized for
creating a new page. For example:
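
A sketch of such a lookup, reusing page_common_t and is_external() from
the layout model above; the real function name and signature in the
patch may differ:

  template<bool see_external>
  page_common_t *page_hash_get(page_common_t *chain, page_id_t id)
  {
    for (page_common_t *p= chain; p; p= p->hash_next)
      if (p->id == id && (see_external || !is_external(p)))
        return p;
    return nullptr;
  }

Most call sites would instantiate page_hash_get<false>(); the read and
new-page creation paths described below would use page_hash_get<true>().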

When an internal page is initialized for read and an external page with
the same page id is found in the page hash, the internal page is
locked, the external page is replaced with the newly initialized
internal page in the page hash chain, and the external page is removed
from the external LRU list and attached to an I/O request to the
external buffer pool file. When the I/O request completes, the external
page is returned to the external free list and the internal page is
unlocked. So during the read the external page is absent from both the
external LRU and free lists and can't be reused.

When an internal page is initialized for creating a new page and an
external page with the same page id is found in the page hash, we just
remove the external page from the page hash chain and the external LRU
list and push it to the head of the external free list, so the external
page can be used for future flushing. Both cases are sketched below.
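
This sketch again reuses the types from the layout model; every helper
is an invented no-op stand-in for the real list and I/O machinery:

  inline void remove_from_ext_lru(buf_page_ext_t*) {}
  inline void push_to_ext_free_list(buf_page_ext_t*) {}
  inline void submit_ext_read(buf_page_ext_t*, page_common_t*) {}
  inline void remove_from_page_hash(buf_page_ext_t*) {}

  // Read: the internal page takes the external page's place in the
  // hash; the external page is in neither external list until the read
  // I/O completes, so it cannot be reused concurrently.
  void init_for_read(page_common_t *&hash_slot, buf_page_ext_t *ext,
                     page_common_t *internal)
  {
    hash_slot= internal;            // replace in the page hash chain
    remove_from_ext_lru(ext);
    submit_ext_read(ext, internal);
    // I/O completion handler: push_to_ext_free_list(ext) and unlock
    // the internal page.
  }

  // New page creation: the cached image is stale, recycle immediately.
  void init_for_create(buf_page_ext_t *ext)
  {
    remove_from_page_hash(ext);
    remove_from_ext_lru(ext);
    push_to_ext_free_list(ext);
  }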

There is also some magic with watch sentinels. If a watch is going to
be set for an external page with the same page id in the page hash, we
replace the external page with the sentinel in the page hash and attach
the external page to the sentinel's frame. When such a sentinel with an
attached external page should be removed, we replace it with the
attached external page in the page hash instead of just removing the
sentinel. This idea is not fully implemented, as the function which
allocates external pages does not take into account that the page hash
can contain sentinels with attached external pages. And anyway, this
code must be removed if we cherry-pick the commit to an 11.* branch, as
the change buffer is already removed in those versions.
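
A toy sketch of just the swap, reusing the types above and ignoring the
incomplete interaction with external page allocation; the idea is that
the sentinel's otherwise unused frame field carries the pointer to the
displaced external page:

  void set_watch_over_external(page_common_t *&hash_slot,
                               buf_page_ext_t *ext,
                               page_common_t *sentinel)
  {
    sentinel->frame= reinterpret_cast<uint8_t*>(ext);
    hash_slot= sentinel;           // sentinel replaces ext in the hash
  }

  void remove_watch(page_common_t *&hash_slot, page_common_t *sentinel)
  {
    // Restore the attached external page instead of just unlinking.
    hash_slot= reinterpret_cast<page_common_t*>(sentinel->frame);
  }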

The pages are flushed to and read from the external buffer pool file in
the same manner as they are flushed to their tablespaces, i.e.
compressed and encrypted pages stay compressed and encrypted in the
external buffer pool file.
  • The Jira issue number for this PR is: MDEV-______

Description

TODO: fill description here

Release Notes

TODO: What should the release notes say about this change?
Include any changed system variables, status variables or behaviour. Optionally list any https://mariadb.com/kb/ pages that need changing.

How can this PR be tested?

TODO: modify the automated test suite to verify that the PR causes MariaDB to behave as intended.
Consult the documentation on "Writing good test cases".

If the changes are not amenable to automated testing, please explain why not and carefully describe how to test manually.

Basing the PR against the correct MariaDB version

  • This is a new feature or a refactoring, and the PR is based against the main branch.
  • This is a bug fix, and the PR is based against the earliest maintained branch in which the bug can be reproduced.

PR quality check

  • I checked the CODING_STANDARDS.md file and my PR conforms to this where appropriate.
  • For any trivial modifications to the PR, I am ok with the reviewer making the changes themselves.

@vlad-lesin requested a review from dr-m on January 5, 2026

@vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch from 8d5d027 to ee7d993 on January 7, 2026