
Commit f2b277c

Hugh Dickins authored and torvalds committed
memfd: fix F_SEAL_WRITE after shmem huge page allocated
Wangyong reports: after enabling tmpfs filesystem to support transparent
hugepage with the following command:

    echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled

the docker program tries to add F_SEAL_WRITE through the following
command, but it fails unexpectedly with errno EBUSY:

    fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1.

That is because memfd_tag_pins() and memfd_wait_for_pins() were never
updated for shmem huge pages: checking page_mapcount() against
page_count() is hopeless on THP subpages - they need to check
total_mapcount() against page_count() on THP heads only.

Make memfd_tag_pins() (compared > 1) as strict as memfd_wait_for_pins()
(compared != 1): either can be justified, but given the non-atomic
total_mapcount() calculation, it is better now to be strict.  Bear in
mind that total_mapcount() itself scans all of the THP subpages, when
choosing to take an XA_CHECK_SCHED latency break.

Also fix the unlikely xa_is_value() case in memfd_wait_for_pins(): if a
page has been swapped out since memfd_tag_pins(), then its refcount must
have fallen, and so it can safely be untagged.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Hugh Dickins <[email protected]>
Reported-by: Zeal Robot <[email protected]>
Reported-by: wangyong <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: CGEL ZTE <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Song Liu <[email protected]>
Cc: Yang Yang <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 942341d commit f2b277c

File tree

1 file changed: +28 -12 lines changed

mm/memfd.c

Lines changed: 28 additions & 12 deletions
@@ -31,20 +31,28 @@
 static void memfd_tag_pins(struct xa_state *xas)
 {
 	struct page *page;
-	unsigned int tagged = 0;
+	int latency = 0;
+	int cache_count;
 
 	lru_add_drain();
 
 	xas_lock_irq(xas);
 	xas_for_each(xas, page, ULONG_MAX) {
-		if (xa_is_value(page))
-			continue;
-		page = find_subpage(page, xas->xa_index);
-		if (page_count(page) - page_mapcount(page) > 1)
+		cache_count = 1;
+		if (!xa_is_value(page) &&
+		    PageTransHuge(page) && !PageHuge(page))
+			cache_count = HPAGE_PMD_NR;
+
+		if (!xa_is_value(page) &&
+		    page_count(page) - total_mapcount(page) != cache_count)
 			xas_set_mark(xas, MEMFD_TAG_PINNED);
+		if (cache_count != 1)
+			xas_set(xas, page->index + cache_count);
 
-		if (++tagged % XA_CHECK_SCHED)
+		latency += cache_count;
+		if (latency < XA_CHECK_SCHED)
 			continue;
+		latency = 0;
 
 		xas_pause(xas);
 		xas_unlock_irq(xas);
@@ -73,7 +81,8 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 
 	error = 0;
 	for (scan = 0; scan <= LAST_SCAN; scan++) {
-		unsigned int tagged = 0;
+		int latency = 0;
+		int cache_count;
 
 		if (!xas_marked(&xas, MEMFD_TAG_PINNED))
 			break;
@@ -87,10 +96,14 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 		xas_lock_irq(&xas);
 		xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) {
 			bool clear = true;
-			if (xa_is_value(page))
-				continue;
-			page = find_subpage(page, xas.xa_index);
-			if (page_count(page) - page_mapcount(page) != 1) {
+
+			cache_count = 1;
+			if (!xa_is_value(page) &&
+			    PageTransHuge(page) && !PageHuge(page))
+				cache_count = HPAGE_PMD_NR;
+
+			if (!xa_is_value(page) && cache_count !=
+			    page_count(page) - total_mapcount(page)) {
 				/*
 				 * On the last scan, we clean up all those tags
 				 * we inserted; but make a note that we still
@@ -103,8 +116,11 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 			}
 			if (clear)
 				xas_clear_mark(&xas, MEMFD_TAG_PINNED);
-			if (++tagged % XA_CHECK_SCHED)
+
+			latency += cache_count;
+			if (latency < XA_CHECK_SCHED)
 				continue;
+			latency = 0;
 
 			xas_pause(&xas);
 			xas_unlock_irq(&xas);
