
Commit 2033c98

Matthew Wilcox (Oracle) authored and akpm00 committed
mm: remove invalidate_inode_page()
All callers are now converted to call mapping_evict_folio().

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 761d79f commit 2033c98

2 files changed: +2 -10 lines changed

mm/internal.h

Lines changed: 0 additions & 1 deletion
@@ -139,7 +139,6 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
 		loff_t end);
 long mapping_evict_folio(struct address_space *mapping, struct folio *folio);
-long invalidate_inode_page(struct page *page);
 unsigned long mapping_try_invalidate(struct address_space *mapping,
 		pgoff_t start, pgoff_t end, unsigned long *nr_failed);
 

mm/truncate.c

Lines changed: 2 additions & 9 deletions
@@ -294,13 +294,6 @@ long mapping_evict_folio(struct address_space *mapping, struct folio *folio)
 	return remove_mapping(mapping, folio);
 }
 
-long invalidate_inode_page(struct page *page)
-{
-	struct folio *folio = page_folio(page);
-
-	return mapping_evict_folio(folio_mapping(folio), folio);
-}
-
 /**
  * truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
  * @mapping: mapping to truncate
@@ -559,9 +552,9 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 EXPORT_SYMBOL(invalidate_mapping_pages);
 
 /*
- * This is like invalidate_inode_page(), except it ignores the page's
+ * This is like mapping_evict_folio(), except it ignores the folio's
  * refcount. We do this because invalidate_inode_pages2() needs stronger
- * invalidation guarantees, and cannot afford to leave pages behind because
+ * invalidation guarantees, and cannot afford to leave folios behind because
  * shrink_page_list() has a temp ref on them, or because they're transiently
  * sitting in the folio_add_lru() caches.
  */
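
For context, a converted caller simply open-codes what the removed wrapper did: resolve the struct page to its folio, look up the owning mapping, and call mapping_evict_folio() directly. A minimal sketch follows; the function name evict_page_example is hypothetical and not part of the kernel.

/*
 * Hypothetical converted caller, mirroring the body of the removed
 * invalidate_inode_page(): convert the page to its folio, find the
 * owning address_space, and ask mapping_evict_folio() to drop it.
 */
static long evict_page_example(struct page *page)
{
	struct folio *folio = page_folio(page);

	return mapping_evict_folio(folio_mapping(folio), folio);
}

Since mapping_evict_folio() is declared only in mm/internal.h, such callers necessarily live within mm/ itself.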
