mm: remove isolate_lru_page()

There are no more callers of isolate_lru_page(); remove it.

[wangkefeng.wang@huawei.com: convert page to folio in comment and document, per Matthew]
  Link: https://lkml.kernel.org/r/20240826144114.1928071-1-wangkefeng.wang@huawei.com
Link: https://lkml.kernel.org/r/20240826065814.1336616-6-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author:    Kefeng Wang <wangkefeng.wang@huawei.com>
Date:      2024-08-26 14:58:13 +08:00
Committer: Andrew Morton
Commit:    775d28fd45 (parent 58bf8c2bf4)

 9 files changed, 25 insertions(+), 33 deletions(-)

@@ -63,15 +63,15 @@ and then a low level description of how the low level details work.
 In kernel use of migrate_pages()
 ================================
 
-1. Remove pages from the LRU.
+1. Remove folios from the LRU.
 
-   Lists of pages to be migrated are generated by scanning over
-   pages and moving them into lists. This is done by
-   calling isolate_lru_page().
-   Calling isolate_lru_page() increases the references to the page
-   so that it cannot vanish while the page migration occurs.
+   Lists of folios to be migrated are generated by scanning over
+   folios and moving them into lists. This is done by
+   calling folio_isolate_lru().
+   Calling folio_isolate_lru() increases the references to the folio
+   so that it cannot vanish while the folio migration occurs.
    It also prevents the swapper or other scans from encountering
-   the page.
+   the folio.
 
 2. We need to have a function of type new_folio_t that can be
    passed to migrate_pages(). This function should figure out

@@ -84,10 +84,10 @@ In kernel use of migrate_pages()
 How migrate_pages() works
 =========================
 
-migrate_pages() does several passes over its list of pages. A page is moved
-if all references to a page are removable at the time. The page has
-already been removed from the LRU via isolate_lru_page() and the refcount
-is increased so that the page cannot be freed while page migration occurs.
+migrate_pages() does several passes over its list of folios. A folio is moved
+if all references to a folio are removable at the time. The folio has
+already been removed from the LRU via folio_isolate_lru() and the refcount
+is increased so that the folio cannot be freed while folio migration occurs.
 
 Steps:
 
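For illustration, a minimal sketch of the in-kernel flow described above (not
part of this commit; my_new_folio() and my_migrate_one() are made-up names,
the migration mode and reason are placeholders, and folio_isolate_lru() is
mm-internal, declared in mm/internal.h, so this only applies to core mm code):

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/migrate.h>
#include "internal.h"		/* folio_isolate_lru() */

/* new_folio_t callback: allocate a destination folio for each source folio. */
static struct folio *my_new_folio(struct folio *src, unsigned long private)
{
	return folio_alloc(GFP_KERNEL, folio_order(src));
}

static void my_migrate_one(struct folio *folio)
{
	LIST_HEAD(pagelist);

	/* Step 1: take the folio off the LRU; this also takes a reference. */
	if (!folio_isolate_lru(folio))
		return;
	list_add_tail(&folio->lru, &pagelist);

	/* Steps 2-3: hand the list and the allocation callback to migrate_pages(). */
	migrate_pages(&pagelist, my_new_folio, NULL, 0,
		      MIGRATE_SYNC, MR_NUMA_MISPLACED, NULL);

	/* Anything that could not be migrated goes back on the LRU. */
	putback_movable_pages(&pagelist);
}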

@@ -80,7 +80,7 @@ on an additional LRU list for a few reasons:
 (2) We want to be able to migrate unevictable folios between nodes for memory
     defragmentation, workload management and memory hotplug. The Linux kernel
     can only migrate folios that it can successfully isolate from the LRU
-    lists (or "Movable" pages: outside of consideration here). If we were to
+    lists (or "Movable" folios: outside of consideration here). If we were to
     maintain folios elsewhere than on an LRU-like list, where they can be
     detected by folio_isolate_lru(), we would prevent their migration.

@@ -230,7 +230,7 @@ In Nick's patch, he used one of the struct page LRU list link fields as a count
 of VM_LOCKED VMAs that map the page (Rik van Riel had the same idea three years
 earlier). But this use of the link field for a count prevented the management
 of the pages on an LRU list, and thus mlocked pages were not migratable as
-isolate_lru_page() could not detect them, and the LRU list link field was not
+folio_isolate_lru() could not detect them, and the LRU list link field was not
 available to the migration subsystem.
 
 Nick resolved this by putting mlocked pages back on the LRU list before

@@ -50,8 +50,8 @@ mbind()设置一个新的内存策略。一个进程的页面也可以通过sys_
 1. 从LRU中移除页面。
 
-   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 isolate_lru_page()
-   来完成的。调用isolate_lru_page()增加了对该页的引用,这样在页面迁移发生时它就不会
+   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 folio_isolate_lru()
+   来完成的。调用folio_isolate_lru()增加了对该页的引用,这样在页面迁移发生时它就不会
    消失。它还可以防止交换器或其他扫描器遇到该页。

@@ -65,7 +65,7 @@ migrate_pages()如何工作
 =======================
 
 migrate_pages()对它的页面列表进行了多次处理。如果当时对一个页面的所有引用都可以被移除,
-那么这个页面就会被移动。该页已经通过isolate_lru_page()从LRU中移除并且refcount被
+那么这个页面就会被移动。该页已经通过folio_isolate_lru()从LRU中移除并且refcount被
 增加,以便在页面迁移发生时不释放该页。
 
 步骤:

@@ -114,7 +114,7 @@
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
  *    ->lruvec->lru_lock	(follow_page_mask->mark_page_accessed)
- *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
+ *    ->lruvec->lru_lock	(check_pte_range->folio_isolate_lru)
  *    ->private_lock		(folio_remove_rmap_pte->set_page_dirty)
  *    ->i_pages lock		(folio_remove_rmap_pte->set_page_dirty)
  *    bdi.wb->list_lock		(folio_remove_rmap_pte->set_page_dirty)

@@ -93,13 +93,6 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
 
-bool isolate_lru_page(struct page *page)
-{
-	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return false;
-	return folio_isolate_lru((struct folio *)page);
-}
-
 void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
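
The removed wrapper only handled non-tail pages (hence the PageTail() warning)
and hid the page-to-folio cast. With it gone, a former isolate_lru_page(page)
caller does the conversion explicitly. A sketch of the pattern used by the
conversions earlier in this series (pagelist is a hypothetical local list):

	struct folio *folio = page_folio(page);

	/* Take the folio off the LRU; on success this also holds a reference. */
	if (folio_isolate_lru(folio))
		list_add_tail(&folio->lru, &pagelist);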

@@ -416,7 +416,6 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
-bool isolate_lru_page(struct page *page);
 bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);

@@ -627,8 +627,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		}
 
 		/*
-		 * We can do it before isolate_lru_page because the
-		 * page can't be freed from under us. NOTE: PG_lock
+		 * We can do it before folio_isolate_lru because the
+		 * folio can't be freed from under us. NOTE: PG_lock
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
 		 */

@@ -1874,7 +1874,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 					result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
-				/* drain lru cache to help isolate_lru_page() */
+				/* drain lru cache to help folio_isolate_lru() */
 				lru_add_drain();
 			} else if (folio_trylock(folio)) {
 				folio_get(folio);

@@ -1889,7 +1889,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			page_cache_sync_readahead(mapping, &file->f_ra,
 						  file, index,
 						  end - index);
-			/* drain lru cache to help isolate_lru_page() */
+			/* drain lru cache to help folio_isolate_lru() */
 			lru_add_drain();
 			folio = filemap_lock_folio(mapping, index);
 			if (IS_ERR(folio)) {

@@ -328,8 +328,8 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 	/*
 	 * One extra ref because caller holds an extra reference, either from
-	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
-	 * a device page.
+	 * folio_isolate_lru() for a regular folio, or migrate_vma_collect() for
+	 * a device folio.
 	 */
 	int extra = 1 + (page == fault_page);

@@ -906,8 +906,8 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
 /*
  * lru_cache_disable() needs to be called before we start compiling
- * a list of pages to be migrated using isolate_lru_page().
- * It drains pages on LRU cache and then disable on all cpus until
+ * a list of folios to be migrated using folio_isolate_lru().
+ * It drains folios on LRU cache and then disable on all cpus until
  * lru_cache_enable is called.
  *
  * Must be paired with a call to lru_cache_enable().
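
As a usage note, a hedged sketch of the pairing described in this comment (not
part of this commit; the scan and migrate steps are elided):

	LIST_HEAD(pagelist);

	/* Drain the per-CPU LRU batches and keep batching disabled so every
	 * candidate folio sits on an LRU list where folio_isolate_lru() can
	 * find it. */
	lru_cache_disable();

	/* ... walk the range, folio_isolate_lru() each candidate folio,
	 * list_add_tail(&folio->lru, &pagelist), then migrate_pages(&pagelist, ...) ... */

	lru_cache_enable();	/* must always pair with lru_cache_disable() */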