mm: compaction: skip the memory hole rapidly when isolating free pages

Just like commit 9721fd8235 ("mm: compaction: skip memory hole
rapidly when isolating migratable pages"), I can see that it also takes
more time to skip the large memory hole (range: 0x1000000000 -
0x1800000000) when isolating free pages on my machine with the memory
layout below.  So, like commit 9721fd8235, add a new helper to skip the
memory hole rapidly, which reduces the time consumed from about 70us
to less than 1us.

[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
[    0.000000]   DMA32    empty
[    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
[    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
[    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
[    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
[    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
[    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
[    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
[    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
[    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
[    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
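To make the effect concrete, here is a rough stand-alone model of what the
new helper does; it is not kernel code.  The 128M section size (as on arm64
with 4K pages), NR_SECTIONS, the section_online[] bitmap and the main()
harness are all invented for illustration, with the bitmap shaped after the
0x1000000000 - 0x1800000000 hole above.  A PFN in the middle of the hole is
skipped straight back to the end of the last online section below it,
instead of being walked pageblock by pageblock:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27	/* 128M sections, as on arm64/4K */
#define PFN_SECTION_SHIFT	(SECTION_SIZE_BITS - PAGE_SHIFT)
#define PAGES_PER_SECTION	(1UL << PFN_SECTION_SHIFT)
#define NR_SECTIONS		1024	/* covers 0 - 0x2000000000 here */

/* Toy stand-in for the kernel's section online state. */
static bool section_online[NR_SECTIONS];

static unsigned long pfn_to_section_nr(unsigned long pfn)
{
	return pfn >> PFN_SECTION_SHIFT;
}

static unsigned long section_nr_to_pfn(unsigned long nr)
{
	return nr << PFN_SECTION_SHIFT;
}

/*
 * Same shape as the helper added by this patch, but walking the toy
 * bitmap above instead of online_section_nr().
 */
static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
{
	unsigned long start_nr = pfn_to_section_nr(start_pfn);

	if (!start_nr || section_online[start_nr])
		return 0;

	while (start_nr-- > 0) {
		if (section_online[start_nr])
			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
	}

	return 0;
}

int main(void)
{
	unsigned long nr, pfn;

	/* Memory is online below 0x1000000000 and above 0x1800000000. */
	for (nr = 0; nr < NR_SECTIONS; nr++) {
		unsigned long addr = section_nr_to_pfn(nr) << PAGE_SHIFT;

		section_online[nr] = addr < 0x1000000000UL ||
				     addr >= 0x1800000000UL;
	}

	pfn = 0x1400000000UL >> PAGE_SHIFT;	/* somewhere inside the hole */
	pfn = skip_offline_sections_reverse(pfn);
	printf("skip to pfn 0x%lx (addr 0x%lx)\n", pfn, pfn << PAGE_SHIFT);
	return 0;
}

The real helper below does the same reverse walk over the kernel's section
state via online_section_nr() and section_nr_to_pfn().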

[shikemeng@huaweicloud.com: avoid missing last page block in section after skip offline sections]
  Link: https://lkml.kernel.org/r/20230804110454.2935878-1-shikemeng@huaweicloud.com
  Link: https://lkml.kernel.org/r/20230804110454.2935878-2-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/d2ba7e41ee566309b594311207ffca736375fc16.1688715750.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -249,11 +249,36 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
 
 	return 0;
 }
+
+/*
+ * If the PFN falls into an offline section, return the end PFN of the
+ * next online section in reverse. If the PFN falls into an online section
+ * or if there is no next online section in reverse, return 0.
+ */
+static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
+{
+	unsigned long start_nr = pfn_to_section_nr(start_pfn);
+
+	if (!start_nr || online_section_nr(start_nr))
+		return 0;
+
+	while (start_nr-- > 0) {
+		if (online_section_nr(start_nr))
+			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
+	}
+
+	return 0;
+}
 #else
 static unsigned long skip_offline_sections(unsigned long start_pfn)
 {
 	return 0;
 }
+
+static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
+{
+	return 0;
+}
 #endif
 
 /*
@@ -1668,8 +1693,15 @@ static void isolate_freepages(struct compact_control *cc)
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
-		if (!page)
+		if (!page) {
+			unsigned long next_pfn;
+
+			next_pfn = skip_offline_sections_reverse(block_start_pfn);
+			if (next_pfn)
+				block_start_pfn = max(next_pfn, low_pfn);
+
 			continue;
+		}
 
 		/* Check the block is suitable for migration */
 		if (!suitable_migration_target(cc, page))
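
For completeness, a minimal sketch of the caller-side behaviour in the second
hunk; again not kernel code, and the pageblock size, hole range and PFN
values are made up.  When the downward pageblock scan lands in the hole, the
returned PFN is clamped so the free scanner never drops below low_pfn, its
lower bound, and the whole hole then costs a single loop iteration instead of
one per pageblock:

#include <stdio.h>

#define PAGEBLOCK_PAGES	512UL	/* e.g. 2M pageblocks with 4K pages */

/* Pretend PFNs in [0x1000000, 0x1800000) are a memory hole. */
static int pfn_in_hole(unsigned long pfn)
{
	return pfn >= 0x1000000UL && pfn < 0x1800000UL;
}

/* Stand-in for skip_offline_sections_reverse() over that hole. */
static unsigned long skip_hole_reverse(unsigned long pfn)
{
	return pfn_in_hole(pfn) ? 0x1000000UL : 0;
}

int main(void)
{
	unsigned long low_pfn = 0x10000UL;		/* scan lower bound */
	unsigned long block_start_pfn = 0x17ffe00UL;	/* starts in the hole */
	unsigned long iterations = 0;

	for (; block_start_pfn >= low_pfn;
	     block_start_pfn -= PAGEBLOCK_PAGES) {
		iterations++;
		if (pfn_in_hole(block_start_pfn)) {
			unsigned long next_pfn;

			/* Jump below the hole, but never below low_pfn. */
			next_pfn = skip_hole_reverse(block_start_pfn);
			if (next_pfn)
				block_start_pfn = next_pfn > low_pfn ?
						  next_pfn : low_pfn;
			continue;
		}
		/* ... a real scanner would isolate free pages here ... */
	}
	printf("scanned %lu pageblocks\n", iterations);
	return 0;
}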