The function vmemmap_pte_range() was refactored into vmemmap_pte_entry()
by commit fb93ed63345f ("mm: hugetlb_vmemmap: use walk_page_range_novma()
to simplify the code"). Both functions share the key behavior that the
reuse page is identified before remapping begins. Update the comment
accordingly.

Signed-off-by: kexinsun
---
 mm/hugetlb_vmemmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a9280259e12a..5156e4038b5f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -350,7 +350,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 	/*
 	 * In order to make remapping routine most efficient for the huge pages,
 	 * the routine of vmemmap page table walking has the following rules
-	 * (see more details from the vmemmap_pte_range()):
+	 * (see more details from the vmemmap_pte_entry()):
 	 *
 	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
 	 *   should be continuous.
-- 
2.25.1