For the huge-folio free list, unzeroed huge folios are now inserted at the
tail; a follow-on patch will place pre-zeroed ones at the head, so that
allocations can obtain a pre-zeroed huge folio with minimal search.
Placing newly zeroed folios at the head of the list also means they are
picked first by the next allocation, which helps keep the cache hot.

Signed-off-by: Li Zhe
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a7e582abe9f9..42d327152da9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1312,7 +1312,7 @@ static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
 	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
 	VM_WARN_ON_FOLIO(folio_test_hugetlb_zeroing(folio), folio);
 
-	list_move(&folio->lru, &h->hugepage_freelists[nid]);
+	list_move_tail(&folio->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
 	prep_clear_zeroed(folio);
-- 
2.20.1
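
For illustration only, the head insertion described above as a follow-on
change could look roughly like the sketch below. The helper name
enqueue_zeroed_hugetlb_folio() is an assumption for this sketch, not part
of this patch or of current mm/hugetlb.c; it simply mirrors
enqueue_hugetlb_folio() but uses list_move() (head insertion) instead of
list_move_tail():

/*
 * Sketch only: enqueue a pre-zeroed folio at the head of the per-node
 * free list so that allocations find a pre-zeroed folio with minimal
 * search. Assumed to live in mm/hugetlb.c and to be called with
 * hugetlb_lock held, like enqueue_hugetlb_folio().
 */
static void enqueue_zeroed_hugetlb_folio(struct hstate *h, struct folio *folio)
{
	int nid = folio_nid(folio);

	lockdep_assert_held(&hugetlb_lock);
	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);

	/* list_move() inserts at the head; list_move_tail() at the tail. */
	list_move(&folio->lru, &h->hugepage_freelists[nid]);
	h->free_huge_pages++;
	h->free_huge_pages_node[nid]++;
}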