Once support for THP migration of zone device pages is enabled, device
private swap entries will be found during the walk not only for PTEs
but also for PMDs. Therefore, it is necessary to extend to PMDs the
special handling which is already in place for PTEs when device private
pages are owned by the caller: instead of faulting or skipping the
range, the correct behavior is to use the swap entry to populate HMM
PFNs. Even though subsequent PFNs can be inferred when handling large
order PFNs, the PFN list is still fully populated because this is
currently expected by HMM users.

Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Zi Yan
Cc: Alistair Popple
Cc: Balbir Singh
Cc: David Airlie
Cc: Christian König
Cc: Mika Penttilä
Cc: Thomas Hellstrom
Cc: Matthew Brost
Signed-off-by: Francois Dugast
---
 mm/hmm.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/hmm.c b/mm/hmm.c
index d545e2494994..d449fc4647d7 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -355,6 +355,29 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	}
 
 	if (!pmd_present(pmd)) {
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+		swp_entry_t entry = pmd_to_swp_entry(pmd);
+
+		if (is_device_private_entry(entry) &&
+		    pfn_swap_entry_folio(entry)->pgmap->owner ==
+		    range->dev_private_owner) {
+			unsigned long cpu_flags = HMM_PFN_VALID |
+				hmm_pfn_flags_order(PMD_SHIFT - PAGE_SHIFT);
+			unsigned long pfn = swp_offset_pfn(entry);
+			unsigned long i;
+
+			if (is_writable_device_private_entry(entry))
+				cpu_flags |= HMM_PFN_WRITE;
+
+			for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+				hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+				hmm_pfns[i] |= pfn | cpu_flags;
+			}
+
+			return 0;
+		}
+#endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
 		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0))
 			return -EFAULT;
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
-- 
2.43.0
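
For illustration only (not part of the patch): a minimal sketch of how an
hmm_range_fault() caller might walk the returned PFN array after this change.
The function name walk_device_private_result() and the pr_debug() output are
hypothetical; only the hmm_pfn helpers and flags (HMM_PFN_VALID,
HMM_PFN_WRITE, HMM_PFN_FLAGS, hmm_pfn_to_map_order()) are existing API. It
shows why full population matters: every entry in the range is individually
valid, while the first entry of a huge mapping also reports the map order.

/*
 * Illustrative sketch only, not part of this patch: a hypothetical
 * hmm_range_fault() caller consuming range->hmm_pfns after the walk.
 */
#include <linux/hmm.h>
#include <linux/printk.h>

static void walk_device_private_result(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; i++) {
		unsigned long hmm_pfn = range->hmm_pfns[i];

		if (!(hmm_pfn & HMM_PFN_VALID))
			continue;

		/*
		 * For a device private THP, the first entry reports the
		 * mapping order via hmm_pfn_to_map_order(), but every
		 * following entry is still individually populated, which
		 * is what existing HMM users currently expect.
		 */
		pr_debug("pfn[%lu]=0x%lx order=%u writable=%d\n",
			 i, hmm_pfn & ~HMM_PFN_FLAGS,
			 hmm_pfn_to_map_order(hmm_pfn),
			 !!(hmm_pfn & HMM_PFN_WRITE));
	}
}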