From: Lai Jiangshan

Use the large-page metadata to avoid pointless attempts to search for an
SP.  If the target GFN falls within a range where a large page is
allowed, then there cannot be a shadow page for that GFN; a shadow page
in the range would itself disallow using a large page.  In that case,
there is nothing to unsync and mmu_try_to_unsync_pages() can return
immediately.

This is always true for the TDP MMU without nested TDP, and holds for a
significant fraction of cases with shadow paging, even when all SPs are
4K.

For shadow paging, this optimization theoretically avoids work for about
1/e ~= 37% of GFNs: assuming one guest page table per 2M of memory, with
each GPT falling randomly into one of the 2M memory buckets, the fraction
of buckets containing no GPT is (1 - 1/N)^N for N buckets, which
approaches 1/e.  In a simple test setup, it skipped unsync in a much
higher percentage of cases, mainly because the guest buddy allocator
clusters GPTs into fewer buckets.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4535d2836004..555075fb63d9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2932,6 +2932,14 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	struct kvm_mmu_page *sp;
 	bool locked = false;
 
+	/*
+	 * If a large page is allowed, there is no shadow page in the GFN range,
+	 * because the presence of a shadow page in that range would prevent
+	 * using a large page.
+	 */
+	if (!lpage_info_slot(gfn, slot, PG_LEVEL_2M)->disallow_lpage)
+		return 0;
+
 	/*
 	 * Force write-protection if the page is being tracked.  Note, the page
 	 * track machinery is used to write-protect upper-level shadow pages,
-- 
2.19.1.6.gb485710b
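
As a sanity check of the 1/e figure in the commit message (not part of the
patch), here is a minimal standalone C sketch that computes the fraction of
2M buckets expected to contain no GPT under the stated assumption of one
randomly placed GPT per 2M of memory.  The bucket count N below is only
illustrative; the result converges to 1/e for any reasonably large N.

/*
 * Standalone sketch, not kernel code: numerically check the
 * back-of-envelope 1/e estimate.  With N buckets and N GPTs placed
 * uniformly at random, the probability that a given bucket holds no GPT
 * is (1 - 1/N)^N, which tends to 1/e.  An empty bucket corresponds to a
 * GFN range where the new lpage_info_slot() check lets
 * mmu_try_to_unsync_pages() return immediately.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
	const double n = 512.0;	/* illustrative bucket count, e.g. 1G / 2M */
	double p_empty = pow(1.0 - 1.0 / n, n);

	printf("P(bucket has no GPT) = %.4f, 1/e = %.4f\n", p_empty, exp(-1.0));
	return 0;
}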