The implementation of `page_align()` in `processor.c` calculates
alignment incorrectly for values that are already aligned.
Specifically, `(v + vm->page_size) & ~(vm->page_size - 1)` aligns to
the *next* page boundary even if `v` is already page-aligned,
potentially wasting a page of memory.

Fix the calculation to use standard alignment logic:
`(v + vm->page_size - 1) & ~(vm->page_size - 1)`.

Fixes: 7a6629ef746d ("kvm: selftests: add virt mem support for aarch64")
Reviewed-by: Andrew Jones
Signed-off-by: Fuad Tabba
---
 tools/testing/selftests/kvm/lib/arm64/processor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 5b379da8cb90..607a4e462984 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -23,7 +23,7 @@ static vm_vaddr_t exception_handlers;
 
 static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
 {
-	return (v + vm->page_size) & ~(vm->page_size - 1);
+	return (v + vm->page_size - 1) & ~(vm->page_size - 1);
 }
 
 static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
-- 
2.52.0.351.gbe84eed79e-goog
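
For illustration only, not part of the patch: a minimal standalone
sketch of the behavioral difference between the two formulas. It
assumes a hypothetical fixed `PAGE_SIZE` of 4096 in place of the
runtime `vm->page_size`, and the helper names are made up for this
example.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical page size for illustration; the selftest reads the
 * real value from struct kvm_vm at runtime. */
#define PAGE_SIZE 4096ULL

/* Old (buggy) formula: always rounds up to the NEXT page boundary,
 * even when v is already aligned. */
static uint64_t page_align_old(uint64_t v)
{
	return (v + PAGE_SIZE) & ~(PAGE_SIZE - 1);
}

/* Fixed formula: already-aligned values are returned unchanged;
 * unaligned values round up to the next boundary. */
static uint64_t page_align_fixed(uint64_t v)
{
	return (v + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

int main(void)
{
	/* 8192 is already page-aligned: the old formula wastes a page. */
	printf("aligned   old:   %llu\n",
	       (unsigned long long)page_align_old(8192));   /* 12288 */
	printf("aligned   fixed: %llu\n",
	       (unsigned long long)page_align_fixed(8192)); /* 8192  */

	/* 8193 is unaligned: both formulas round up to 12288. */
	printf("unaligned old:   %llu\n",
	       (unsigned long long)page_align_old(8193));   /* 12288 */
	printf("unaligned fixed: %llu\n",
	       (unsigned long long)page_align_fixed(8193)); /* 12288 */
	return 0;
}
```

The mask trick works because `PAGE_SIZE` is a power of two, so
`~(PAGE_SIZE - 1)` clears the low bits; adding `PAGE_SIZE - 1` first
carries into the next page only when any low bit was set.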