The RISC-V SBI Steal-Time Accounting (STA) extension requires the shared
memory physical address to be 64-byte aligned, or set to all-ones to
explicitly disable steal-time accounting. KVM exposes the SBI STA shared
memory configuration to userspace via KVM_SET_ONE_REG.

However, the current implementation of kvm_sbi_ext_sta_set_reg() does not
validate the alignment of the configured shared memory address. As a
result, userspace can install a misaligned shared memory address that
violates the SBI specification. Such an invalid configuration may later
reach runtime code paths that assume a valid and properly aligned shared
memory region. In particular, KVM_RUN can trigger the following WARN_ON
in kvm_riscv_vcpu_record_steal_time():

  WARNING: arch/riscv/kvm/vcpu_sbi_sta.c:49 at kvm_riscv_vcpu_record_steal_time

WARN_ON paths are not expected to be reachable during normal runtime
execution and may result in a kernel panic when panic_on_warn is enabled.

Fix this by validating the computed shared memory GPA at the
KVM_SET_ONE_REG boundary. A temporary GPA is constructed and checked
before committing it to vcpu->arch.sta.shmem. The validation allows
either a 64-byte aligned GPA or INVALID_GPA (all-ones), which disables
STA as defined by the SBI specification. This prevents invalid userspace
state from reaching runtime code paths that assume SBI STA invariants
and avoids unexpected WARN_ON behavior.

Fixes: f61ce890b1f074 ("RISC-V: KVM: Add support for SBI STA registers")
Signed-off-by: Jiakai Xu
Reviewed-by: Andrew Jones
---
V5 -> V6: Initialized new_shmem to INVALID_GPA as suggested.
V4 -> V5: Added parentheses to function name in subject.
V3 -> V4: Declared new_shmem at the top of kvm_sbi_ext_sta_set_reg().
          Initialized new_shmem to 0 instead of vcpu->arch.sta.shmem.
          Added blank lines per review feedback.
V2 -> V3: Added parentheses to function name in subject.
V1 -> V2: Added Fixes tag.
---
 arch/riscv/kvm/vcpu_sbi_sta.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_sta.c b/arch/riscv/kvm/vcpu_sbi_sta.c
index afa0545c3bcfc..3b834709b429f 100644
--- a/arch/riscv/kvm/vcpu_sbi_sta.c
+++ b/arch/riscv/kvm/vcpu_sbi_sta.c
@@ -181,6 +181,7 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 				   unsigned long reg_size, const void *reg_val)
 {
 	unsigned long value;
+	gpa_t new_shmem = INVALID_GPA;
 
 	if (reg_size != sizeof(unsigned long))
 		return -EINVAL;
@@ -191,18 +192,18 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t hi = upper_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = value;
-			vcpu->arch.sta.shmem |= hi << 32;
+			new_shmem = value;
+			new_shmem |= hi << 32;
 		} else {
-			vcpu->arch.sta.shmem = value;
+			new_shmem = value;
 		}
 		break;
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_hi):
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t lo = lower_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = ((gpa_t)value << 32);
-			vcpu->arch.sta.shmem |= lo;
+			new_shmem = ((gpa_t)value << 32);
+			new_shmem |= lo;
 		} else if (value != 0) {
 			return -EINVAL;
 		}
@@ -211,6 +212,11 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 		return -ENOENT;
 	}
 
+	if (new_shmem != INVALID_GPA && !IS_ALIGNED(new_shmem, 64))
+		return -EINVAL;
+
+	vcpu->arch.sta.shmem = new_shmem;
+
 	return 0;
 }
 
-- 
2.34.1
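
For reference (not part of the patch itself), below is a minimal userspace
sketch of the failure mode being fixed: it programs a misaligned STA shared
memory address through KVM_SET_ONE_REG and then enters KVM_RUN. It assumes
an rv64 host, composes the register ID the way the KVM selftests do for SBI
STA registers (KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_SBI_STATE |
KVM_REG_RISCV_SBI_STA | KVM_REG_RISCV_SBI_STA_REG(shmem_lo)), and skips
guest memory/CPU setup and most error handling, so it is only an
illustration rather than a complete selftest. With this patch applied, the
KVM_SET_ONE_REG call is expected to fail with -EINVAL instead of the
misaligned address ever reaching KVM_RUN.

/* Illustrative sketch only; everything outside the KVM uAPI is local to this example. */
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

	/* Deliberately misaligned GPA: neither 64-byte aligned nor all-ones. */
	uint64_t val = 0x1001;

	/* shmem_lo of the SBI STA state; assumes a 64-bit (rv64) host. */
	struct kvm_one_reg reg = {
		.id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
		      KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_STA |
		      KVM_REG_RISCV_SBI_STA_REG(shmem_lo),
		.addr = (uint64_t)(uintptr_t)&val,
	};

	if (ioctl(vcpu, KVM_SET_ONE_REG, &reg))
		perror("KVM_SET_ONE_REG");	/* with this fix: rejected with EINVAL */
	else
		ioctl(vcpu, KVM_RUN, 0);	/* without the fix, may reach the WARN_ON */

	close(vcpu);
	close(vm);
	close(kvm);
	return 0;
}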