The BPF verifier assumes `insn_aux->nospec_result` is only set for direct memory writes (e.g., `*(u32*)(r1+off) = r2`). However, the corresponding assertion fails to account for helper calls (e.g., `bpf_skb_load_bytes_relative()`) that write to stack memory. Make the check more precise to resolve this.

The problem is that `BPF_CALL` instructions have `BPF_CLASS(insn->code) == BPF_JMP`, which triggers the warning check:

- Helpers like `bpf_skb_load_bytes_relative` write to stack memory
- `check_helper_call()` loops through `meta.access_size`, calling `check_mem_access(..., BPF_WRITE)` for the bytes to be written
- `check_stack_write()` sets `insn_aux->nospec_result = 1`
- Since `BPF_CALL` is encoded as `BPF_JMP | BPF_CALL`, the warning fires

Execution flow:

```
1. Drop capabilities → Enable Spectre mitigation
2. Load BPF program
   └─> do_check()
       ├─> check_cond_jmp_op() → Marks dead branch as speculative
       │   └─> push_stack(..., speculative=true)
       ├─> pop_stack() → state->speculative = 1
       ├─> check_helper_call() → Processes helper in dead branch
       │   └─> check_mem_access(..., BPF_WRITE)
       │       └─> insn_aux->nospec_result = 1
       └─> Checks: state->speculative && insn_aux->nospec_result
           └─> BPF_CLASS(insn->code) == BPF_JMP → WARNING
```

To fix the assert, it would be nice to reuse bpf_insn_successors() here, but the successor count it returns is not exactly what we want, as it may also be 1 for BPF_JA. Instead, we could check opcode_info.can_jump, but then we would have to share the table between the functions. This would mean moving the table out of the function and adding bpf_opcode_info(). As the verifier_bug_if() only runs for insns with nospec_result set, the impact on verification time would likely still be negligible.

However, I assume sharing bpf_opcode_info() between liveness.c and verifier.c will not be worth it. It seems that only adjust_jmp_off() could also be simplified using it, and there imm/off are touched, so it is probably better to keep relying on exact opcode/class matching there.

Therefore, to avoid introducing this sharing only for the sake of a verifier_bug_if(), just check the opcode directly. The check now covers all opcodes for which can_jump in bpf_insn_successors() is true.

Parts of the description and example are taken from the bug report.
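For illustration, below is a minimal, untested BPF-C sketch of the kind of program that can reach the warning (it is not the reporters' reproducer; the program type, names, and constants are only an example). When loaded by a task without CAP_PERFMON, Spectre v1 mitigations stay enabled, the dead branch is still explored as a speculative path, and the helper's stack write sets nospec_result on a BPF_JMP|BPF_CALL insn:

```
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("socket")
int trigger_nospec_warn(struct __sk_buff *skb)
{
	char buf[8];
	volatile int dead = 0;	/* keeps a branch the verifier proves dead */

	if (dead)
		/* Helper writes to the stack buffer; check_helper_call() ->
		 * check_mem_access(..., BPF_WRITE) sets nospec_result = 1
		 * on the BPF_JMP|BPF_CALL insn.
		 */
		bpf_skb_load_bytes_relative(skb, 0, buf, sizeof(buf),
					    BPF_HDR_START_NET);
	return 0;
}

char _license[] SEC("license") = "GPL";
```

With the fix below, the BPF_CALL case is excluded from the warning while actual jump opcodes of the JMP/JMP32 classes remain covered.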
Fixes: dadb59104c64 ("bpf: Fix aux usage after do_check_insn()")
Signed-off-by: Luis Gerhorst
Reported-by: Yinhao Hu
Reported-by: Kaiyan Mei
Reported-by: Dongliang Mu
Closes: https://lore.kernel.org/bpf/7678017d-b760-4053-a2d8-a6879b0dbeeb@hust.edu.cn/
---
 kernel/bpf/verifier.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c2f2650db9fd..e7ff8394e0da 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -21065,17 +21065,19 @@ static int do_check(struct bpf_verifier_env *env)
 			 * may skip a nospec patched-in after the jump. This can
 			 * currently never happen because nospec_result is only
 			 * used for the write-ops
-			 * `*(size*)(dst_reg+off)=src_reg|imm32` which must
-			 * never skip the following insn. Still, add a warning
-			 * to document this in case nospec_result is used
-			 * elsewhere in the future.
+			 * `*(size*)(dst_reg+off)=src_reg|imm32` and helper
+			 * calls. These must never skip the following insn
+			 * (i.e., bpf_insn_successors()'s opcode_info.can_jump
+			 * is false). Still, add a warning to document this in
+			 * case nospec_result is used elsewhere in the future.
 			 *
 			 * All non-branch instructions have a single
 			 * fall-through edge. For these, nospec_result should
 			 * already work.
 			 */
-			if (verifier_bug_if(BPF_CLASS(insn->code) == BPF_JMP ||
-					    BPF_CLASS(insn->code) == BPF_JMP32, env,
+			if (verifier_bug_if((BPF_CLASS(insn->code) == BPF_JMP ||
+					     BPF_CLASS(insn->code) == BPF_JMP32) &&
+					    BPF_OP(insn->code) != BPF_CALL, env,
 					    "speculation barrier after jump instruction may not have the desired effect"))
 				return -EFAULT;
 process_bpf_exit:
-- 
2.52.0