syzbot reported a stack-out-of-bounds write in __bpf_get_stack(), triggered
via bpf_get_stack() when capturing a kernel stack trace. After the recent
refactor that introduced stack_map_calculate_max_depth(), the code in
stack_map_get_build_id_offset() (and related helpers) stopped clamping the
number of trace entries (`trace_nr`) to the number of elements that fit
into the stack map value (`num_elem`). As a result, if the captured stack
contains more frames than the map value can hold, the subsequent memcpy()
writes past the end of the buffer, triggering a KASAN report like:

  BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
  Write of size N at addr ... by task syz-executor...

Restore the missing clamp by limiting `trace_nr` to the number of entries
that fit in the map value (max_depth - skip) before computing the copy
length. This mirrors the pre-refactor logic and ensures we never copy more
bytes than the destination buffer can hold.

No functional change intended beyond reintroducing the missing bound check.

Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das
---
Changes in v2:
- Clamp `trace_nr` using max_depth instead of the num_elem logic; this
  mirrors what __bpf_get_stackid() already does.

Changes in v1:
- RFC patch that restored the clamp by setting trace_nr to the smaller of
  trace_nr and num_elem.
  Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
---
 kernel/bpf/stackmap.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..f9081de43689 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	trace_nr = trace->nr - skip;
+	trace_nr = min_t(u32, trace_nr, max_depth - skip);
 	copy_len = trace_nr * elem_size;
 
 	ips = trace->ip + skip;
-- 
2.51.2
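
For readers following along, below is a minimal userspace sketch (not part
of the patch) of the copy-length arithmetic being fixed. The names mirror
those in __bpf_get_stack(), min_t() is re-implemented locally, the concrete
sizes are made up, and it assumes max_depth works out to roughly
value_size / elem_size + skip, as the helper's name suggests:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* local stand-in for the kernel's min_t() */
	#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

	int main(void)
	{
		uint32_t elem_size  = sizeof(uint64_t);  /* one u64 per frame        */
		uint32_t value_size = 64;                /* map value holds 8 frames */
		uint32_t skip       = 0;                 /* frames skipped by flags  */
		/* assumption: max_depth ends up as value_size / elem_size + skip */
		uint32_t max_depth  = value_size / elem_size + skip;
		uint32_t trace_nr   = 16 - skip;         /* 16 frames were captured  */
		uint8_t  buf[64];                        /* the map value buffer     */
		uint64_t ips[16]    = { 0 };             /* captured instruction ptrs */

		/* pre-fix arithmetic: 16 * 8 = 128 bytes into a 64-byte buffer */
		printf("unclamped copy_len = %u\n", trace_nr * elem_size);

		/* the fix: never copy more frames than the value can hold */
		trace_nr = min_t(uint32_t, trace_nr, max_depth - skip);
		printf("clamped copy_len   = %u (value_size = %u)\n",
		       trace_nr * elem_size, value_size);

		memcpy(buf, ips, trace_nr * elem_size);  /* now stays in bounds */
		return 0;
	}

With the extra line applied, copy_len can never exceed the destination
value size, which is the same invariant the pre-refactor code enforced via
num_elem.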