A previous patch attempted to fix the suspicious RCU usage in
get_callchain_entry(), but it is incorrect: rcu_read_lock()/
rcu_read_unlock() are not called when may_fault == false, even though
perf's callchain still needs RCU protection in that case.

Previous discussion:
https://lore.kernel.org/all/CAEf4BzaYL9zZN8TZyRHW3_O3vbHc7On+NSunrkDvDQx2=wwyRw@mail.gmail.com/#R

For perf's callchain, rcu_read_lock()/rcu_read_unlock() should be
called when trace_in == false, i.e. whenever get_perf_callchain() is
about to be used, regardless of may_fault. Condition the RCU critical
section on !trace_in instead of may_fault.

Fixes: d4dd9775ec24 ("bpf: wire up sleepable bpf_get_stack() and bpf_get_task_stack() helpers")
Reported-by: syzbot+72a43cdb78469f7fbad1@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=72a43cdb78469f7fbad1
Tested-by: syzbot+72a43cdb78469f7fbad1@syzkaller.appspotmail.com
Signed-off-by: Qing Wang
---
 kernel/bpf/stackmap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index da3d328f5c15..f97d4aa9d038 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -460,7 +460,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 
 	max_depth = stack_map_calculate_max_depth(size, elem_size, flags);
 
-	if (may_fault)
+	if (!trace_in)
 		rcu_read_lock(); /* need RCU for perf's callchain below */
 
 	if (trace_in) {
@@ -474,7 +474,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	if (unlikely(!trace) || trace->nr < skip) {
-		if (may_fault)
+		if (!trace_in)
 			rcu_read_unlock();
 		goto err_fault;
 	}
@@ -494,7 +494,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	/* trace/ips should not be dereferenced after this point */
-	if (may_fault)
+	if (!trace_in)
 		rcu_read_unlock();
 
 	if (user_build_id)
-- 
2.34.1
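
For readers following the thread, below is a condensed sketch (my own
simplification, not the literal kernel source) of the locking structure
__bpf_get_stack() ends up with after this change. Names mirror
kernel/bpf/stackmap.c; the arguments to get_perf_callchain() and the
copy-out logic are elided, and error handling is simplified.

/*
 * Illustrative sketch only: RCU is entered and exited exactly when no
 * pre-captured trace was passed in (trace_in == NULL), i.e. whenever
 * perf's callchain machinery will actually be consulted, independent
 * of may_fault.
 */
static long __bpf_get_stack_sketch(struct perf_callchain_entry *trace_in,
				   bool may_fault)
{
	struct perf_callchain_entry *trace;

	if (!trace_in)
		rcu_read_lock();	/* need RCU for perf's callchain below */

	if (trace_in)
		trace = trace_in;	/* caller-captured trace: no RCU needed */
	else
		trace = get_perf_callchain(/* regs, kernel, user, ... */);

	if (unlikely(!trace)) {
		if (!trace_in)
			rcu_read_unlock();
		return -EFAULT;
	}

	/* ... copy the needed frames out of trace->ip[] here ... */

	/* trace/ips should not be dereferenced after this point */
	if (!trace_in)
		rcu_read_unlock();

	return 0;
}

The key property is that the unlock condition matches the lock
condition (!trace_in), so a trace obtained from get_perf_callchain()
is never dereferenced outside the RCU read-side critical section.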