The get_branch_snapshot test checks that bpf_get_branch_snapshot()
doesn't waste too many branch entries on infrastructure overhead. The
threshold of < 10 was calibrated for x86, where about 7 entries are
wasted. On ARM64, the BPF trampoline generates more branches than x86,
resulting in about 13 wasted entries.

The overhead comes from the BPF trampoline calling
__bpf_prog_enter_recur, which on ARM64 makes out-of-line calls to
__rcu_read_lock and generates more conditional branches than x86:

  [#12] bpf_testmod_loop_test+0x40     -> bpf_trampoline_...+0x48
  [#11] bpf_trampoline_...+0x68        -> __bpf_prog_enter_recur+0x0
  [#10] __bpf_prog_enter_recur+0x20    -> __bpf_prog_enter_recur+0x118
  [#09] __bpf_prog_enter_recur+0x154   -> __bpf_prog_enter_recur+0x160
  [#08] __bpf_prog_enter_recur+0x164   -> __bpf_prog_enter_recur+0x2c
  [#07] __bpf_prog_enter_recur+0x2c    -> __rcu_read_lock+0x0
  [#06] __rcu_read_lock+0x18           -> __bpf_prog_enter_recur+0x30
  [#05] __bpf_prog_enter_recur+0x9c    -> __bpf_prog_enter_recur+0xf0
  [#04] __bpf_prog_enter_recur+0xf4    -> __bpf_prog_enter_recur+0xa8
  [#03] __bpf_prog_enter_recur+0xb8    -> __bpf_prog_enter_recur+0x100
  [#02] __bpf_prog_enter_recur+0x114   -> bpf_trampoline_...+0x6c
  [#01] bpf_trampoline_...+0x78        -> bpf_prog_...test1+0x0
  [#00] bpf_prog_...test1+0x58         -> arm_brbe_snapshot_branch_stack+0x0

Use an architecture-specific threshold of < 14 for ARM64 to
accommodate this overhead while still detecting regressions.
Signed-off-by: Puranjay Mohan
---
 .../selftests/bpf/prog_tests/get_branch_snapshot.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
index 0394a1156d99..8d1a3480767f 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
@@ -116,13 +116,18 @@ void serial_test_get_branch_snapshot(void)
 
 	ASSERT_GT(skel->bss->test1_hits, 6, "find_looptest_in_lbr");
 
-	/* Given we stop LBR in software, we will waste a few entries.
+	/* Given we stop LBR/BRBE in software, we will waste a few entries.
 	 * But we should try to waste as few as possible entries. We are at
-	 * about 7 on x86_64 systems.
-	 * Add a check for < 10 so that we get heads-up when something
-	 * changes and wastes too many entries.
+	 * about 7 on x86_64 and about 13 on arm64 systems (the arm64 BPF
+	 * trampoline generates more branches than x86_64).
+	 * Add a check so that we get heads-up when something changes and
+	 * wastes too many entries.
 	 */
+#if defined(__aarch64__)
+	ASSERT_LT(skel->bss->wasted_entries, 14, "check_wasted_entries");
+#else
 	ASSERT_LT(skel->bss->wasted_entries, 10, "check_wasted_entries");
+#endif
 
 cleanup:
 	get_branch_snapshot__destroy(skel);
-- 
2.52.0