From: Mykyta Yatsenko

When a BPF_F_LOCK update races with a concurrent delete, the freed
element can be immediately recycled by alloc_htab_elem(). The fast path
in htab_map_update_elem() performs a lockless lookup and then calls
copy_map_value_locked() under the element's spin_lock. If
alloc_htab_elem() recycles the same memory, it overwrites the value
with plain copy_map_value(), without taking the spin_lock, causing
torn writes.

Use copy_map_value_locked() when BPF_F_LOCK is set so the new element's
value is written under the embedded spin_lock, serializing against any
stale lock holders.

Fixes: 96049f3afd50 ("bpf: introduce BPF_F_LOCK flag")
Reported-by: Aaron Esau
Closes: https://lore.kernel.org/all/CADucPGRvSRpkneb94dPP08YkOHgNgBnskTK6myUag_Mkjimihg@mail.gmail.com/
Signed-off-by: Mykyta Yatsenko
---
 kernel/bpf/hashtab.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index bc6bc8bb871d..f7ac1ec7be8b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1138,6 +1138,10 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 	} else if (fd_htab_map_needs_adjust(htab)) {
 		size = round_up(size, 8);
 		memcpy(htab_elem_value(l_new, key_size), value, size);
+	} else if (map_flags & BPF_F_LOCK) {
+		copy_map_value_locked(&htab->map,
+				      htab_elem_value(l_new, key_size),
+				      value, false);
 	} else {
 		copy_map_value(&htab->map, htab_elem_value(l_new, key_size), value);
 	}
-- 
2.52.0
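
For reference, a minimal userspace stress sketch of the interleaving
described above (not part of the patch). It assumes map_fd refers to a
BPF_MAP_TYPE_HASH map whose value embeds a struct bpf_spin_lock (such a
map must be created with BTF, e.g. from a skeleton); the struct
map_value layout and the function names below are hypothetical. One
thread issues BPF_F_LOCK updates on an existing key while the other
deletes and re-inserts the same key, so alloc_htab_elem() keeps
recycling the freed element.

/* Hypothetical stress sketch: hammer one key from two threads so a
 * BPF_F_LOCK fast-path update can race with an element being freed
 * and recycled by alloc_htab_elem().
 */
#include <pthread.h>
#include <string.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

struct map_value {
	struct bpf_spin_lock lock;	/* required for BPF_F_LOCK */
	__u64 payload[4];
};

static void *locked_updater(void *arg)
{
	int map_fd = *(int *)arg;
	__u32 key = 0;
	struct map_value val = {};

	for (;;) {
		memset(val.payload, 0xa5, sizeof(val.payload));
		/* Fast path: copies under the element's spin_lock. */
		bpf_map_update_elem(map_fd, &key, &val, BPF_F_LOCK);
	}
	return NULL;
}

static void *delete_recycler(void *arg)
{
	int map_fd = *(int *)arg;
	__u32 key = 0;
	struct map_value val = {};

	for (;;) {
		/* Free the element, then force alloc_htab_elem() to
		 * recycle it; before this fix the recycled value was
		 * written with plain copy_map_value(), i.e. without
		 * taking the embedded spin_lock.
		 */
		bpf_map_delete_elem(map_fd, &key);
		bpf_map_update_elem(map_fd, &key, &val, BPF_F_LOCK);
	}
	return NULL;
}

void stress_lock_vs_delete(int map_fd)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, locked_updater, &map_fd);
	pthread_create(&t2, NULL, delete_recycler, &map_fd);
	/* Both loops run forever; join just parks the caller. */
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
}

Actually observing the torn write would additionally need a reader that
fetches the value with bpf_map_lookup_elem_flags(..., BPF_F_LOCK) and
checks that the payload pattern is consistent; the sketch above only
maximizes the race window.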