Eulgyu Kim reported a slab-use-after-free when a set is resized while
garbage collection runs in parallel. Resizing may run concurrently with
an already running gc, or gc may start and notice that resizing has
begun. Whichever operation finishes last must destroy the original set.
The test for this is: "I was the last user of the set and it was
resized". However, the counters in resizing were set in the order "the
set will be resized" followed by "I'm going to use the set". That
opened a small race window at the testing phase. Fix the ordering in
the resizing function.

Reported-by: Eulgyu Kim
Signed-off-by: Jozsef Kadlecsik
---
 net/netfilter/ipset/ip_set_hash_gen.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 71b57c731dcb..023a3d7aeba0 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -681,8 +681,9 @@ mtype_resize(struct ip_set *set, bool retried)
	 * between the original and resized sets.
	 */
	orig = ipset_dereference_bh_nfnl(h->table);
-	atomic_set(&orig->ref, 1);
	atomic_inc(&orig->uref);
+	smp_mb__after_atomic();
+	atomic_set(&orig->ref, 1);
	pr_debug("attempt to resize set %s from %u to %u, t %p\n",
		 set->name, orig->htable_bits, htable_bits, orig);
	for (r = 0; r < ahash_numof_locks(orig->htable_bits); r++) {
@@ -799,6 +800,7 @@ mtype_resize(struct ip_set *set, bool retried)
 cleanup:
	rcu_read_unlock_bh();
	atomic_set(&orig->ref, 0);
+	smp_mb__before_atomic();
	atomic_dec(&orig->uref);
	mtype_ahash_destroy(set, t, false);
	if (ret == -EAGAIN)
-- 
2.39.5