Zhengchuan Liang reported that because resize does not copy the comment
extension into the resized set but reuses its pointer, an ongoing gc can
free the extension in the original set, which then leaves a stale
pointer in the resized one. The proposed patch was to recreate the
extensions for every element in the resized set. That is both expensive
and wastes memory, so it is better to skip gc when a resize in progress
is detected: resizing will destroy the original set anyway, so running
gc on it is unnecessary.

Reported-by: Zhengchuan Liang
Signed-off-by: Jozsef Kadlecsik
---
 net/netfilter/ipset/ip_set_hash_gen.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 377b4be9e4d5..71b57c731dcb 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -501,6 +501,8 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
 			continue;
 		pos = smp_load_acquire(&n->pos);
 		for (j = 0, d = 0; j < pos; j++) {
+			if (atomic_read(&t->ref))
+				goto resize_in_progress;
 			if (!test_bit(j, n->used)) {
 				d++;
 				continue;
 			}
@@ -552,6 +554,7 @@ mtype_gc_do(struct ip_set *set, struct htype *h, struct htable *t, u32 r)
 			kfree_rcu(n, rcu);
 		}
 	}
+resize_in_progress:
 	spin_unlock_bh(&t->hregion[r].lock);
 }
 
@@ -672,7 +675,10 @@ mtype_resize(struct ip_set *set, bool retried)
 		spin_lock_init(&t->hregion[i].lock);
 
 	/* There can't be another parallel resizing,
-	 * but dumping, gc, kernel side add/del are possible
+	 * but dumping, kernel side add/del are possible.
+	 * gc must detect ongoing resize when comments are in use
+	 * in order not to free the comment extension area shared
+	 * between the original and resized sets.
 	 */
 	orig = ipset_dereference_bh_nfnl(h->table);
 	atomic_set(&orig->ref, 1);
-- 
2.39.5