When allocating the slabobj_ext array in alloc_slab_obj_exts(), the array
can end up being allocated from the same slab that it is being allocated
for. This led to obj_exts_in_slab() incorrectly returning true [1], even
though the array is not allocated from the wasted space of the slab.

Vlastimil Babka observed that this problem should be fixed even when
ignoring its incompatibility with obj_exts_in_slab(), because it creates
slabs that are never freed, as they always contain at least one allocated
object (the obj_exts array itself).

To avoid this, use the next kmalloc size, or a large kmalloc, when
kmalloc_slab() returns the same cache the array is being allocated for.

In the case of random kmalloc caches, there are multiple kmalloc caches
for the same size and the cache is selected based on the caller address.
Because it is fragile to ensure that the same caller address is passed to
kmalloc_slab(), kmalloc_noprof(), and kmalloc_node_noprof(), fall back to
(s->object_size + 1) when the sizes are equal.

Note that this doesn't happen when memory allocation profiling is
disabled: when the allocation of the array is triggered by the memory
cgroup code (KMALLOC_CGROUP), the array is allocated from a KMALLOC_NORMAL
cache.

Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com [1]
Cc: stable@vger.kernel.org
Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths")
Signed-off-by: Harry Yoo
---
 mm/slub.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 55 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3ff1c475b0f1..43ddb96c4081 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2104,6 +2104,52 @@ static inline void init_slab_obj_exts(struct slab *slab)
         slab->obj_exts = 0;
 }
 
+/*
+ * Calculate the allocation size for the slabobj_ext array.
+ *
+ * When memory allocation profiling is enabled, the obj_exts array
+ * could be allocated from the same slab cache it's being allocated for.
+ * This would prevent the slab from ever being freed because it would
+ * always contain at least one allocated object (its own obj_exts array).
+ *
+ * To avoid this, increase the allocation size when we detect the array
+ * would come from the same cache, forcing it to use a different cache.
+ */
+static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
+                                         struct slab *slab, gfp_t gfp)
+{
+        size_t sz = sizeof(struct slabobj_ext) * slab->objects;
+        struct kmem_cache *obj_exts_cache;
+
+        /*
+         * slabobj_ext arrays for KMALLOC_CGROUP allocations
+         * are served from KMALLOC_NORMAL caches.
+         */
+        if (!mem_alloc_profiling_enabled())
+                return sz;
+
+        if (sz > KMALLOC_MAX_CACHE_SIZE)
+                return sz;
+
+        obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);
+        if (s == obj_exts_cache)
+                return obj_exts_cache->object_size + 1;
+
+        /*
+         * Random kmalloc caches have multiple caches per size, and the
+         * cache is selected by the caller address. Since the caller address
+         * may differ between kmalloc_slab() and the actual allocation, bump
+         * the size when both are normal kmalloc caches of the same size.
+         */
+        if (IS_ENABLED(CONFIG_RANDOM_KMALLOC_CACHES) &&
+            is_kmalloc_normal(s) &&
+            is_kmalloc_normal(obj_exts_cache) &&
+            (s->object_size == obj_exts_cache->object_size))
+                return obj_exts_cache->object_size + 1;
+
+        return sz;
+}
+
 int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
                         gfp_t gfp, bool new_slab)
 {
@@ -2112,26 +2158,26 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
         unsigned long new_exts;
         unsigned long old_exts;
         struct slabobj_ext *vec;
+        size_t sz;
 
         gfp &= ~OBJCGS_CLEAR_MASK;
         /* Prevent recursive extension vector allocation */
         gfp |= __GFP_NO_OBJ_EXT;
 
+        sz = obj_exts_alloc_size(s, slab, gfp);
+
         /*
          * Note that allow_spin may be false during early boot and its
          * restricted GFP_BOOT_MASK. Due to kmalloc_nolock() only supporting
          * architectures with cmpxchg16b, early obj_exts will be missing for
          * very early allocations on those.
          */
-        if (unlikely(!allow_spin)) {
-                size_t sz = objects * sizeof(struct slabobj_ext);
-
+        if (unlikely(!allow_spin))
                 vec = kmalloc_nolock(sz, __GFP_ZERO | __GFP_NO_OBJ_EXT,
                                      slab_nid(slab));
-        } else {
-                vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
-                                   slab_nid(slab));
-        }
+        else
+                vec = kmalloc_node(sz, gfp | __GFP_ZERO, slab_nid(slab));
+
         if (!vec) {
                 /*
                  * Try to mark vectors which failed to allocate.
@@ -2145,6 +2191,8 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
                 return -ENOMEM;
         }
 
+        VM_WARN_ON_ONCE(virt_to_slab(vec)->slab_cache == s);
+
         new_exts = (unsigned long)vec;
         if (unlikely(!allow_spin))
                 new_exts |= OBJEXTS_NOSPIN_ALLOC;
-- 
2.43.0