| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2026-05-13 10:57 UTC | kasan: hw_tags: some micro-optimizations | 1 | dev.jain@arm.com | finished in 4h41m0s |
| 2026-05-12 21:05 UTC | mm/virtio: skip redundant zeroing of host-zeroed pages | 7 | mst@redhat.com | finished in 4h52m0s |
| 2026-05-12 03:50 UTC | mm/slub: hold cpus_read_lock around flush_rcu_sheaves_on_cache() | 3 | wangqing7171@gmail.com | finished in 5h2m0s |
| 2026-05-12 03:46 UTC | mm/slub: hold cpus_read_lock around flush_rcu_sheaves_on_cache() | 2 | wangqing7171@gmail.com | finished in 5h23m0s |
| 2026-05-11 20:00 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 4 | elver@google.com | finished in 4h57m0s |
| 2026-05-11 09:01 UTC | mm/virtio: skip redundant zeroing of host-zeroed pages | 6 | mst@redhat.com | finished in 5h24m0s |
| 2026-05-08 08:21 UTC | mm/slub: hold cpus_read_lock around flush_rcu_sheaves_on_cache() | 1 | wangqing7171@gmail.com | finished in 4h55m0s |
| 2026-05-07 17:37 UTC | rcu: Add debugfs interface for pending callback monitoring | 1 | gustavold@gmail.com | finished in 4h18m0s |
| 2026-04-30 14:04 UTC | mm/slub: defer freelist construction until after bulk allocation from a new slab | 8 | hu.shengming@zte.com.cn | finished in 5h30m0s |
| 2026-04-30 08:35 UTC | mm/slub: initialize allocated object's freepointer before debug check | 1 | hu.shengming@zte.com.cn | finished in 4h55m0s |
| 2026-04-27 09:42 UTC | mm, slab: add an optimistic __slab_try_return_freelist() | 2 | vbabka@kernel.org | finished in 4h59m0s |
| 2026-04-27 07:09 UTC | mm/page_alloc,slab: return NULL early from *_nolock() memory allocation APIs in NMI on UP | 2 | harry@kernel.org | finished in 1h16m0s |
| 2026-04-24 13:24 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 3 | elver@google.com | finished in 4h24m0s |
| 2026-04-21 14:49 UTC | mm, slab: add an optimistic __slab_try_return_freelist() | 1 | vbabka@kernel.org | finished in 4h7m0s |
| 2026-04-18 00:06 UTC | docs: Add overview and SLUB allocator sections to slab documentation | 1 | sef1548@gmail.com | finished in 1h24m0s |
| 2026-04-16 13:25 UTC | slub: fix data loss and overflow in krealloc() | 1 | elver@google.com | finished in 4h16m0s |
| 2026-04-16 09:10 UTC | kvfree_rcu() improvements | 2 | harry@kernel.org | finished in 4h14m0s |
| 2026-04-15 14:37 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 2 | elver@google.com | finished in 4h37m0s |
| 2026-04-15 08:52 UTC | mm/slub: defer freelist construction until after bulk allocation from a new slab | 7 | hu.shengming@zte.com.cn | finished in 4h31m0s |
| 2026-04-13 15:04 UTC | mm/slub: defer freelist construction until after bulk allocation from a new slab | 6 | hu.shengming@zte.com.cn | finished in 4h28m0s |
| 2026-04-10 11:16 UTC | slub: spill refill leftover objects into percpu sheaves | 1 | hao.li@linux.dev | finished in 4h36m0s |
| 2026-04-09 12:43 UTC | mm/slub: defer freelist construction until after bulk allocation from a new slab | 5 | hu.shengming@zte.com.cn | skipped |
| 2026-04-08 15:28 UTC | mm/slub: defer freelist construction until after bulk allocation from a new slab | 4 | hu.shengming@zte.com.cn | finished in 1h15m0s |
| 2026-04-07 11:59 UTC | slub: clarify kmem_cache_refill_sheaf() comments | 3 | hao.li@linux.dev | finished in 1h16m0s |
| 2026-04-07 09:59 UTC | slub: clarify kmem_cache_refill_sheaf() comments | 2 | hao.li@linux.dev | finished in 57m0s |
| 2026-04-06 13:50 UTC | mm/slub: defer freelist construction until after bulk allocation from a new slab | 3 | hu.shengming@zte.com.cn | finished in 1h8m0s |
| 2026-04-06 09:09 UTC | slub_kunit: add a test case for {kmalloc,kfree}_nolock | 1 | harry@kernel.org | finished in 56m0s |
| 2026-04-03 07:37 UTC | slub: use N_NORMAL_MEMORY in can_free_to_pcs to handle remote frees | 1 | hao.li@linux.dev | finished in 15m0s |
| 2026-04-01 10:59 UTC | slab: remove the SLUB_DEBUG functionality and config option | 1 | vbabka@kernel.org | finished in 4h23m0s |
| 2026-04-01 04:57 UTC | mm/slub: skip freelist construction for whole-slab bulk refill | 2 | hu.shengming@zte.com.cn | finished in 1h13m0s |
| 2026-03-31 11:12 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 1 | elver@google.com | finished in 4h32m0s |
| 2026-03-30 12:05 UTC | slub_kunit: add a test case for {kmalloc,kfree}_nolock | 1 | harry@kernel.org | finished in 1h16m0s |
| 2026-03-30 03:57 UTC | mm/memory_hotplug: maintain N_NORMAL_MEMORY during hotplug | 2 | hao.li@linux.dev | finished in 4h26m0s |
| 2026-03-28 04:55 UTC | mm/slub: skip freelist construction for whole-slab bulk refill | 1 | hu.shengming@zte.com.cn | finished in 4h3m0s |
| 2026-03-27 12:42 UTC | mm/memory_hotplug: maintain N_NORMAL_MEMORY during hotplug | 1 | hao.li@linux.dev | finished in 4h40m0s |
| 2026-03-27 05:58 UTC | mm/slab: align kmalloc to cacheline when DMA API debugging is active | 1 | mikhail.v.gavrilov@gmail.com | finished in 1h8m0s |
| 2026-03-24 21:35 UTC | slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period | 1 | jannh@google.com | finished in 1h22m0s |
| 2026-03-19 16:03 UTC | mm: Switch gfp_t to unsigned long | 1 | jackmanb@google.com | finished in 4h11m0s |
| 2026-03-12 11:42 UTC | slub: clarify kmem_cache_refill_sheaf() failure behavior | 1 | hao.li@linux.dev | finished in 1h23m0s |
| 2026-03-11 18:22 UTC | slab: remove alloc_full_sheaf() | 1 | vbabka@kernel.org | skipped |
| 2026-03-11 09:36 UTC | slab: fix memory leak when refill_sheaf() fails | 1 | wangqing7171@gmail.com | finished in 4h9m0s |
| 2026-03-11 08:25 UTC | slab: support memoryless nodes with sheaves | 1 | vbabka@kernel.org | finished in 3h58m0s |
| 2026-03-10 11:38 UTC | fix kmem over-charging for embedded obj_exts array | 1 | ranxiaokai627@163.com | finished in 4h20m0s |
| 2026-03-09 07:22 UTC | mm/slab: fix an incorrect check in obj_exts_alloc_size() | 1 | harry.yoo@oracle.com | finished in 3h51m0s |
| 2026-03-03 13:57 UTC | mm/slab: change stride type from unsigned short to unsigned int | 1 | harry.yoo@oracle.com | finished in 4h0m0s |
| 2026-03-02 19:50 UTC | memcg: obj stock and slab stat caching cleanups | 1 | hannes@cmpxchg.org | finished in 4h9m0s |
| 2026-03-02 10:13 UTC | MAINTAINERS: add co-maintainer and reviewer for SLAB ALLOCATOR | 1 | vbabka@kernel.org | finished in 1h1m0s |
| 2026-02-27 03:07 UTC | mm/slab: a debug patch to investigate the issue further | 1 | harry.yoo@oracle.com | finished in 4h7m0s |
| 2026-02-26 11:51 UTC | memcg: fix slab accounting in refill_obj_stock() trylock path | 1 | hao.li@linux.dev | finished in 4h18m0s |
| 2026-02-23 13:33 UTC | mm/slab: pass __GFP_NOWARN to refill_sheaf() if fallback is available | 1 | harry.yoo@oracle.com | finished in 4h3m0s |
| 2026-02-23 07:58 UTC | mm/slab: initialize slab->stride early to avoid memory ordering issues | 1 | harry.yoo@oracle.com | finished in 4h13m0s |
| 2026-02-11 09:42 UTC | slab: distinguish lock and trylock for sheaf_flush_main() | 1 | vbabka@suse.cz | finished in 4h5m0s |
| 2026-02-10 08:18 UTC | fix lockdep warnings with kmalloc_nolock() | 1 | harry.yoo@oracle.com | skipped |
| 2026-02-10 04:46 UTC | mm/slab: support kmalloc_nolock() -> kfree[_rcu]() | 1 | harry.yoo@oracle.com | finished in 3h50m0s |
| 2026-02-09 12:10 UTC | mm/slab: support kmalloc_nolock() -> kfree[_rcu]() | 1 | harry.yoo@oracle.com | finished in 4h17m0s |
| 2026-02-06 17:13 UTC | mm/slab: fix lockdep warnings with kmalloc_nolock() | 1 | harry.yoo@oracle.com | skipped |
| 2026-02-06 09:34 UTC | k[v]free_rcu() improvements | 1 | harry.yoo@oracle.com | finished in 3h57m0s |
| 2026-02-05 12:07 UTC | slub: let need_slab_obj_exts() return false if SLAB_NO_OBJ_EXT is set | 1 | hao.li@linux.dev | finished in 3h48m0s |
| 2026-02-04 10:14 UTC | mm/slab: Add alloc_tagging_slab_free_hook for memcg_alloc_abort_single | 2 | hao.ge@linux.dev | finished in 3h57m0s |
| 2026-01-29 09:07 UTC | slub: avoid list_lock contention from __refill_objects_any() | 1 | vbabka@suse.cz | finished in 4h8m0s |
| 2026-01-27 10:31 UTC | Only allow SLAB_OBJ_EXT_IN_OBJ for unmergeable caches | 1 | harry.yoo@oracle.com | finished in 3h56m0s |
| 2026-01-26 12:57 UTC | mm/slab: avoid allocating slabobj_ext array from its own slab | 1 | harry.yoo@oracle.com | finished in 3h55m0s |
| 2026-01-24 10:46 UTC | mm/slab: avoid allocating slabobj_ext array from its own slab | 1 | harry.yoo@oracle.com | finished in 3h53m0s |
| 2026-01-23 06:52 UTC | slab: replace cpu (partial) slabs with sheaves | 4 | vbabka@suse.cz | finished in 3h49m0s |
| 2026-01-21 13:16 UTC | mm/slab: fix false lockdep warning in __kfree_rcu_sheaf() | 1 | harry.yoo@oracle.com | finished in 3h51m0s |
| 2026-01-21 06:57 UTC | slab: replace cache_from_obj() with inline checks | 2 | vbabka@suse.cz | finished in 4h18m0s |
| 2026-01-20 09:35 UTC | slab: replace cache_from_obj() with inline checks | 1 | vbabka@suse.cz | finished in 4h4m0s |
| 2026-01-16 14:40 UTC | slab: replace cpu (partial) slabs with sheaves | 3 | vbabka@suse.cz | skipped |
| 2026-01-13 06:18 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unsed slab space | 1 | harry.yoo@oracle.com | finished in 4h23m0s |
| 2026-01-12 15:16 UTC | slab: replace cpu (partial) slabs with sheaves | 2 | vbabka@suse.cz | finished in 4h11m0s |
| 2026-01-05 08:02 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unsed slab space | 1 | harry.yoo@oracle.com | finished in 4h5m0s |
| 2025-12-29 12:24 UTC | slub: clarify object field layout comments | 2 | hao.li@linux.dev | finished in 49m0s |
| 2025-12-24 12:51 UTC | slub: clarify object field layout comments | 1 | hao.li@linux.dev | finished in 1h0m0s |
| 2025-12-22 11:08 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unsed slab space | 1 | harry.yoo@oracle.com | finished in 3h52m0s |