| Published | Title | Version | Author | Status | 
|---|---|---|---|---|
| 2025-10-29 01:43 UTC | codetag: debug: handle existing CODETAG_EMPTY in mark_objexts_empty for slabobj_ext | 2 | hao.ge@linux.dev | finished in 44m0s | 
| 2025-10-28 13:58 UTC | Eliminate Dying Memory Cgroup | 1 | qi.zheng@linux.dev | finished in 3h43m0s [1 findings] | 
| 2025-10-27 12:28 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space | 1 | harry.yoo@oracle.com | skipped | 
| 2025-10-27 12:00 UTC | mm/slab: ensure all metadata in slab object are word-aligned | 1 | harry.yoo@oracle.com | finished in 3h58m0s | 
| 2025-10-27 08:52 UTC | codetag: debug: Handle existing CODETAG_EMPTY in mark_objexts_empty for slabobj_ext | 1 | hao.ge@linux.dev | finished in 43m0s | 
| 2025-10-26 01:05 UTC | Introduce per-cgroup compression priority | 1 | jinji.z.zhong@gmail.com | finished in 3h42m0s | 
| 2025-10-24 20:44 UTC | Prepare slab for memdescs | 3 | willy@infradead.org | finished in 3h55m0s | 
| 2025-10-24 17:06 UTC | slab: switch away from the legacy param parser | 1 | ptesarik@suse.com | finished in 4h5m0s | 
| 2025-10-23 14:33 UTC | slab: Fix obj_ext is mistakenly considered NULL due to race condition | 2 | hao.ge@linux.dev | finished in 3h46m0s | 
| 2025-10-23 13:52 UTC | slab: replace cpu (partial) slabs with sheaves | 1 | vbabka@suse.cz | skipped | 
| 2025-10-23 13:16 UTC | mm/slab: ensure all metadata in slab object are word-aligned | 1 | harry.yoo@oracle.com | finished in 3h46m0s | 
| 2025-10-23 12:01 UTC | slab: fix slab accounting imbalance due to defer_deactivate_slab() | 2 | vbabka@suse.cz | finished in 3h44m0s | 
| 2025-10-23 01:21 UTC | slab: Fix obj_ext is mistakenly considered NULL due to race condition | 1 | hao.ge@linux.dev | finished in 50m0s | 
| 2025-10-22 17:23 UTC | slab: perform inc_slabs_node() as part of new_slab() | 1 | vbabka@suse.cz | finished in 4h4m0s | 
| 2025-10-21 23:44 UTC | memcg: manually uninline __memcg_memory_event | 1 | shakeel.butt@linux.dev | finished in 3h50m0s | 
| 2025-10-21 11:00 UTC | mm: Remove reference to destructor in comment in calculate_sizes() | 1 | william.kucharski@oracle.com | finished in 55m0s | 
| 2025-10-21 01:03 UTC | slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts | 3 | hao.ge@linux.dev | finished in 3h43m0s | 
| 2025-10-20 14:30 UTC | slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts | 2 | hao.ge@linux.dev | finished in 3h54m0s | 
| 2025-10-17 06:48 UTC | slab: add flags in cache_show | 1 | kassey.li@oss.qualcomm.com | finished in 4h0m0s | 
| 2025-10-17 04:57 UTC | slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts | 1 | hao.ge@linux.dev | finished in 38m0s | 
| 2025-10-16 16:10 UTC | memcg: net: track network throttling due to memcg memory pressure | 2 | shakeel.butt@linux.dev | finished in 3h44m0s | 
| 2025-10-16 01:31 UTC | memcg: net: track network throttling due to memcg memory pressure | 1 | shakeel.butt@linux.dev | finished in 3h55m0s | 
| 2025-10-15 14:16 UTC | slab: reset obj_ext when it is not actually valid during freeing | 5 | hao.ge@linux.dev | finished in 2h32m0s | 
| 2025-10-15 12:59 UTC | slab: clear OBJEXTS_ALLOC_FAIL when freeing a slab | 4 | hao.ge@linux.dev | finished in 2h50m0s [1 findings] | 
| 2025-10-15 06:35 UTC | reparent the THP split queue | 5 | qi.zheng@linux.dev | finished in 4h5m0s | 
| 2025-10-14 15:27 UTC | slab: Add check for memcg_data != OBJEXTS_ALLOC_FAIL in folio_memcg_kmem | 3 | hao.ge@linux.dev | finished in 3h42m0s | 
| 2025-10-14 14:08 UTC | slab: Add check for memcg_data's upper bits in folio_memcg_kmem | 2 | hao.ge@linux.dev | finished in 4h4m0s | 
| 2025-10-14 12:17 UTC | mempool: clarify behavior of mempool_alloc_preallocated() | 1 | thomas.weissschuh@linutronix.de | finished in 38m0s | 
| 2025-10-14 09:31 UTC | slab: Introduce __SECOND_OBJEXT_FLAG for objext_flags | 1 | hao.ge@linux.dev | finished in 1h20m0s [1 findings] | 
| 2025-10-14 08:40 UTC | slab: fix clearing freelist in free_deferred_objects() | 1 | vbabka@suse.cz | finished in 3h49m0s | 
| 2025-10-11 08:45 UTC | slab: fix barn NULL pointer dereference on memoryless nodes | 1 | vbabka@suse.cz | finished in 3h39m0s | 
| 2025-10-07 12:50 UTC | memcg: expose socket memory pressure in a cgroup | 5 | daniel.sedlak@cdn77.com | finished in 3h51m0s | 
| 2025-10-07 05:25 UTC | slub: Don't call lockdep_unregister_key() for immature kmem_cache. | 1 | kuniyu@google.com | finished in 3h47m0s | 
| 2025-10-06 17:51 UTC | mm: readahead: make thp readahead conditional to mmap_miss logic | 3 | roman.gushchin@linux.dev | finished in 3h48m0s | 
| 2025-10-06 01:54 UTC | mm: readahead: make thp readahead conditional to mmap_miss logic | 2 | roman.gushchin@linux.dev | finished in 3h46m0s | 
| 2025-10-03 20:38 UTC | mm/zswap: misc cleanup of code and documentations | 1 | sj@kernel.org | finished in 3h44m0s | 
| 2025-10-03 16:53 UTC | reparent the THP split queue | 4 | qi.zheng@linux.dev | finished in 3h48m0s | 
| 2025-09-30 08:34 UTC | slab: Fix using this_cpu_ptr() in preemptible context | 1 | ranxiaokai627@163.com | finished in 4h38m0s | 
| 2025-09-30 06:38 UTC | slab: Add allow_spin check to eliminate kmemleak warnings | 1 | ranxiaokai627@163.com | finished in 42m0s [1 findings] | 
| 2025-09-30 05:48 UTC | mm: readahead: make thp readahead conditional to mmap_miss logic | 1 | roman.gushchin@linux.dev | finished in 3h44m0s | 
| 2025-09-29 00:26 UTC | mm: Fix some typos in mm module | 2 | jianyungao89@gmail.com | finished in 38m0s | 
| 2025-09-28 11:16 UTC | reparent the THP split queue | 3 | zhengqi.arch@bytedance.com | finished in 3h42m0s | 
| 2025-09-27 08:06 UTC | mm: Fix some typos in mm module | 1 | jianyungao89@gmail.com | finished in 39m0s | 
| 2025-09-26 08:06 UTC | alloc_tag: Fix boot failure due to NULL pointer dereference | 1 | ranxiaokai627@163.com | finished in 40m0s | 
| 2025-09-24 14:59 UTC | mm: ASI direct map management | 1 | jackmanb@google.com | finished in 40m0s [1 findings] | 
| 2025-09-23 09:16 UTC | reparent the THP split queue | 2 | zhengqi.arch@bytedance.com | finished in 3h44m0s | 
| 2025-09-22 22:02 UTC | memcg: skip cgroup_file_notify if spinning is not allowed | 2 | shakeel.butt@linux.dev | finished in 3h46m0s | 
| 2025-09-22 17:03 UTC | mm/slab: Add size validation in kmalloc_array_* functions | 1 | viswanathiyyappan@gmail.com | finished in 3h44m0s | 
| 2025-09-22 02:14 UTC | mm/thp: fix MTE tag mismatch when replacing zero-filled subpages | 1 | lance.yang@linux.dev | skipped | 
| 2025-09-19 03:46 UTC | reparent the THP split queue | 1 | zhengqi.arch@bytedance.com | finished in 3h36m0s | 
| 2025-09-17 21:29 UTC | memcg: Don't wait writeback completion when release memcg. | 6 | sunjunchao@bytedance.com | finished in 3h44m0s | 
| 2025-09-16 16:01 UTC | fixup: alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | skipped | 
| 2025-09-15 23:02 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 2 | surenb@google.com | finished in 43m0s | 
| 2025-09-15 20:09 UTC | fixes for slab->obj_exts allocation failure handling | 1 | surenb@google.com | finished in 3h44m0s | 
| 2025-09-15 19:51 UTC | CMA balancing | 1 | fvdl@google.com | finished in 3h44m0s | 
| 2025-09-15 13:55 UTC | slab: struct slab pointer validation improvements | 2 | vbabka@suse.cz | finished in 1h7m0s | 
| 2025-09-11 17:02 UTC | slab: struct slab pointer validation improvements | 1 | vbabka@suse.cz | finished in 3h37m0s | 
| 2025-09-10 11:54 UTC | Prepare slab for memdescs | 2 | willy@infradead.org | skipped | 
| 2025-09-10 08:01 UTC | SLUB percpu sheaves | 8 | vbabka@suse.cz | finished in 28m0s [1 findings] | 
| 2025-09-10 00:59 UTC | mm/slub: Removing unnecessary variable accesses in the get_freelist() | 1 | rgbi3307@gmail.com | finished in 1h5m0s [1 findings] | 
| 2025-09-09 23:49 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | finished in 53m0s | 
| 2025-09-09 09:52 UTC | bpf/helpers: Use __GFP_HIGH instead of GFP_ATOMIC in __bpf_async_init() | 2 | yepeilin@google.com | finished in 3h53m0s | 
| 2025-09-09 07:48 UTC | mm/slub: Use folio_nr_pages() in __free_slab() | 1 | husong@kylinos.cn | finished in 3h43m0s | 
| 2025-09-09 06:53 UTC | mm: swap: Gather swap entries and batch async release | 0 | liulei.rjpt@vivo.com | finished in 3h46m0s | 
| 2025-09-09 01:33 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 3 | ye.liu@linux.dev | finished in 42m0s | 
| 2025-09-08 09:42 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 2 | ye.liu@linux.dev | finished in 53m0s | 
| 2025-09-08 07:19 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 1 | ye.liu@linux.dev | finished in 53m0s | 
| 2025-09-05 23:45 UTC | bpf/helpers: Use __GFP_HIGH instead of GFP_ATOMIC in __bpf_async_init() | 1 | yepeilin@google.com | finished in 3h47m0s | 
| 2025-09-05 20:16 UTC | memcg: skip cgroup_file_notify if spinning is not allowed | 1 | shakeel.butt@linux.dev | finished in 3h43m0s | 
| 2025-09-05 09:38 UTC | mm/memcg: v1: account event registrations and drop world-writable cgroup.event_control | 1 | stanislav.fort@aisle.com | finished in 3h39m0s | 
| 2025-09-04 18:12 UTC | mm/memcg: v1: account event registrations and drop world-writable cgroup.event_control | 1 | stanislav.fort@aisle.com | finished in 3h48m0s | 
| 2025-09-03 12:59 UTC | SLUB percpu sheaves | 7 | vbabka@suse.cz | finished in 3h57m0s | 
| 2025-09-03 07:30 UTC | samples/cgroup: rm unused MEMCG_EVENTS macro | 1 | zhangjiao2@cmss.chinamobile.com | finished in 53m0s | 
| 2025-09-01 08:46 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 5 | yangshiguang1011@163.com | finished in 4h0m0s | 
| 2025-08-30 02:09 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 4 | yangshiguang1011@163.com | finished in 3h40m0s | 
| 2025-08-29 15:47 UTC | Prepare slab for memdescs | 1 | willy@infradead.org | skipped | 
| 2025-08-27 11:37 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space | 1 | harry.yoo@oracle.com | finished in 3h49m0s | 
| 2025-08-27 08:26 UTC | SLUB percpu sheaves | 6 | vbabka@suse.cz | finished in 3h53m0s | 
| 2025-08-26 12:16 UTC | memcg: Don't wait writeback completion when release memcg. | 2 | sunjunchao@bytedance.com | finished in 3h39m0s | 
| 2025-08-26 06:23 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 2 | visitorckw@gmail.com | finished in 3h42m0s | 
| 2025-08-25 15:44 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 1 | elver@google.com | finished in 3h51m0s | 
| 2025-08-25 12:17 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 3 | yangshiguang1011@163.com | finished in 3h44m0s | 
| 2025-08-25 01:34 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 1 | visitorckw@gmail.com | finished in 3h38m0s | 
| 2025-08-21 22:51 UTC | mm: fix CONFIG_MEMCG build for AS_KERNEL_FILE | 1 | boris@bur.io | finished in 36m0s | 
| 2025-08-21 21:55 UTC | introduce kernel file mapped folios | 4 | boris@bur.io | finished in 3h41m0s | 
| 2025-08-19 00:36 UTC | introduce uncharged file mapped folios | 3 | boris@bur.io | finished in 3h45m0s | 
| 2025-08-18 18:46 UTC | memcg: remove warning from folio_lruvec | 1 | shakeel.butt@linux.dev | finished in 3h55m0s | 
| 2025-08-18 17:01 UTC | mm: BPF OOM | 1 | roman.gushchin@linux.dev | finished in 3h55m0s | 
| 2025-08-15 20:16 UTC | net-memcg: Gather memcg code under CONFIG_MEMCG. | 5 | kuniyu@google.com | finished in 3h45m0s | 
| 2025-08-15 18:32 UTC | mm: readahead: improve mmap_miss heuristic for concurrent faults | 1 | roman.gushchin@linux.dev | finished in 3h55m0s | 
| 2025-08-14 20:08 UTC | net-memcg: Gather memcg code under CONFIG_MEMCG. | 4 | kuniyu@google.com | finished in 3h49m0s | 
| 2025-08-14 11:16 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 2 | yangshiguang1011@163.com | finished in 3h43m0s | 
| 2025-08-13 14:57 UTC | memcg: Optimize exit to user space | 1 | tglx@linutronix.de | finished in 3h48m0s | 
| 2025-08-12 17:58 UTC | net-memcg: Decouple controlled memcg from sk->sk_prot->memory_allocated. | 3 | kuniyu@google.com | finished in 3h40m0s | 
| 2025-08-12 08:30 UTC | mempool: rename struct mempool_s to struct mempool | 1 | hch@lst.de | finished in 3h39m0s | 
| 2025-08-11 17:30 UTC | net-memcg: Decouple controlled memcg from sk->sk_prot->memory_allocated. | 2 | kuniyu@google.com | finished in 31m0s [1 findings] | 
| 2025-08-05 06:44 UTC | memcg: expose socket memory pressure in a cgroup | 4 | daniel.sedlak@cdn77.com | finished in 4h35m0s | 
| 2025-07-22 07:11 UTC | memcg: expose socket memory pressure in a cgroup | 3 | daniel.sedlak@cdn77.com | finished in 1h38m0s | 
| 2025-07-21 20:35 UTC | net-memcg: Allow decoupling memcg from sk->sk_prot->memory_allocated. | 1 | kuniyu@google.com | skipped | 