| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2025-10-24 20:44 UTC | Prepare slab for memdescs | 3 | willy@infradead.org | finished in 3h55m0s |
| 2025-10-24 17:06 UTC | slab: switch away from the legacy param parser | 1 | ptesarik@suse.com | finished in 4h5m0s |
| 2025-10-24 11:30 UTC | mm: MISC follow-up patches for linux/pgalloc.h | 1 | harry.yoo@oracle.com | finished in 4h5m0s |
| 2025-10-24 07:41 UTC | remove is_swap_[pte, pmd]() + non-swap confusion | 1 | lorenzo.stoakes@oracle.com | finished in 23m0s |
| 2025-10-23 14:33 UTC | slab: Fix obj_ext is mistakenly considered NULL due to race condition | 2 | hao.ge@linux.dev | finished in 3h46m0s |
| 2025-10-23 13:52 UTC | slab: replace cpu (partial) slabs with sheaves | 1 | vbabka@suse.cz | skipped |
| 2025-10-23 13:16 UTC | mm/slab: ensure all metadata in slab object are word-aligned | 1 | harry.yoo@oracle.com | finished in 3h46m0s |
| 2025-10-23 12:01 UTC | slab: fix slab accounting imbalance due to defer_deactivate_slab() | 2 | vbabka@suse.cz | finished in 3h44m0s |
| 2025-10-23 01:21 UTC | slab: Fix obj_ext is mistakenly considered NULL due to race condition | 1 | hao.ge@linux.dev | finished in 50m0s |
| 2025-10-22 17:23 UTC | slab: perform inc_slabs_node() as part of new_slab() | 1 | vbabka@suse.cz | finished in 4h4m0s |
| 2025-10-21 11:00 UTC | mm: Remove reference to destructor in comment in calculate_sizes() | 1 | william.kucharski@oracle.com | finished in 55m0s |
| 2025-10-21 06:35 UTC | mm/memory: Do not populate page table entries beyond i_size. | 1 | kirill@shutemov.name | finished in 3h49m0s |
| 2025-10-21 01:03 UTC | slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts | 3 | hao.ge@linux.dev | finished in 3h43m0s |
| 2025-10-20 14:30 UTC | slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts | 2 | hao.ge@linux.dev | finished in 3h54m0s |
| 2025-10-19 07:25 UTC | mm: MISC follow-up patches for linux/pgalloc.h | 1 | harry.yoo@oracle.com | finished in 40m0s |
| 2025-10-17 06:48 UTC | slab: add flags in cache_show | 1 | kassey.li@oss.qualcomm.com | finished in 4h0m0s |
| 2025-10-17 04:57 UTC | slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts | 1 | hao.ge@linux.dev | finished in 38m0s |
| 2025-10-15 14:16 UTC | slab: reset obj_ext when it is not actually valid during freeing | 5 | hao.ge@linux.dev | finished in 2h32m0s |
| 2025-10-15 12:59 UTC | slab: clear OBJEXTS_ALLOC_FAIL when freeing a slab | 4 | hao.ge@linux.dev | finished in 2h50m0s [1 findings] |
| 2025-10-15 06:35 UTC | reparent the THP split queue | 5 | qi.zheng@linux.dev | finished in 4h5m0s |
| 2025-10-15 00:07 UTC | bpf: Replace bpf_map_kmalloc_node() with kmalloc_nolock() to allocate bpf_async_cb structures. | 2 | alexei.starovoitov@gmail.com | finished in 3h42m0s |
| 2025-10-14 21:25 UTC | bpf: Replace bpf_map_kmalloc_node() with kmalloc_nolock() to allocate bpf_async_cb structures. | 1 | alexei.starovoitov@gmail.com | finished in 3h43m0s |
| 2025-10-14 19:49 UTC | Add a new test 'migrate.cow_after_fork' that verifies correct RMAP handling of Copy-On-Write pages after fork(). Before a write, parent and child share the same PFN; after a write, the child’s PFN differs, confirming proper COW duplication. | 1 | dalalitamar@gmail.com | finished in 59m0s |
| 2025-10-14 15:27 UTC | slab: Add check for memcg_data != OBJEXTS_ALLOC_FAIL in folio_memcg_kmem | 3 | hao.ge@linux.dev | finished in 3h42m0s |
| 2025-10-14 14:08 UTC | slab: Add check for memcg_data's upper bits in folio_memcg_kmem | 2 | hao.ge@linux.dev | finished in 4h4m0s |
| 2025-10-14 12:17 UTC | mempool: clarify behavior of mempool_alloc_preallocated() | 1 | thomas.weissschuh@linutronix.de | finished in 38m0s |
| 2025-10-14 09:31 UTC | slab: Introduce __SECOND_OBJEXT_FLAG for objext_flags | 1 | hao.ge@linux.dev | finished in 1h20m0s [1 findings] |
| 2025-10-14 08:40 UTC | slab: fix clearing freelist in free_deferred_objects() | 1 | vbabka@suse.cz | finished in 3h49m0s |
| 2025-10-11 08:45 UTC | slab: fix barn NULL pointer dereference on memoryless nodes | 1 | vbabka@suse.cz | finished in 3h39m0s |
| 2025-10-07 05:25 UTC | slub: Don't call lockdep_unregister_key() for immature kmem_cache. | 1 | kuniyu@google.com | finished in 3h47m0s |
| 2025-10-03 16:53 UTC | reparent the THP split queue | 4 | qi.zheng@linux.dev | finished in 3h48m0s |
| 2025-10-02 08:12 UTC | DEPT(DEPendency Tracker) | 17 | byungchul@sk.com | finished in 3h43m0s |
| 2025-09-30 08:34 UTC | slab: Fix using this_cpu_ptr() in preemptible context | 1 | ranxiaokai627@163.com | finished in 4h38m0s |
| 2025-09-30 08:10 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 5 | lance.yang@linux.dev | finished in 3h45m0s |
| 2025-09-30 07:10 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 4 | lance.yang@linux.dev | finished in 3h41m0s [1 findings] |
| 2025-09-30 06:38 UTC | slab: Add allow_spin check to eliminate kmemleak warnings | 1 | ranxiaokai627@163.com | finished in 42m0s [1 findings] |
| 2025-09-30 06:05 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 3 | lance.yang@linux.dev | finished in 3h46m0s [1 findings] |
| 2025-09-30 04:33 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 2 | lance.yang@linux.dev | finished in 3h44m0s |
| 2025-09-29 00:26 UTC | mm: Fix some typos in mm module | 2 | jianyungao89@gmail.com | finished in 38m0s |
| 2025-09-28 11:16 UTC | reparent the THP split queue | 3 | zhengqi.arch@bytedance.com | finished in 3h42m0s |
| 2025-09-28 04:48 UTC | mm/rmap: fix soft-dirty bit loss when remapping zero-filled mTHP subpage to shared zeropage | 1 | lance.yang@linux.dev | finished in 3h41m0s |
| 2025-09-27 08:06 UTC | mm: Fix some typos in mm module | 1 | jianyungao89@gmail.com | finished in 39m0s |
| 2025-09-26 09:24 UTC | mm: silence data-race in update_hiwater_rss | 1 | lance.yang@linux.dev | finished in 49m0s |
| 2025-09-26 08:06 UTC | alloc_tag: Fix boot failure due to NULL pointer dereference | 1 | ranxiaokai627@163.com | finished in 40m0s |
| 2025-09-23 09:16 UTC | reparent the THP split queue | 2 | zhengqi.arch@bytedance.com | finished in 3h44m0s |
| 2025-09-23 07:10 UTC | Improve UFFDIO_MOVE scalability by removing anon_vma lock | 2 | lokeshgidra@google.com | finished in 3h36m0s |
| 2025-09-22 17:03 UTC | mm/slab: Add size validation in kmalloc_array_* functions | 1 | viswanathiyyappan@gmail.com | finished in 3h44m0s |
| 2025-09-18 11:21 UTC | mm: Improve mlock tracking for large folios | 1 | kirill@shutemov.name | finished in 3h40m0s |
| 2025-09-18 05:51 UTC | Improve UFFDIO_MOVE scalability by removing anon_vma lock | 1 | lokeshgidra@google.com | finished in 1h6m0s |
| 2025-09-16 19:12 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 3 | jane.chu@oracle.com | finished in 3h50m0s |
| 2025-09-16 16:01 UTC | fixup: alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | skipped |
| 2025-09-16 07:22 UTC | mm/vmscan: Add readahead LRU to improve readahead file page reclamation efficiency | 0 | liulei.rjpt@vivo.com | finished in 3h41m0s |
| 2025-09-16 02:21 UTC | slab: Disallow kprobes in ___slab_alloc() | 1 | alexei.starovoitov@gmail.com | finished in 3h39m0s |
| 2025-09-16 01:46 UTC | slab: Clarify comments regarding pfmemalloc and NUMA preferences | 1 | alexei.starovoitov@gmail.com | finished in 36m0s |
| 2025-09-16 00:45 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 3 | jane.chu@oracle.com | finished in 3h38m0s |
| 2025-09-15 23:02 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 2 | surenb@google.com | finished in 43m0s |
| 2025-09-15 20:09 UTC | fixes for slab->obj_exts allocation failure handling | 1 | surenb@google.com | finished in 3h44m0s |
| 2025-09-15 13:55 UTC | slab: struct slab pointer validation improvements | 2 | vbabka@suse.cz | finished in 1h7m0s |
| 2025-09-11 17:02 UTC | slab: struct slab pointer validation improvements | 1 | vbabka@suse.cz | finished in 3h37m0s |
| 2025-09-10 19:27 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 2 | jane.chu@oracle.com | finished in 3h44m0s |
| 2025-09-10 11:54 UTC | Prepare slab for memdescs | 2 | willy@infradead.org | skipped |
| 2025-09-10 08:01 UTC | SLUB percpu sheaves | 8 | vbabka@suse.cz | finished in 28m0s [1 findings] |
| 2025-09-09 23:49 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | finished in 53m0s |
| 2025-09-09 18:43 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 1 | jane.chu@oracle.com | finished in 3h46m0s |
| 2025-09-09 07:48 UTC | mm/slub: Use folio_nr_pages() in __free_slab() | 1 | husong@kylinos.cn | finished in 3h43m0s |
| 2025-09-09 01:33 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 3 | ye.liu@linux.dev | finished in 42m0s |
| 2025-09-09 01:00 UTC | slab: Re-entrant kmalloc_nolock() | 5 | alexei.starovoitov@gmail.com | skipped |
| 2025-09-08 19:49 UTC | MAINTAINERS: add Jann Horn as rmap reviewer | 1 | lorenzo.stoakes@oracle.com | finished in 35m0s |
| 2025-09-08 14:05 UTC | mm/rmap: make num_children and num_active_vmas update in internally | 2 | yajun.deng@linux.dev | finished in 3h38m0s |
| 2025-09-08 09:42 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 2 | ye.liu@linux.dev | finished in 53m0s |
| 2025-09-08 07:19 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 1 | ye.liu@linux.dev | finished in 53m0s |
| 2025-09-08 04:49 UTC | mm: always call rmap_walk() on locked folios | 1 | lokeshgidra@google.com | finished in 3h40m0s |
| 2025-09-05 13:20 UTC | mm/rmap: make num_children and num_active_vmas update in internally | 1 | yajun.deng@linux.dev | finished in 1h27m0s [1 findings] |
| 2025-09-03 12:59 UTC | SLUB percpu sheaves | 7 | vbabka@suse.cz | finished in 3h57m0s |
| 2025-09-01 11:08 UTC | maple_tree: slub sheaves conversion | 1 | vbabka@suse.cz | finished in 46m0s [1 findings] |
| 2025-09-01 08:46 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 5 | yangshiguang1011@163.com | finished in 4h0m0s |
| 2025-08-30 02:09 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 4 | yangshiguang1011@163.com | finished in 3h40m0s |
| 2025-08-29 15:47 UTC | Prepare slab for memdescs | 1 | willy@infradead.org | skipped |
| 2025-08-27 11:37 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space | 1 | harry.yoo@oracle.com | finished in 3h49m0s |
| 2025-08-27 08:26 UTC | SLUB percpu sheaves | 6 | vbabka@suse.cz | finished in 3h53m0s |
| 2025-08-26 06:58 UTC | docs/mm: explain when and why rmap locks need to be taken during mremap() | 1 | harry.yoo@oracle.com | finished in 3h45m0s |
| 2025-08-26 06:23 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 2 | visitorckw@gmail.com | finished in 3h42m0s |
| 2025-08-25 15:44 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 1 | elver@google.com | finished in 3h51m0s |
| 2025-08-25 12:17 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 3 | yangshiguang1011@163.com | finished in 3h44m0s |
| 2025-08-25 01:34 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 1 | visitorckw@gmail.com | finished in 3h38m0s |
| 2025-08-22 02:07 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 3 | harry.yoo@oracle.com | skipped |
| 2025-08-21 11:57 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 2 | harry.yoo@oracle.com | skipped |
| 2025-08-21 09:35 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 1 | harry.yoo@oracle.com | skipped |
| 2025-08-19 08:00 UTC | test that rmap behaves as expected | 4 | richard.weiyang@gmail.com | skipped |
| 2025-08-18 17:53 UTC | mm/mremap: fix WARN with uffd that has remap events disabled | 1 | david@redhat.com | finished in 3h56m0s |
| 2025-08-18 02:29 UTC | assert rmap behaves as expected | 3 | richard.weiyang@gmail.com | finished in 3h45m0s |
| 2025-08-18 02:02 UTC | mm, x86: fix crash due to missing page table sync and make it harder to miss | 1 | harry.yoo@oracle.com | finished in 3h51m0s |
| 2025-08-17 15:17 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h49m0s |
| 2025-08-17 08:35 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h45m0s |
| 2025-08-17 03:26 UTC | mm/rmap: small cleanup for __folio_remove_rmap() | 2 | richard.weiyang@gmail.com | skipped |
| 2025-08-16 07:21 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 39m0s [1 findings] |
| 2025-08-15 09:05 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h45m0s |
| 2025-08-15 08:49 UTC | mm/rmap: small cleanup for __folio_remove_rmap() | 1 | richard.weiyang@gmail.com | skipped |
| 2025-08-14 20:05 UTC | mm/rmap: Always inline __folio_rmap_sanity_checks() | 1 | nathan@kernel.org | skipped |
| 2025-08-14 15:32 UTC | mm: slowtier page promotion based on PTE A bit | 1 | raghavendra.kt@amd.com | skipped |