| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2025-10-14 08:40 UTC | slab: fix clearing freelist in free_deferred_objects() | 1 | vbabka@suse.cz | finished in 3h49m0s |
| 2025-10-11 08:45 UTC | slab: fix barn NULL pointer dereference on memoryless nodes | 1 | vbabka@suse.cz | finished in 3h39m0s |
| 2025-10-07 05:25 UTC | slub: Don't call lockdep_unregister_key() for immature kmem_cache. | 1 | kuniyu@google.com | finished in 3h47m0s |
| 2025-10-03 16:53 UTC | reparent the THP split queue | 4 | qi.zheng@linux.dev | finished in 3h48m0s |
| 2025-10-02 08:12 UTC | DEPT(DEPendency Tracker) | 17 | byungchul@sk.com | finished in 3h43m0s |
| 2025-09-30 08:34 UTC | slab: Fix using this_cpu_ptr() in preemptible context | 1 | ranxiaokai627@163.com | finished in 4h38m0s |
| 2025-09-30 08:10 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 5 | lance.yang@linux.dev | finished in 3h45m0s |
| 2025-09-30 07:10 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 4 | lance.yang@linux.dev | finished in 3h41m0s [1 findings] |
| 2025-09-30 06:38 UTC | slab: Add allow_spin check to eliminate kmemleak warnings | 1 | ranxiaokai627@163.com | finished in 42m0s [1 findings] |
| 2025-09-30 06:05 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 3 | lance.yang@linux.dev | finished in 3h46m0s [1 findings] |
| 2025-09-30 04:33 UTC | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | 2 | lance.yang@linux.dev | finished in 3h44m0s |
| 2025-09-29 00:26 UTC | mm: Fix some typos in mm module | 2 | jianyungao89@gmail.com | finished in 38m0s |
| 2025-09-28 11:16 UTC | reparent the THP split queue | 3 | zhengqi.arch@bytedance.com | finished in 3h42m0s |
| 2025-09-28 04:48 UTC | mm/rmap: fix soft-dirty bit loss when remapping zero-filled mTHP subpage to shared zeropage | 1 | lance.yang@linux.dev | finished in 3h41m0s |
| 2025-09-27 08:06 UTC | mm: Fix some typos in mm module | 1 | jianyungao89@gmail.com | finished in 39m0s |
| 2025-09-26 09:24 UTC | mm: silence data-race in update_hiwater_rss | 1 | lance.yang@linux.dev | finished in 49m0s |
| 2025-09-26 08:06 UTC | alloc_tag: Fix boot failure due to NULL pointer dereference | 1 | ranxiaokai627@163.com | finished in 40m0s |
| 2025-09-23 09:16 UTC | reparent the THP split queue | 2 | zhengqi.arch@bytedance.com | finished in 3h44m0s |
| 2025-09-23 07:10 UTC | Improve UFFDIO_MOVE scalability by removing anon_vma lock | 2 | lokeshgidra@google.com | finished in 3h36m0s |
| 2025-09-22 17:03 UTC | mm/slab: Add size validation in kmalloc_array_* functions | 1 | viswanathiyyappan@gmail.com | finished in 3h44m0s |
| 2025-09-18 11:21 UTC | mm: Improve mlock tracking for large folios | 1 | kirill@shutemov.name | finished in 3h40m0s |
| 2025-09-18 05:51 UTC | Improve UFFDIO_MOVE scalability by removing anon_vma lock | 1 | lokeshgidra@google.com | finished in 1h6m0s |
| 2025-09-16 19:12 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 3 | jane.chu@oracle.com | finished in 3h50m0s |
| 2025-09-16 16:01 UTC | fixup: alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | skipped |
| 2025-09-16 07:22 UTC | mm/vmscan: Add readahead LRU to improve readahead file page reclamation efficiency | 0 | liulei.rjpt@vivo.com | finished in 3h41m0s |
| 2025-09-16 02:21 UTC | slab: Disallow kprobes in ___slab_alloc() | 1 | alexei.starovoitov@gmail.com | finished in 3h39m0s |
| 2025-09-16 01:46 UTC | slab: Clarify comments regarding pfmemalloc and NUMA preferences | 1 | alexei.starovoitov@gmail.com | finished in 36m0s |
| 2025-09-16 00:45 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 3 | jane.chu@oracle.com | finished in 3h38m0s |
| 2025-09-15 23:02 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 2 | surenb@google.com | finished in 43m0s |
| 2025-09-15 20:09 UTC | fixes for slab->obj_exts allocation failure handling | 1 | surenb@google.com | finished in 3h44m0s |
| 2025-09-15 13:55 UTC | slab: struct slab pointer validation improvements | 2 | vbabka@suse.cz | finished in 1h7m0s |
| 2025-09-11 17:02 UTC | slab: struct slab pointer validation improvements | 1 | vbabka@suse.cz | finished in 3h37m0s |
| 2025-09-10 19:27 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 2 | jane.chu@oracle.com | finished in 3h44m0s |
| 2025-09-10 11:54 UTC | Prepare slab for memdescs | 2 | willy@infradead.org | skipped |
| 2025-09-10 08:01 UTC | SLUB percpu sheaves | 8 | vbabka@suse.cz | finished in 28m0s [1 findings] |
| 2025-09-09 23:49 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | finished in 53m0s |
| 2025-09-09 18:43 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 1 | jane.chu@oracle.com | finished in 3h46m0s |
| 2025-09-09 07:48 UTC | mm/slub: Use folio_nr_pages() in __free_slab() | 1 | husong@kylinos.cn | finished in 3h43m0s |
| 2025-09-09 01:33 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 3 | ye.liu@linux.dev | finished in 42m0s |
| 2025-09-09 01:00 UTC | slab: Re-entrant kmalloc_nolock() | 5 | alexei.starovoitov@gmail.com | skipped |
| 2025-09-08 19:49 UTC | MAINTAINERS: add Jann Horn as rmap reviewer | 1 | lorenzo.stoakes@oracle.com | finished in 35m0s |
| 2025-09-08 14:05 UTC | mm/rmap: make num_children and num_active_vmas update in internally | 2 | yajun.deng@linux.dev | finished in 3h38m0s |
| 2025-09-08 09:42 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 2 | ye.liu@linux.dev | finished in 53m0s |
| 2025-09-08 07:19 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 1 | ye.liu@linux.dev | finished in 53m0s |
| 2025-09-08 04:49 UTC | mm: always call rmap_walk() on locked folios | 1 | lokeshgidra@google.com | finished in 3h40m0s |
| 2025-09-05 13:20 UTC | mm/rmap: make num_children and num_active_vmas update in internally | 1 | yajun.deng@linux.dev | finished in 1h27m0s [1 findings] |
| 2025-09-03 12:59 UTC | SLUB percpu sheaves | 7 | vbabka@suse.cz | finished in 3h57m0s |
| 2025-09-01 11:08 UTC | maple_tree: slub sheaves conversion | 1 | vbabka@suse.cz | finished in 46m0s [1 findings] |
| 2025-09-01 08:46 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 5 | yangshiguang1011@163.com | finished in 4h0m0s |
| 2025-08-30 02:09 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 4 | yangshiguang1011@163.com | finished in 3h40m0s |
| 2025-08-29 15:47 UTC | Prepare slab for memdescs | 1 | willy@infradead.org | skipped |
| 2025-08-27 11:37 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space | 1 | harry.yoo@oracle.com | finished in 3h49m0s |
| 2025-08-27 08:26 UTC | SLUB percpu sheaves | 6 | vbabka@suse.cz | finished in 3h53m0s |
| 2025-08-26 06:58 UTC | docs/mm: explain when and why rmap locks need to be taken during mremap() | 1 | harry.yoo@oracle.com | finished in 3h45m0s |
| 2025-08-26 06:23 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 2 | visitorckw@gmail.com | finished in 3h42m0s |
| 2025-08-25 15:44 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 1 | elver@google.com | finished in 3h51m0s |
| 2025-08-25 12:17 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 3 | yangshiguang1011@163.com | finished in 3h44m0s |
| 2025-08-25 01:34 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 1 | visitorckw@gmail.com | finished in 3h38m0s |
| 2025-08-22 02:07 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 3 | harry.yoo@oracle.com | skipped |
| 2025-08-21 11:57 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 2 | harry.yoo@oracle.com | skipped |
| 2025-08-21 09:35 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 1 | harry.yoo@oracle.com | skipped |
| 2025-08-19 08:00 UTC | test that rmap behaves as expected | 4 | richard.weiyang@gmail.com | skipped |
| 2025-08-18 17:53 UTC | mm/mremap: fix WARN with uffd that has remap events disabled | 1 | david@redhat.com | finished in 3h56m0s |
| 2025-08-18 02:29 UTC | assert rmap behaves as expected | 3 | richard.weiyang@gmail.com | finished in 3h45m0s |
| 2025-08-18 02:02 UTC | mm, x86: fix crash due to missing page table sync and make it harder to miss | 1 | harry.yoo@oracle.com | finished in 3h51m0s |
| 2025-08-17 15:17 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h49m0s |
| 2025-08-17 08:35 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h45m0s |
| 2025-08-17 03:26 UTC | mm/rmap: small cleanup for __folio_remove_rmap() | 2 | richard.weiyang@gmail.com | skipped |
| 2025-08-16 07:21 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 39m0s [1 findings] |
| 2025-08-15 09:05 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h45m0s |
| 2025-08-15 08:49 UTC | mm/rmap: small cleanup for __folio_remove_rmap() | 1 | richard.weiyang@gmail.com | skipped |
| 2025-08-14 20:05 UTC | mm/rmap: Always inline __folio_rmap_sanity_checks() | 1 | nathan@kernel.org | skipped |
| 2025-08-14 15:32 UTC | mm: slowtier page promotion based on PTE A bit | 1 | raghavendra.kt@amd.com | skipped |
| 2025-08-14 11:16 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 2 | yangshiguang1011@163.com | finished in 3h43m0s |
| 2025-08-12 13:52 UTC | mm: remove redundant __GFP_NOWARN | 2 | rongqianfeng@vivo.com | finished in 3h44m0s |
| 2025-08-12 09:57 UTC | mm: remove redundant __GFP_NOWARN | 1 | rongqianfeng@vivo.com | finished in 3h43m0s |
| 2025-08-12 08:30 UTC | mempool: rename struct mempool_s to struct mempool | 1 | hch@lst.de | finished in 3h39m0s |
| 2025-08-11 05:34 UTC | mm, x86: fix crash due to missing page table sync and make it harder to miss | 1 | harry.yoo@oracle.com | finished in 3h41m0s |
| 2025-07-29 11:02 UTC | mm, page_pool: introduce a new page type for page pool in page type | 3 | byungchul@sk.com | skipped |
| 2025-07-28 08:20 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
| 2025-07-28 05:27 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
| 2025-07-21 05:49 UTC | mm, page_pool: introduce a new page type for page pool in page type | 1 | byungchul@sk.com | skipped |
| 2025-07-21 02:18 UTC | Split netmem from struct page | 12 | byungchul@sk.com | skipped |
| 2025-07-18 02:16 UTC | slab: Re-entrant kmalloc_nolock() | 4 | alexei.starovoitov@gmail.com | finished in 1m0s |
| 2025-07-17 07:00 UTC | Split netmem from struct page | 11 | byungchul@sk.com | skipped |
| 2025-07-14 12:00 UTC | Split netmem from struct page | 10 | byungchul@sk.com | skipped |
| 2025-07-10 08:28 UTC | Split netmem from struct page | 9 | byungchul@sk.com | finished in 3h34m0s |
| 2025-07-02 05:32 UTC | Split netmem from struct page | 8 | byungchul@sk.com | skipped |