Published | Title | Version | Author | Status |
---|---|---|---|---|
2025-09-11 17:02 UTC | slab: struct slab pointer validation improvements | 1 | vbabka@suse.cz | in progress |
2025-09-10 19:27 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 2 | jane.chu@oracle.com | finished in 3h44m0s |
2025-09-10 08:01 UTC | SLUB percpu sheaves | 8 | vbabka@suse.cz | finished in 28m0s [1 finding] |
2025-09-09 23:49 UTC | alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output | 1 | surenb@google.com | finished in 53m0s |
2025-09-09 18:43 UTC | mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count | 1 | jane.chu@oracle.com | finished in 3h46m0s |
2025-09-09 07:48 UTC | mm/slub: Use folio_nr_pages() in __free_slab() | 1 | husong@kylinos.cn | finished in 3h43m0s |
2025-09-09 01:33 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 3 | ye.liu@linux.dev | finished in 42m0s |
2025-09-09 01:00 UTC | slab: Re-entrant kmalloc_nolock() | 5 | alexei.starovoitov@gmail.com | skipped |
2025-09-08 19:49 UTC | MAINTAINERS: add Jann Horn as rmap reviewer | 1 | lorenzo.stoakes@oracle.com | finished in 35m0s |
2025-09-08 14:05 UTC | mm/rmap: make num_children and num_active_vmas update in internally | 2 | yajun.deng@linux.dev | finished in 3h38m0s |
2025-09-08 09:42 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 2 | ye.liu@linux.dev | finished in 53m0s |
2025-09-08 07:19 UTC | mm/slub: Refactor note_cmpxchg_failure for better readability | 1 | ye.liu@linux.dev | finished in 53m0s |
2025-09-08 04:49 UTC | mm: always call rmap_walk() on locked folios | 1 | lokeshgidra@google.com | finished in 3h40m0s |
2025-09-05 13:20 UTC | mm/rmap: make num_children and num_active_vmas update in internally | 1 | yajun.deng@linux.dev | finished in 1h27m0s [1 finding] |
2025-09-03 12:59 UTC | SLUB percpu sheaves | 7 | vbabka@suse.cz | finished in 3h57m0s |
2025-09-01 11:08 UTC | maple_tree: slub sheaves conversion | 1 | vbabka@suse.cz | finished in 46m0s [1 finding] |
2025-09-01 08:46 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 5 | yangshiguang1011@163.com | finished in 4h0m0s |
2025-08-30 02:09 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 4 | yangshiguang1011@163.com | finished in 3h40m0s |
2025-08-29 15:47 UTC | Prepare slab for memdescs | 1 | willy@infradead.org | skipped |
2025-08-27 11:37 UTC | mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space | 1 | harry.yoo@oracle.com | finished in 3h49m0s |
2025-08-27 08:26 UTC | SLUB percpu sheaves | 6 | vbabka@suse.cz | finished in 3h53m0s |
2025-08-26 06:58 UTC | docs/mm: explain when and why rmap locks need to be taken during mremap() | 1 | harry.yoo@oracle.com | finished in 3h45m0s |
2025-08-26 06:23 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 2 | visitorckw@gmail.com | finished in 3h42m0s |
2025-08-25 15:44 UTC | slab: support for compiler-assisted type-based slab cache partitioning | 1 | elver@google.com | finished in 3h51m0s |
2025-08-25 12:17 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 3 | yangshiguang1011@163.com | finished in 3h44m0s |
2025-08-25 01:34 UTC | mm/slub: Fix debugfs stack trace sorting and simplify sort call | 1 | visitorckw@gmail.com | finished in 3h38m0s |
2025-08-22 02:07 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 3 | harry.yoo@oracle.com | skipped |
2025-08-21 11:57 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 2 | harry.yoo@oracle.com | skipped |
2025-08-21 09:35 UTC | mm: fix KASAN build error due to p*d_populate_kernel() | 1 | harry.yoo@oracle.com | skipped |
2025-08-19 08:00 UTC | test that rmap behaves as expected | 4 | richard.weiyang@gmail.com | skipped |
2025-08-18 17:53 UTC | mm/mremap: fix WARN with uffd that has remap events disabled | 1 | david@redhat.com | finished in 3h56m0s |
2025-08-18 02:29 UTC | assert rmap behaves as expected | 3 | richard.weiyang@gmail.com | finished in 3h45m0s |
2025-08-18 02:02 UTC | mm, x86: fix crash due to missing page table sync and make it harder to miss | 1 | harry.yoo@oracle.com | finished in 3h51m0s |
2025-08-17 15:17 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h49m0s |
2025-08-17 08:35 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h45m0s |
2025-08-17 03:26 UTC | mm/rmap: small cleanup for __folio_remove_rmap() | 2 | richard.weiyang@gmail.com | skipped |
2025-08-16 07:21 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 39m0s [1 finding] |
2025-08-15 09:05 UTC | mm/migrate: Fix NULL movable_ops if CONFIG_ZSMALLOC=m | 1 | chenhuacai@loongson.cn | finished in 3h45m0s |
2025-08-15 08:49 UTC | mm/rmap: small cleanup for __folio_remove_rmap() | 1 | richard.weiyang@gmail.com | skipped |
2025-08-14 20:05 UTC | mm/rmap: Always inline __folio_rmap_sanity_checks() | 1 | nathan@kernel.org | skipped |
2025-08-14 15:32 UTC | mm: slowtier page promotion based on PTE A bit | 1 | raghavendra.kt@amd.com | skipped |
2025-08-14 11:16 UTC | mm: slub: avoid wake up kswapd in set_track_prepare | 2 | yangshiguang1011@163.com | finished in 3h43m0s |
2025-08-12 13:52 UTC | mm: remove redundant __GFP_NOWARN | 2 | rongqianfeng@vivo.com | finished in 3h44m0s |
2025-08-12 09:57 UTC | mm: remove redundant __GFP_NOWARN | 1 | rongqianfeng@vivo.com | finished in 3h43m0s |
2025-08-12 08:30 UTC | mempool: rename struct mempool_s to struct mempool | 1 | hch@lst.de | finished in 3h39m0s |
2025-08-11 05:34 UTC | mm, x86: fix crash due to missing page table sync and make it harder to miss | 1 | harry.yoo@oracle.com | finished in 3h41m0s |
2025-07-29 11:02 UTC | mm, page_pool: introduce a new page type for page pool in page type | 3 | byungchul@sk.com | skipped |
2025-07-28 08:20 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
2025-07-28 05:27 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
2025-07-21 05:49 UTC | mm, page_pool: introduce a new page type for page pool in page type | 1 | byungchul@sk.com | skipped |
2025-07-21 02:18 UTC | Split netmem from struct page | 12 | byungchul@sk.com | skipped |
2025-07-18 02:16 UTC | slab: Re-entrant kmalloc_nolock() | 4 | alexei.starovoitov@gmail.com | finished in 1m0s |
2025-07-17 07:00 UTC | Split netmem from struct page | 11 | byungchul@sk.com | skipped |
2025-07-14 12:00 UTC | Split netmem from struct page | 10 | byungchul@sk.com | skipped |
2025-07-10 08:28 UTC | Split netmem from struct page | 9 | byungchul@sk.com | finished in 3h34m0s |
2025-07-02 05:32 UTC | Split netmem from struct page | 8 | byungchul@sk.com | skipped |