| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2026-02-26 08:13 UTC | mm: khugepaged: simplify scanning progress in pmd | 1 | vernon2gm@gmail.com | - |
| 2026-02-26 03:22 UTC | khugepaged: mTHP support | 15 | npache@redhat.com | skipped |
| 2026-02-26 01:29 UTC | mm: khugepaged cleanups and mTHP prerequisites | 2 | npache@redhat.com | finished in 4h4m0s |
| 2026-02-22 00:45 UTC | mm/mmu_gather: define RCU version tlb_remove_table_one() in CONFIG_MMU_GATHER_RCU_TABLE_FREE | 1 | richard.weiyang@gmail.com | finished in 54m0s |
| 2026-02-17 16:10 UTC | Improve proc RSS accuracy | 17 | mathieu.desnoyers@efficios.com | finished in 4h2m0s |
| 2026-02-12 02:18 UTC | mm: khugepaged cleanups and mTHP prerequisites | 1 | npache@redhat.com | finished in 4h8m0s |
| 2026-02-05 03:31 UTC | mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp | 3 | richard.weiyang@gmail.com | finished in 3h49m0s |
| 2026-02-05 00:56 UTC | mm/memory_failure: reject unsupported non-folio compound page | 1 | ziy@nvidia.com | finished in 50m0s |
| 2026-02-04 00:42 UTC | mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp | 2 | richard.weiyang@gmail.com | finished in 4h0m0s |
| 2026-01-30 23:00 UTC | mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared thp | 1 | richard.weiyang@gmail.com | finished in 3h53m0s |
| 2026-01-30 15:53 UTC | Optimize zone->contiguous update | 9 | tianyou.li@intel.com | skipped |
| 2026-01-30 15:22 UTC | mm/memory hotplug: Fix zone->contiguous always false when hotplug | 1 | tianyou.li@intel.com | finished in 4h26m0s |
| 2026-01-30 15:07 UTC | mm/memory hotplug: Fix zone->contiguous always false when hotplug | 9 | tianyou.li@intel.com | finished in 4h28m0s |
| 2026-01-27 12:12 UTC | enable PT_RECLAIM on more 64-bit architectures | 4 | qi.zheng@linux.dev | finished in 52m0s |
| 2026-01-22 19:28 UTC | khugepaged: mTHP support | 14 | npache@redhat.com | finished in 3h59m0s |
| 2026-01-22 18:43 UTC | mm/mm_init: Don't cond_resched() in deferred_init_memmap_chunk() if called from deferred_grow_zone() | 3 | longman@redhat.com | finished in 56m0s |
| 2026-01-22 03:40 UTC | mm/mm_init: Don't call cond_resched() in deferred_init_memmap_chunk() if rcu_preempt_depth() set | 2 | longman@redhat.com | finished in 4h13m0s |
| 2026-01-21 19:10 UTC | mm/mm_init: Don't call cond_resched() in deferred_init_memmap_chunk() if rcu_preempt_depth() set | 1 | longman@redhat.com | finished in 4h1m0s |
| 2026-01-20 13:47 UTC | Optimize zone->contiguous update and issue fix | 8 | tianyou.li@intel.com | finished in 4h6m0s |
| 2026-01-18 19:22 UTC | mm/khugepaged: cleanups and scan limit fix | 1 | shivankg@amd.com | skipped |
| 2026-01-14 14:59 UTC | Improve proc RSS accuracy and OOM killer latency | 16 | mathieu.desnoyers@efficios.com | finished in 30m0s |
| 2026-01-14 14:36 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 2 | mathieu.desnoyers@efficios.com | finished in 4h7m0s |
| 2026-01-13 20:04 UTC | mm: Reduce latency of OOM killer task selection | 15 | mathieu.desnoyers@efficios.com | finished in 32m0s |
| 2026-01-13 19:47 UTC | mm: Fix OOM killer and proc stats inaccuracy on large many-core systems | 1 | mathieu.desnoyers@efficios.com | finished in 3h55m0s |
| 2026-01-12 20:00 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 14 | mathieu.desnoyers@efficios.com | finished in 4h5m0s |
| 2026-01-11 19:49 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 13 | mathieu.desnoyers@efficios.com | finished in 3h52m0s |
| 2026-01-11 15:02 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 12 | mathieu.desnoyers@efficios.com | finished in 4h0m0s |
| 2026-01-04 05:41 UTC | Improve khugepaged scan logic | 3 | vernon2gm@gmail.com | finished in 3h56m0s |
| 2025-12-31 03:00 UTC | mm/mmu_gather: remove @delay_remap of __tlb_remove_page_size() | 1 | richard.weiyang@gmail.com | finished in 4h1m0s |
| 2025-12-29 05:51 UTC | Improve khugepaged scan logic | 2 | vernon2gm@gmail.com | finished in 3h54m0s [1 findings] |
| 2025-12-25 21:02 UTC | mm/vmstat: remove unused node and zone state helpers | 2 | richard.weiyang@gmail.com | finished in 3h51m0s |
| 2025-12-25 01:54 UTC | mm: remove unused function inc_node_state() | 1 | richard.weiyang@gmail.com | finished in 3h52m0s |
| 2025-12-24 17:46 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 11 | mathieu.desnoyers@efficios.com | skipped |
| 2025-12-24 11:13 UTC | mm/khugepaged: cleanups and scan limit fix | 1 | shivankg@amd.com | finished in 3h54m0s |
| 2025-12-23 12:25 UTC | mm/huge_memory: consolidate order-related checks into folio_check_splittable() | 2 | richard.weiyang@gmail.com | finished in 4h0m0s [1 findings] |
| 2025-12-22 14:09 UTC | Optimize zone->contiguous update and issue fix | 7 | tianyou.li@intel.com | finished in 4h0m0s |
| 2025-12-22 13:45 UTC | mm/huge_memory: simplify page tracking in remap_page() during folio split | 3 | richard.weiyang@gmail.com | finished in 3h54m0s |
| 2025-12-21 12:46 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 7 | gourry@gourry.net | finished in 3h55m0s |
| 2025-12-19 23:38 UTC | vfio: selftests: Clean up <uapi/linux/types.h> includes | 1 | dmatlack@google.com | finished in 54m0s |
| 2025-12-18 23:38 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 6 | gourry@gourry.net | finished in 3h56m0s |
| 2025-12-18 19:08 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 5 | gourry@gourry.net | finished in 3h52m0s |
| 2025-12-15 12:12 UTC | Optimize zone->contiguous update and issue fix | 6 | tianyou.li@intel.com | skipped |
| 2025-12-15 08:10 UTC | mm/page_alloc: change all pageblocks migrate type on coalescing | 2 | agordeev@linux.ibm.com | finished in 3h47m0s |
| 2025-12-15 00:48 UTC | mm/huge_memory: use end_folio to terminate anonymous folio remapping | 2 | richard.weiyang@gmail.com | finished in 3h51m0s |
| 2025-12-08 14:34 UTC | Optimize zone->contiguous update and issue fix | 5 | tianyou.li@intel.com | finished in 4h4m0s |
| 2025-12-03 06:30 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 4 | gourry@gourry.net | finished in 38m0s [1 findings] |
| 2025-12-01 17:46 UTC | khugepaged: mTHP support | 13 | npache@redhat.com | finished in 3h53m0s |
| 2025-12-01 12:25 UTC | mm/memory hotplug/unplug: Optimize zone->contiguous update when changes pfn range | 4 | tianyou.li@intel.com | finished in 3h50m0s |
| 2025-11-27 02:24 UTC | mm/huge_memory: use end_folio to terminate anonymous folio remapping | 1 | richard.weiyang@gmail.com | finished in 3h54m0s |
| 2025-11-26 21:06 UTC | Improve folio split related functions | 4 | ziy@nvidia.com | skipped |
| 2025-11-26 06:47 UTC | mm: use standard page table accessors | 1 | richard.weiyang@gmail.com | finished in 3h55m0s |
| 2025-11-26 03:50 UTC | Improve folio split related functions | 3 | ziy@nvidia.com | skipped |
| 2025-11-22 02:55 UTC | Improve folio split related functions | 2 | ziy@nvidia.com | finished in 3h50m0s |
| 2025-11-21 19:15 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 3 | gourry@gourry.net | finished in 4h0m0s |
| 2025-11-20 21:03 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 9 | mathieu.desnoyers@efficios.com | finished in 41m0s |
| 2025-11-19 23:53 UTC | mm/huge_memory: fix NULL pointer dereference when splitting folio | 2 | richard.weiyang@gmail.com | finished in 26m0s |
| 2025-11-19 13:13 UTC | mm/memory hotplug/unplug: Optimize zone->contiguous update when move pfn range | 3 | tianyou.li@intel.com | finished in 1h3m0s [1 findings] |
| 2025-11-19 01:26 UTC | mm/huge_memory: fix NULL pointer dereference when splitting shmem folio in swap cache | 1 | richard.weiyang@gmail.com | finished in 29m0s |
| 2025-11-14 07:57 UTC | mm/huge_memory: consolidate order-related checks into folio_split_supported() | 1 | richard.weiyang@gmail.com | finished in 54m0s |
| 2025-11-14 03:00 UTC | unify PMD scan results and remove redundant cleanup | 2 | richard.weiyang@gmail.com | finished in 58m0s |
| 2025-11-13 01:45 UTC | riscv: Memory type control for platforms with physical memory aliases | 3 | samuel.holland@sifive.com | finished in 37m0s [1 findings] |
| 2025-11-12 02:00 UTC | mm/khugepaged: continue to collapse on SCAN_PMD_NONE | 1 | richard.weiyang@gmail.com | finished in 49m0s |
| 2025-11-10 08:17 UTC | reparent the THP split queue | 6 | qi.zheng@linux.dev | skipped |
| 2025-11-07 17:22 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 8 | mathieu.desnoyers@efficios.com | finished in 3h41m0s |
| 2025-11-06 03:41 UTC | mm/huge_memory: Define split_type and consolidate split support checks | 3 | richard.weiyang@gmail.com | skipped |
| 2025-11-05 16:29 UTC | mm/huge_memory: fix folio split check for anon folios in swapcache. | 1 | ziy@nvidia.com | finished in 4h16m0s |
| 2025-11-05 07:25 UTC | mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() | 2 | richard.weiyang@gmail.com | finished in 3h56m0s |
| 2025-11-01 02:11 UTC | mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() | 1 | richard.weiyang@gmail.com | finished in 3h48m0s |
| 2025-11-01 00:29 UTC | mm/huge_memory: Modularize and simplify folio splitting paths | 1 | richard.weiyang@gmail.com | finished in 3h45m0s |
| 2025-10-31 16:19 UTC | Optimize folio split in memory failure | 5 | ziy@nvidia.com | finished in 3h56m0s |
| 2025-10-31 14:42 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 7 | mathieu.desnoyers@efficios.com | finished in 3h45m0s |
| 2025-10-30 01:40 UTC | Optimize folio split in memory failure | 4 | ziy@nvidia.com | finished in 3h51m0s |
| 2025-10-24 19:28 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc. | 3 | gourry@gourry.net | finished in 3h48m0s |
| 2025-10-23 03:05 UTC | mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order | 4 | ziy@nvidia.com | finished in 3h43m0s |
| 2025-10-22 18:37 UTC | khugepaged: mTHP support | 12 | npache@redhat.com | finished in 3h54m0s |
| 2025-10-22 03:35 UTC | Optimize folio split in memory failure | 3 | ziy@nvidia.com | finished in 3h55m0s |
| 2025-10-21 21:21 UTC | mm/huge_memory: cleanup __split_unmapped_folio() | 3 | richard.weiyang@gmail.com | finished in 3h42m0s |
| 2025-10-20 15:11 UTC | mm/khugepaged: guard is_zero_pfn() calls with pte_present() | 3 | lance.yang@linux.dev | finished in 3h57m0s |
| 2025-10-17 09:38 UTC | mm/khugepaged: guard is_zero_pfn() calls with pte_present() | 2 | lance.yang@linux.dev | finished in 4h1m0s |
| 2025-10-17 01:36 UTC | mm/huge_memory: do not change split_huge_page*() target order silently. | 3 | ziy@nvidia.com | skipped |
| 2025-10-16 10:44 UTC | selftests: complete kselftest include centralization | 3 | reddybalavignesh9979@gmail.com | finished in 1h6m0s |
| 2025-10-16 03:34 UTC | Do not change split folio target order | 2 | ziy@nvidia.com | skipped |
| 2025-10-16 00:46 UTC | mm/huge_memory: cleanup __split_unmapped_folio() | 2 | richard.weiyang@gmail.com | finished in 3h44m0s |
| 2025-10-15 09:29 UTC | mm/khugepaged: fix comment for default scan sleep duration | 1 | lianux.mm@gmail.com | finished in 42m0s |
| 2025-10-15 06:49 UTC | mm: prevent poison consumption when splitting THP | 4 | qiuxu.zhuo@intel.com | finished in 4h0m0s |
| 2025-10-15 06:43 UTC | mm/khugepaged: fix comment for default scan sleep duration | 1 | lianux.mm@gmail.com | finished in 54m0s |
| 2025-10-14 14:19 UTC | mm: prevent poison consumption when splitting THP | 3 | qiuxu.zhuo@intel.com | finished in 3h52m0s |
| 2025-10-14 13:46 UTC | mm/huge_memory: cleanup __split_unmapped_folio() | 1 | richard.weiyang@gmail.com | finished in 4h12m0s |
| 2025-10-10 14:11 UTC | mm/huge_memory: only get folio_order() once during __folio_split() | 1 | richard.weiyang@gmail.com | skipped |
| 2025-10-08 09:54 UTC | mm/huge_memory: cleanup for pmd folio installation | 3 | richard.weiyang@gmail.com | finished in 4h3m0s |
| 2025-10-08 04:37 UTC | refactor and merge PTE scanning logic | 3 | lance.yang@linux.dev | skipped |
| 2025-10-08 03:26 UTC | mm/khugepaged: abort collapse scan on non-swap entries | 3 | lance.yang@linux.dev | finished in 3h50m0s |
| 2025-10-07 00:50 UTC | mm/khugepaged: unify pmd folio installation with map_anon_folio_pmd() | 2 | richard.weiyang@gmail.com | skipped |
| 2025-10-06 14:43 UTC | mm/khugepaged: refactor and merge PTE scanning logic | 2 | lance.yang@linux.dev | skipped |
| 2025-10-04 09:25 UTC | mm/khugepaged: use map_anon_folio_pmd() in collapse_huge_page() | 1 | richard.weiyang@gmail.com | skipped |
| 2025-10-02 07:32 UTC | mm/khugepaged: refactor and merge PTE scanning logic | 1 | lance.yang@linux.dev | skipped |
| 2025-10-02 03:31 UTC | mm/compaction: some fix for the range passed to pageblock_pfn_to_page() | 1 | richard.weiyang@gmail.com | finished in 3h42m0s |
| 2025-10-02 01:38 UTC | mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd() | 2 | richard.weiyang@gmail.com | finished in 3h44m0s |
| 2025-10-01 14:28 UTC | mm/huge_memory: add pmd folio to ds_queue in do_huge_zero_wp_pmd() | 1 | richard.weiyang@gmail.com | finished in 3h42m0s |
| 2025-10-01 09:18 UTC | mm_slot: following fixup for usage of mm_slot_entry() | 2 | richard.weiyang@gmail.com | finished in 3h46m0s |