| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2025-12-12 16:18 UTC | introduce pagetable_alloc_nolock() | 1 | yeoreum.yun@arm.com | - |
| 2025-12-11 12:59 UTC | iommu: Add IOMMU_DEBUG_PAGEALLOC sanitizer | 4 | smostafa@google.com | finished in 3h57m0s |
| 2025-12-09 05:44 UTC | mm/vmalloc: clarify why vmap_range_noflush() might sleep | 2 | jackmanb@google.com | finished in 3h51m0s |
| 2025-12-08 05:19 UTC | mm/vmalloc: clarify why vmap_range_noflush() might sleep | 1 | jackmanb@google.com | finished in 40m0s [1 findings] |
| 2025-12-05 23:32 UTC | Deprecate zone_reclaim_mode | 1 | joshua.hahnjy@gmail.com | finished in 4h4m0s |
| 2025-12-05 16:57 UTC | Direct Map Removal Support for guest_memfd | 8 | kalyazin@amazon.co.uk | finished in 3h58m0s |
| 2025-12-03 14:41 UTC | KVM: pfncache: Support guest_memfd without direct map | 1 | itazur@amazon.com | finished in 37m0s [1 findings] |
| 2025-12-03 06:30 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 4 | gourry@gourry.net | finished in 38m0s [1 findings] |
| 2025-12-02 02:57 UTC | cgroup: switch to css_is_online() helper | 1 | chenridong@huaweicloud.com | finished in 45m0s |
| 2025-12-01 09:31 UTC | printk: add macros to simplify handling struct va_format | 2 | andrzej.hajda@intel.com | finished in 3h56m0s |
| 2025-12-01 06:00 UTC | mm/page_alloc: make percpu_pagelist_high_fraction reads lock-free | 1 | aboorvad@linux.ibm.com | finished in 3h47m0s |
| 2025-11-28 03:11 UTC | mm: add per-migratetype counts to buddy allocator and optimize pagetypeinfo access | 1 | zhanghongru06@gmail.com | finished in 4h1m0s |
| 2025-11-26 11:35 UTC | printk: add macros to simplify handling struct va_format | 1 | andrzej.hajda@intel.com | finished in 4h4m0s |
| 2025-11-24 20:08 UTC | iommu: Add IOMMU_DEBUG_PAGEALLOC sanitizer | 3 | smostafa@google.com | finished in 3h58m0s |
| 2025-11-21 19:27 UTC | mm, hugetlb: implement movable_gigantic_pages sysctl | 2 | gourry@gourry.net | finished in 4h7m0s |
| 2025-11-21 19:15 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc | 3 | gourry@gourry.net | finished in 4h0m0s |
| 2025-11-15 03:02 UTC | mm/page_alloc: optimize lowmem_reserve max lookup using its semantic monotonicity | 2 | fujunjie1@qq.com | finished in 52m0s |
| 2025-11-14 15:18 UTC | KVM: guest_memfd: use write for population | 7 | kalyazin@amazon.co.uk | finished in 3h53m0s |
| 2025-11-14 10:40 UTC | mm/page_alloc: optimize lowmem_reserve max lookup using monotonicity | 1 | fujunjie1@qq.com | finished in 59m0s |
| 2025-11-13 23:37 UTC | x86/bugs: KVM: L1TF and MMIO Stale Data cleanups | 5 | seanjc@google.com | skipped |
| 2025-11-13 03:46 UTC | selftests/mm: fix division-by-zero in uffd-unit-tests | 1 | cmllamas@google.com | finished in 52m0s |
| 2025-11-12 19:29 UTC | Specific Purpose Memory NUMA Nodes | 2 | gourry@gourry.net | finished in 3h47m0s |
| 2025-11-07 22:49 UTC | Protected Memory NUMA Nodes | 1 | gourry@gourry.net | skipped |
| 2025-11-06 16:39 UTC | iommu: Add IOMMU_DEBUG_PAGEALLOC sanitizer | 2 | smostafa@google.com | finished in 2h22m0s |
| 2025-11-05 08:56 UTC | mm/page_alloc: don't warn about large allocations with __GFP_NOFAIL | 2 | libaokun@huaweicloud.com | finished in 3h47m0s |
| 2025-11-05 07:41 UTC | mm/page_alloc: don't warn about large allocations with __GFP_NOFAIL | 1 | libaokun@huaweicloud.com | finished in 4h8m0s |
| 2025-10-31 06:13 UTC | mm: allow __GFP_NOFAIL allocation up to BLK_MAX_BLOCK_SIZE to support LBS | 1 | libaokun@huaweicloud.com | finished in 3h53m0s |
| 2025-10-31 00:30 UTC | x86/bugs: KVM: L1TF and MMIO Stale Data cleanups | 4 | seanjc@google.com | skipped |
| 2025-10-29 21:26 UTC | Unify VERW mitigation for guests | 1 | pawan.kumar.gupta@linux.intel.com | finished in 3h40m0s |
| 2025-10-24 19:28 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc. | 3 | gourry@gourry.net | finished in 3h48m0s |
| 2025-10-23 11:59 UTC | mm: hugetlb: allocate frozen gigantic folio | 4 | wangkefeng.wang@huawei.com | finished in 34m0s [1 findings] |
| 2025-10-21 09:50 UTC | mm/page_alloc: Consider PCP pages as part of pfmemalloc_reserve | 1 | zhongjinji@honor.com | skipped |
| 2025-10-20 21:08 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc. | 2 | gourry@gourry.net | finished in 3h50m0s |
| 2025-10-20 17:06 UTC | page_alloc: allow migration of smaller hugepages during contig_alloc. | 1 | gourry@gourry.net | finished in 3h45m0s |
| 2025-10-20 14:30 UTC | mm, vc_screen: move __free() handler that frees a page to a common header | 1 | rppt@kernel.org | finished in 20h30m0s |
| 2025-10-18 09:30 UTC | mm: treewide: make get_free_pages() and return void * | 1 | rppt@kernel.org | skipped |
| 2025-10-16 20:04 UTC | KVM: VMX: Unify L1D flush for L1TF | 3 | seanjc@google.com | finished in 35m0s [1 findings] |
| 2025-10-15 17:50 UTC | mm/page_isolation: clarify FIXME around shrink_slab() in memory hotplug | 1 | manish1588@gmail.com | finished in 43m0s |
| 2025-10-15 17:50 UTC | mm/page_alloc: simplify and cleanup pcp locking | 2 | vbabka@suse.cz | skipped |
| 2025-10-15 17:13 UTC | KVM: x86: Unify L1TF flushing under per-CPU variable | 2 | jackmanb@google.com | finished in 3h40m0s |
| 2025-10-15 09:36 UTC | mm/page_alloc: simplify and cleanup pcp locking | 1 | vbabka@suse.cz | skipped |
| 2025-10-14 14:50 UTC | mm/page_alloc: Batch callers of free_pcppages_bulk | 5 | joshua.hahnjy@gmail.com | finished in 3h47m0s |
| 2025-10-13 19:08 UTC | mm/page_alloc: Batch callers of free_pcppages_bulk | 4 | joshua.hahnjy@gmail.com | finished in 25m0s |
| 2025-10-13 15:20 UTC | KVM: x86: Unify L1TF flushing under per-CPU variable | 1 | jackmanb@google.com | finished in 3h43m0s [1 findings] |
| 2025-10-13 15:13 UTC | KVM: x86: selftests: add L1TF exploit test | 1 | jackmanb@google.com | finished in 39m0s |
| 2025-10-13 13:38 UTC | mm: hugetlb: allocate frozen gigantic folio | 3 | wangkefeng.wang@huawei.com | finished in 3h50m0s |
| 2025-10-13 10:16 UTC | mm: net: disable kswapd for high-order network buffer allocation | 1 | 21cnbao@gmail.com | finished in 4h1m0s |
| 2025-10-11 06:20 UTC | mm: vmscan: wakeup kswapd during node_reclaim | 1 | mawupeng1@huawei.com | finished in 3h45m0s |
| 2025-10-09 19:29 UTC | mm/page_alloc: pcp->batch cleanups | 2 | joshua.hahnjy@gmail.com | finished in 3h49m0s |
| 2025-10-07 19:12 UTC | KVM: selftests: Don't fall over when only one CPU | 1 | jackmanb@google.com | finished in 47m0s |
| 2025-10-03 17:32 UTC | iommu: Add IOMMU_DEBUG_PAGEALLOC sanitizer | 1 | smostafa@google.com | finished in 3h46m0s |
| 2025-10-02 20:46 UTC | mm/page_alloc: Batch callers of free_pcppages_bulk | 3 | joshua.hahnjy@gmail.com | finished in 3h46m0s |
| 2025-10-02 03:31 UTC | mm/compaction: some fix for the range passed to pageblock_pfn_to_page() | 1 | richard.weiyang@gmail.com | finished in 3h42m0s |
| 2025-10-01 17:56 UTC | mm/page_owner: add debugfs files 'show_handles' and 'show_stacks_handles' | 2 | mfo@igalia.com | finished in 3h48m0s |
| 2025-09-30 09:21 UTC | mm/page_owner: Rename proc-prefixed variables for clarity | 1 | husong@kylinos.cn | finished in 3h50m0s |
| 2025-09-25 08:50 UTC | mm/page_alloc: Cleanup for __del_page_from_free_list() | 0 | zhongjinji@honor.com | finished in 3h44m0s |
| 2025-09-24 20:44 UTC | mm/page_alloc: Batch callers of free_pcppages_bulk | 2 | joshua.hahnjy@gmail.com | finished in 3h43m0s |
| 2025-09-24 17:40 UTC | mm/page_owner: add options 'print_handle' and 'print_stack' for 'show_stacks' | 1 | mfo@igalia.com | finished in 3h59m0s |
| 2025-09-24 14:59 UTC | mm: ASI direct map management | 1 | jackmanb@google.com | finished in 40m0s [1 findings] |
| 2025-09-23 00:19 UTC | mm/page_alloc: fix alignment for alloc_contig_pages_noprof() | 1 | richard.weiyang@gmail.com | skipped |
| 2025-09-19 19:52 UTC | mm/page_alloc: Batch callers of free_pcppages_bulk | 1 | joshua.hahnjy@gmail.com | finished in 3h42m0s |
| 2025-09-19 16:21 UTC | mm: page_alloc: avoid kswapd thrashing due to NUMA restrictions | 1 | hannes@cmpxchg.org | finished in 3h46m0s |
| 2025-09-18 18:14 UTC | mm/show_mem: update printk/pr_info messages and replace legacy printk(KERN_CONT ...) with pr_cont() | 1 | manish1588@gmail.com | finished in 45m0s |
| 2025-09-18 13:19 UTC | mm: hugetlb: allocate frozen gigantic folio | 2 | wangkefeng.wang@huawei.com | finished in 3h56m0s |
| 2025-09-16 07:22 UTC | mm/vmscan: Add readahead LRU to improve readahead file page reclamation efficiency | 0 | liulei.rjpt@vivo.com | finished in 3h41m0s |
| 2025-09-11 06:56 UTC | mm: hugetlb: allocate frozen gigantic folio | 1 | wangkefeng.wang@huawei.com | skipped |
| 2025-09-10 13:39 UTC | mm: hugetlb: cleanup hugetlb folio allocation | 3 | wangkefeng.wang@huawei.com | finished in 3h54m0s |
| 2025-09-10 09:22 UTC | mm/compaction: fix low_pfn advance on isolating hugetlb | 1 | richard.weiyang@gmail.com | finished in 3h47m0s |
| 2025-09-09 23:34 UTC | Minor fixes for memory allocation profiling | 1 | surenb@google.com | finished in 4h1m0s |
| 2025-09-09 06:53 UTC | mm: swap: Gather swap entries and batch async release | 0 | liulei.rjpt@vivo.com | finished in 3h46m0s |
| 2025-09-08 10:04 UTC | mm: re-enable kswapd when memory pressure subsides or demotion is toggled | 1 | flyinrm@gmail.com | finished in 4h18m0s |
| 2025-09-03 11:16 UTC | mm/show_mem: Bug fix for print mem alloc info | 3 | pyyjason@gmail.com | finished in 36m0s |
| 2025-09-02 15:57 UTC | mm/show_mem: Bug fix for print mem alloc info | 2 | pyyjason@gmail.com | finished in 35m0s |
| 2025-09-02 12:49 UTC | mm: show_mem: show number of zspages in show_free_areas | 2 | cascardo@igalia.com | finished in 14m0s |
| 2025-09-02 12:48 UTC | mm: hugetlb: cleanup and allocate frozen hugetlb folio | 2 | wangkefeng.wang@huawei.com | finished in 2h52m0s |
| 2025-09-01 18:37 UTC | mm: show_mem: show number of zspages in show_free_areas | 1 | cascardo@igalia.com | finished in 3h50m0s |
| 2025-09-01 15:03 UTC | mm: remove nth_page() | 2 | david@redhat.com | finished in 3h44m0s |
| 2025-08-29 15:56 UTC | selftests/mm/uffd: Refactor non-composite global vars into struct | 8 | ujwal.kundur@gmail.com | skipped |
| 2025-08-28 12:27 UTC | tools: testing: Use existing atomic.h for vma/maple tests | 2 | jackmanb@google.com | finished in 1h1m0s |
| 2025-08-28 03:06 UTC | mm: Use pr_warn_once() for min_free_kbytes warning | 1 | tongweilin@linux.alibaba.com | finished in 3h46m0s |
| 2025-08-27 22:01 UTC | mm: remove nth_page() | 1 | david@redhat.com | finished in 3h42m0s |
| 2025-08-27 18:34 UTC | mm/show_mem: Bug fix for print mem alloc info | 1 | pyyjason@gmail.com | finished in 50m0s |
| 2025-08-27 11:04 UTC | tools: testing: Use existing atomic.h for vma/radix-tree tests | 1 | jackmanb@google.com | finished in 40m0s |
| 2025-08-26 14:06 UTC | mm/page_alloc: Harmonize should_compact_retry() type | 1 | jackmanb@google.com | finished in 46m0s |
| 2025-08-21 20:06 UTC | mm: remove nth_page() | 1 | david@redhat.com | finished in 1h4m0s [1 findings] |
| 2025-08-21 13:29 UTC | mm: Remove is_migrate_highatomic() | 1 | jackmanb@google.com | finished in 48m0s |
| 2025-08-18 18:58 UTC | mm/page_alloc: Occasionally relinquish zone lock in batch freeing | 1 | joshua.hahnjy@gmail.com | finished in 12m0s |
| 2025-08-17 06:52 UTC | selftests/mm/uffd: Refactor non-composite global vars into struct | 7 | ujwal.kundur@gmail.com | skipped |
| 2025-08-15 02:45 UTC | mm/page_alloc: simplify lowmem_reserve max calculation | 3 | ye.liu@linux.dev | finished in 3h50m0s |
| 2025-08-15 02:34 UTC | mm/page_alloc: simplify lowmem_reserve max calculation | 2 | ye.liu@linux.dev | finished in 4h1m0s |
| 2025-08-14 17:22 UTC | mm/page_alloc: only set ALLOC_HIGHATOMIC for __GPF_HIGH allocations | 1 | cascardo@igalia.com | finished in 3h52m0s |
| 2025-08-14 09:26 UTC | mm/show_mem: Print totalreserve_pages in show_mem output | 1 | ye.liu@linux.dev | finished in 3h41m0s |
| 2025-08-14 09:00 UTC | mm/page_alloc: simplify lowmem_reserve max calculation | 1 | ye.liu@linux.dev | finished in 3h50m0s |
| 2025-08-14 07:18 UTC | mm/page_alloc: Remove redundant pcp->free_count initialization in per_cpu_pages_init() | 1 | ye.liu@linux.dev | finished in 3h44m0s |
| 2025-07-29 11:02 UTC | mm, page_pool: introduce a new page type for page pool in page type | 3 | byungchul@sk.com | skipped |
| 2025-07-28 08:20 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
| 2025-07-28 05:27 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
| 2025-07-21 05:49 UTC | mm, page_pool: introduce a new page type for page pool in page type | 1 | byungchul@sk.com | skipped |
| 2025-07-21 02:18 UTC | Split netmem from struct page | 12 | byungchul@sk.com | skipped |
| 2025-07-17 07:00 UTC | Split netmem from struct page | 11 | byungchul@sk.com | skipped |