| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2025-12-14 06:55 UTC | mm/pgtable: use ptdesc for pmd_huge_pte | 1 | alexs@kernel.org | finished in 3h55m0s |
| 2025-12-13 15:07 UTC | 9p/virtio: convert to extract_iter_to_sg() | 2 | devnull@kernel.org | finished in 3h51m0s |
| 2025-12-12 04:27 UTC | Enable vmalloc huge mappings by default on arm64 | 1 | dev.jain@arm.com | finished in 3h49m0s |
| 2025-12-11 08:16 UTC | support batch checking of references and unmapping for large folios | 2 | baolin.wang@linux.alibaba.com | finished in 4h18m0s |
| 2025-12-10 20:01 UTC | pagemap: Add alert to mapping_set_release_always() for mapping with no release_folio | 1 | dkarn@redhat.com | finished in 3h53m0s |
| 2025-12-10 15:43 UTC | mm: memcontrol: rename mem_cgroup_from_slab_obj() | 1 | hannes@cmpxchg.org | finished in 3h57m0s |
| 2025-12-09 21:04 UTC | 9p/virtio: restrict page pinning to user_backed_iter() iovec | 1 | devnull@kernel.org | finished in 3h53m0s |
| 2025-12-08 21:50 UTC | tmpfs: enforce the immutable flag on open files | 2 | nabijaczleweli@nabijaczleweli.xyz | finished in 3h52m0s |
| 2025-12-06 13:19 UTC | further damage-control lack of clone scalability | 3 | mjguzik@gmail.com | finished in 3h54m0s |
| 2025-12-06 12:03 UTC | tmpfs: enforce the immutable flag on open files | 1 | nabijaczleweli@nabijaczleweli.xyz | finished in 3h56m0s |
| 2025-12-06 10:14 UTC | mm: Hot page tracking and promotion infrastructure | 4 | bharata@amd.com | finished in 3h54m0s |
| 2025-12-05 19:43 UTC | mm/hugetlb: Eliminate fake head pages from vmemmap optimization | 1 | kas@kernel.org | finished in 43m0s [1 findings] |
| 2025-12-05 18:22 UTC | drm: Reduce page tables overhead with THP | 13 | loic.molinari@collabora.com | finished in 4h5m0s |
| 2025-12-05 09:12 UTC | drm: Reduce page tables overhead with THP | 12 | loic.molinari@collabora.com | finished in 3h52m0s |
| 2025-12-05 05:59 UTC | ext4: unmap invalidated folios from page tables in mpage_release_unused_pages() | 3 | kartikey406@gmail.com | finished in 3h54m0s |
| 2025-12-04 19:29 UTC | mm, swap: swap table phase II: unify swapin use swap cache and cleanup flags | 4 | ryncsn@gmail.com | finished in 3h51m0s |
| 2025-12-04 14:26 UTC | lib: xarray: free unused spare node in xas_create_range() | 4 | shardul.b@mpiricsoftware.com | finished in 3h58m0s |
| 2025-12-04 13:48 UTC | filelock: fix conflict detection with userland file delegations | 2 | jlayton@kernel.org | finished in 4h3m0s |
| 2025-12-03 23:30 UTC | asdf | 6 | kees@kernel.org | finished in 1h14m0s |
| 2025-12-03 09:28 UTC | further damage-control lack of clone scalability | 2 | mjguzik@gmail.com | finished in 3h50m0s |
| 2025-12-02 10:17 UTC | drm: Reduce page tables overhead with THP | 11 | loic.molinari@collabora.com | finished in 3h59m0s |
| 2025-12-01 21:01 UTC | improve fadvise(POSIX_FADV_WILLNEED) with large folio | 1 | jaegeuk@kernel.org | finished in 3h56m0s |
| 2025-12-01 17:46 UTC | khugepaged: mTHP support | 13 | npache@redhat.com | finished in 3h53m0s |
| 2025-12-01 15:08 UTC | filelock: fix conflict detection with userland file delegations | 1 | jlayton@kernel.org | finished in 29m0s |
| 2025-12-01 07:45 UTC | lib: xarray: free unused spare node in xas_create_range() | 3 | shardul.b@mpiricsoftware.com | finished in 3h50m0s |
| 2025-11-28 18:52 UTC | drm: Reduce page tables overhead with THP | 10 | loic.molinari@collabora.com | finished in 3h50m0s |
| 2025-11-28 07:01 UTC | mm: Coccinelle-driven cleanups across memory management code | 4 | chandna.sahil@gmail.com | skipped |
| 2025-11-28 04:41 UTC | Remove device private pages from physical address space | 1 | jniethe@nvidia.com | finished in 3h54m0s |
| 2025-11-28 04:00 UTC | mm: fix vma_start_write_killable() signal handling | 3 | willy@infradead.org | finished in 4h3m0s |
| 2025-11-27 09:27 UTC | IDR fix for potential id mismatch | 1 | jan.sokolowski@intel.com | finished in 4h13m0s |
| 2025-11-27 05:11 UTC | mm: Coccinelle-driven cleanups across memory management code | 3 | chandna.sahil@gmail.com | skipped |
| 2025-11-27 04:59 UTC | ntfsplus: ntfs filesystem remake | 2 | linkinjeon@kernel.org | finished in 45m0s |
| 2025-11-26 17:44 UTC | mm: fix vma_start_write_killable() signal handling | 2 | willy@infradead.org | finished in 3h56m0s |
| 2025-11-26 03:42 UTC | mm: fix vma_start_write_killable() signal handling | 1 | willy@infradead.org | finished in 4h1m0s |
| 2025-11-25 00:56 UTC | support batched checks of the references for large folios | 1 | baolin.wang@linux.alibaba.com | finished in 3h50m0s |
| 2025-11-24 19:13 UTC | mm, swap: swap table phase II: unify swapin use swap cache and cleanup flags | 3 | ryncsn@gmail.com | finished in 4h3m0s |
| 2025-11-24 14:23 UTC | slab: Remove unnecessary call to compound_head() in alloc_from_pcs() | 1 | willy@infradead.org | finished in 4h7m0s |
| 2025-11-23 22:05 UTC | fs: Add uoff_t | 2 | willy@infradead.org | finished in 3h50m0s |
| 2025-11-23 03:04 UTC | mm: Coccinelle-driven cleanups across memory management code | 2 | chandna.sahil@gmail.com | skipped |
| 2025-11-22 01:42 UTC | slab: Introduce kmalloc_obj() and family | 5 | kees@kernel.org | finished in 50m0s |
| 2025-11-21 20:23 UTC | mm: folio_zero_user: clear contiguous pages | 9 | ankur.a.arora@oracle.com | finished in 3h50m0s |
| 2025-11-21 09:06 UTC | ext4: enable block size larger than page size | 4 | libaokun@huaweicloud.com | finished in 3h56m0s |
| 2025-11-21 04:00 UTC | netmem, io_uring/zcrx: access pp fields through @desc in net_iov | 1 | byungchul@sk.com | finished in 50m0s |
| 2025-11-20 21:03 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 9 | mathieu.desnoyers@efficios.com | finished in 41m0s |
| 2025-11-20 16:14 UTC | mm/filemap: Fix logic around SIGBUS in filemap_map_pages() | 1 | kirill@shutemov.name | finished in 32m0s |
| 2025-11-20 01:11 UTC | eth: fbnic: access @pp through netmem_desc instead of page | 1 | byungchul@sk.com | finished in 48m0s |
| 2025-11-19 04:26 UTC | mm: Tweak __vma_enter_locked() | 1 | willy@infradead.org | finished in 3h58m0s |
| 2025-11-17 22:46 UTC | Extend xas_split* to support splitting arbitrarily large entries | 1 | ackerleytng@google.com | finished in 52m0s [1 findings] |
| 2025-11-16 18:11 UTC | mm, swap: swap table phase II: unify swapin use swap cache and cleanup flags | 2 | ryncsn@gmail.com | finished in 3h49m0s |
| 2025-11-16 05:42 UTC | mm/filemap: fix NULL pointer dereference in do_read_cache_folio() | 2 | ssrane_b23@ee.vjti.ac.in | finished in 52m0s |
| 2025-11-16 01:47 UTC | Only free healthy pages in high-order HWPoison folio | 1 | jiaqiyan@google.com | finished in 3h59m0s |
| 2025-11-16 01:32 UTC | memfd-based Userspace MFR Policy for HugeTLB | 2 | jiaqiyan@google.com | finished in 57m0s |
| 2025-11-14 19:37 UTC | mm/filemap: fix NULL pointer dereference in do_read_cache_folio() | 1 | ssrane_b23@ee.vjti.ac.in | finished in 53m0s |
| 2025-11-14 17:02 UTC | drm: Reduce page tables overhead with THP | 9 | loic.molinari@collabora.com | finished in 3h52m0s |
| 2025-11-14 09:21 UTC | block: enable per-cpu bio cache by default | 3 | changfengnan@bytedance.com | finished in 4h5m0s |
| 2025-11-14 07:57 UTC | mm/huge_memory: consolidate order-related checks into folio_split_supported() | 1 | richard.weiyang@gmail.com | finished in 54m0s |
| 2025-11-14 00:46 UTC | mm: shmem: allow fallback to smaller large orders for tmpfs mmap() access | 1 | baolin.wang@linux.alibaba.com | finished in 56m0s |
| 2025-11-13 19:13 UTC | gfs2: Prevent recursive memory reclaim | 1 | agruenba@redhat.com | finished in 1h11m0s |
| 2025-11-13 16:59 UTC | drm: Reduce page tables overhead with THP | 8 | loic.molinari@collabora.com | finished in 3h55m0s |
| 2025-11-13 14:04 UTC | Convert pgtable to use frozen pages | 1 | willy@infradead.org | finished in 1h21m0s |
| 2025-11-13 00:09 UTC | Prepare slab for memdescs | 4 | willy@infradead.org | finished in 53m0s |
| 2025-11-12 11:08 UTC | Enable vmalloc block mappings by default on arm64 | 1 | dev.jain@arm.com | finished in 51m0s |
| 2025-11-12 11:06 UTC | xfs: single block atomic writes for buffered IO | 1 | ojaswin@linux.ibm.com | finished in 3h43m0s [1 findings] |
| 2025-11-11 22:08 UTC | Revert "null_blk: allow byte aligned memory offsets" | 1 | bvanassche@acm.org | finished in 3h44m0s |
| 2025-11-11 14:26 UTC | ext4: enable block size larger than page size | 3 | libaokun@huaweicloud.com | finished in 3h51m0s |
| 2025-11-11 14:06 UTC | block: fix merging data-less bios | 1 | kbusch@meta.com | finished in 3h50m0s |
| 2025-11-10 20:32 UTC | vma_start_write_killable | 2 | willy@infradead.org | finished in 3h43m0s |
| 2025-11-10 15:49 UTC | drm: Reduce page tables overhead with THP | 7 | loic.molinari@collabora.com | finished in 50m0s [1 findings] |
| 2025-11-10 14:27 UTC | drm: Reduce page tables overhead with THP | 6 | loic.molinari@collabora.com | skipped |
| 2025-11-10 06:37 UTC | mm/ptdesc: Derive from the compound head in page_ptdesc() | 1 | anshuman.khandual@arm.com | finished in 1h1m0s |
| 2025-11-10 05:23 UTC | mm: Hot page tracking and promotion infrastructure | 3 | bharata@amd.com | finished in 3h43m0s |
| 2025-11-07 17:22 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 8 | mathieu.desnoyers@efficios.com | finished in 3h41m0s |
| 2025-11-07 14:42 UTC | ext4: enable block size larger than page size | 2 | libaokun@huaweicloud.com | finished in 3h44m0s |
| 2025-11-07 02:05 UTC | block: enable per-cpu bio cache by default | 2 | changfengnan@bytedance.com | finished in 3h42m0s |
| 2025-11-06 20:35 UTC | mm: Constify __dump_folio() arguments | 2 | willy@infradead.org | finished in 49m0s |
| 2025-11-06 20:33 UTC | mm: Constify __dump_folio() arguments | 1 | willy@infradead.org | finished in 1h4m0s |
| 2025-11-06 20:14 UTC | hugetlb: Optimise hugetlb_folio_init_tail_vmemmap() | 1 | willy@infradead.org | finished in 3h46m0s |
| 2025-11-05 10:58 UTC | MAINTAINERS: add idr core-api doc file to XARRAY | 1 | lbulwahn@redhat.com | finished in 44m0s |
| 2025-11-05 08:56 UTC | mm/page_alloc: don't warn about large allocations with __GFP_NOFAIL | 2 | libaokun@huaweicloud.com | finished in 3h47m0s |
| 2025-11-05 07:41 UTC | mm/page_alloc: don't warn about large allocations with __GFP_NOFAIL | 1 | libaokun@huaweicloud.com | finished in 4h8m0s |
| 2025-11-04 12:50 UTC | vfat: fix missing sb_min_blocksize() return value checks | 6 | yangyongpeng.storage@gmail.com | finished in 4h12m0s |
| 2025-11-03 18:03 UTC | vma_start_write_killable | 1 | willy@infradead.org | finished in 3h56m0s [1 findings] |
| 2025-11-03 16:47 UTC | vfat: fix missing sb_min_blocksize() return value checks | 5 | yangyongpeng.storage@gmail.com | finished in 3h57m0s |
| 2025-11-03 16:36 UTC | vfat: fix missing sb_min_blocksize() return value checks | 4 | yangyongpeng.storage@gmail.com | finished in 4h9m0s |
| 2025-11-03 13:50 UTC | fix missing sb_min_blocksize() return value checks in some filesystems | 3 | yangyongpeng.storage@gmail.com | finished in 3h44m0s |
| 2025-11-02 16:38 UTC | fix missing sb_min_blocksize() return value checks in some filesystems | 2 | yangyongpeng.storage@gmail.com | finished in 3h44m0s |
| 2025-10-31 16:19 UTC | Optimize folio split in memory failure | 5 | ziy@nvidia.com | finished in 3h56m0s |
| 2025-10-31 14:42 UTC | mm: Fix OOM killer inaccuracy on large many-core systems | 7 | mathieu.desnoyers@efficios.com | finished in 3h45m0s |
| 2025-10-31 12:09 UTC | mm/secretmem: fix use-after-free race in fault handler | 2 | lance.yang@linux.dev | finished in 4h9m0s |
| 2025-10-31 09:18 UTC | mm/secretmem: fix use-after-free race in fault handler | 1 | lance.yang@linux.dev | finished in 3h51m0s |
| 2025-10-31 06:13 UTC | mm: allow __GFP_NOFAIL allocation up to BLK_MAX_BLOCK_SIZE to support LBS | 1 | libaokun@huaweicloud.com | finished in 3h53m0s |
| 2025-10-30 16:51 UTC | shmem: don't trim whole folio loop boundaries on partial truncate | 1 | bfoster@redhat.com | finished in 3h50m0s |
| 2025-10-30 01:40 UTC | Optimize folio split in memory failure | 4 | ziy@nvidia.com | finished in 3h51m0s |
| 2025-10-29 15:58 UTC | mm, swap: never bypass swap cache and cleanup flags (swap table phase II) | 1 | ryncsn@gmail.com | finished in 33m0s [1 findings] |
| 2025-10-27 20:21 UTC | mm: folio_zero_user: clear contiguous pages | 8 | ankur.a.arora@oracle.com | finished in 3h49m0s |
| 2025-10-26 20:36 UTC | Guaranteed CMA | 2 | surenb@google.com | finished in 3h39m0s |
| 2025-10-26 10:01 UTC | mm, bpf: BPF-MM, BPF-THP | 12 | laoar.shao@gmail.com | finished in 3h46m0s |
| 2025-10-24 20:44 UTC | Prepare slab for memdescs | 3 | willy@infradead.org | finished in 3h55m0s |
| 2025-10-24 07:41 UTC | remove is_swap_[pte, pmd]() + non-swap confusion | 1 | lorenzo.stoakes@oracle.com | finished in 23m0s |
| 2025-10-23 11:59 UTC | mm: hugetlb: allocate frozen gigantic folio | 4 | wangkefeng.wang@huawei.com | finished in 34m0s [1 findings] |