| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2026-04-13 19:26 UTC | mm/vmalloc: Take vmap_purge_lock in shrinker | 1 | urezki@gmail.com | finished in 4h44m0s |
| 2026-04-08 02:51 UTC | mm/vmalloc: Speed up ioremap, vmalloc and vmap with contiguous memory | 1 | baohua@kernel.org | finished in 4h30m0s |
| 2026-04-05 23:52 UTC | rust: bump minimum Rust and `bindgen` versions | 2 | ojeda@kernel.org | skipped |
| 2026-04-04 08:36 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 10 | devnull@kernel.org | finished in 3h56m0s |
| 2026-04-03 07:52 UTC | mm/vmalloc: fix KMSAN uninit in decay_va_pool_node list handling | 1 | chenyichong@uniontech.com | finished in 2h7m0s |
| 2026-04-02 08:14 UTC | mm/vmalloc: fix KMSAN uninit-value warning in decay_va_pool_node() | 1 | wangqing7171@gmail.com | finished in 4h38m0s |
| 2026-04-01 17:16 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 9 | devnull@kernel.org | finished in 4h28m0s |
| 2026-04-01 11:45 UTC | rust: bump minimum Rust and `bindgen` versions | 1 | ojeda@kernel.org | finished in 1h28m0s |
| 2026-04-01 10:16 UTC | mm: Free contiguous order-0 pages efficiently | 6 | usama.anjum@arm.com | finished in 4h26m0s |
| 2026-03-31 20:23 UTC | mm/vmalloc: Use dedicated unbound workqueues for vmap drain | 3 | urezki@gmail.com | finished in 4h40m0s |
| 2026-03-31 15:21 UTC | mm: Free contiguous order-0 pages efficiently | 5 | usama.anjum@arm.com | finished in 4h43m0s |
| 2026-03-30 17:58 UTC | mm/vmalloc: Use dedicated unbound workqueue for vmap purge/drain | 2 | urezki@gmail.com | finished in 5h20m0s [1 findings] |
| 2026-03-30 16:05 UTC | mm/vmalloc: Use dedicated unbound workqueue for vmap purge/drain | 1 | urezki@gmail.com | finished in 4h23m0s |
| 2026-03-27 12:57 UTC | mm: Free contiguous order-0 pages efficiently | 4 | usama.anjum@arm.com | finished in 4h25m0s |
| 2026-03-27 09:48 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 8 | devnull@kernel.org | finished in 3h53m0s |
| 2026-03-25 09:09 UTC | Implementation of Dynamic Housekeeping & Enhanced Isolation (DHEI) | 1 | realwujing@gmail.com | finished in 59m0s [1 findings] |
| 2026-03-24 21:35 UTC | slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period | 1 | jannh@google.com | finished in 1h22m0s |
| 2026-03-24 13:35 UTC | mm: Free contiguous order-0 pages efficiently | 3 | usama.anjum@arm.com | finished in 4h20m0s |
| 2026-03-24 13:26 UTC | KASAN: HW_TAGS: Disable tagging for stack and page-tables | 2 | usama.anjum@arm.com | finished in 4h7m0s |
| 2026-03-24 10:00 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 7 | devnull@kernel.org | finished in 4h0m0s |
| 2026-03-24 09:47 UTC | context_tracking,x86: Defer some IPIs until a user->kernel transition | 8 | vschneid@redhat.com | finished in 4h27m0s [2 findings] |
| 2026-03-21 18:05 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 6 | devnull@kernel.org | finished in 4h35m0s |
| 2026-03-21 17:03 UTC | rcu-tasks: Avoid using mod_timer() in call_rcu_tasks_generic() | 1 | boqun@kernel.org | finished in 4h26m0s |
| 2026-03-21 10:58 UTC | mm: vmalloc: update outdated comment for renamed vread() | 1 | kexinsun@smail.nju.edu.cn | finished in 1h16m0s |
| 2026-03-20 22:29 UTC | rcu: Use an intermediate irq_work to start process_srcu() | 2 | boqun@kernel.org | skipped |
| 2026-03-20 18:14 UTC | rcu: Use an intermediate irq_work to start process_srcu() | 1 | boqun@kernel.org | skipped |
| 2026-03-19 16:03 UTC | mm: Switch gfp_t to unsigned long | 1 | jackmanb@google.com | finished in 4h11m0s |
| 2026-03-19 11:49 UTC | KASAN: HW_TAGS: Disable tagging for stack and page-tables | 1 | usama.anjum@arm.com | finished in 4h14m0s |
| 2026-03-19 07:43 UTC | mm/vmalloc: use dedicated unbound workqueue for vmap area draining | 2 | lirongqing@baidu.com | finished in 4h16m0s [1 findings] |
| 2026-03-18 07:36 UTC | mm/vmalloc: use unbound workqueue for vmap area draining | 1 | lirongqing@baidu.com | finished in 1h17m0s |
| 2026-03-17 08:17 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 5 | devnull@kernel.org | finished in 3h53m0s |
| 2026-03-16 11:31 UTC | mm: Free contiguous order-0 pages efficiently | 2 | usama.anjum@arm.com | finished in 1h8m0s [1 findings] |
| 2026-03-14 09:04 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 4 | devnull@kernel.org | finished in 4h0m0s |
| 2026-03-09 11:55 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 3 | devnull@kernel.org | finished in 3h51m0s |
| 2026-03-04 14:53 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 2 | devnull@kernel.org | finished in 4h14m0s |
| 2026-03-02 13:57 UTC | mm/vmalloc: free unused pages on vrealloc() shrink | 1 | devnull@kernel.org | finished in 3h52m0s |
| 2026-03-02 11:47 UTC | mm/vmalloc: Fix incorrect size reporting on allocation failure | 1 | urezki@gmail.com | finished in 4h32m0s |
| 2026-02-25 22:38 UTC | Fix KASAN support for KHO restored vmalloc regions | 1 | pasha.tatashin@soleen.com | finished in 4h26m0s |
| 2026-02-24 11:17 UTC | rust: add `Ownable` trait and `Owned` type | 16 | a.hindborg@kernel.org | finished in 1h11m0s |
| 2026-02-23 20:44 UTC | Hazard Pointers | 5 | mathieu.desnoyers@efficios.com | finished in 4h17m0s |
| 2026-02-23 16:01 UTC | mm: vmalloc: streamline vmalloc memory accounting | 2 | hannes@cmpxchg.org | finished in 3h50m0s |
| 2026-02-20 19:10 UTC | mm: vmalloc: streamline vmalloc memory accounting | 1 | hannes@cmpxchg.org | finished in 3h59m0s |
| 2026-02-12 16:33 UTC | mm: allow __GFP_RETRY_MAYFAIL in vmalloc | 1 | mpatocka@redhat.com | finished in 4h5m0s |
| 2026-02-10 04:46 UTC | mm/slab: support kmalloc_nolock() -> kfree[_rcu]() | 1 | harry.yoo@oracle.com | finished in 3h50m0s |
| 2026-02-09 12:10 UTC | mm/slab: support kmalloc_nolock() -> kfree[_rcu]() | 1 | harry.yoo@oracle.com | finished in 4h17m0s |
| 2026-02-06 09:34 UTC | k[v]free_rcu() improvements | 1 | harry.yoo@oracle.com | finished in 3h57m0s |
| 2026-02-06 07:04 UTC | Implementation of Dynamic Housekeeping & Enhanced Isolation (DHEI) | 1 | realwujing@gmail.com | finished in 3h59m0s |
| 2026-02-03 11:34 UTC | Inline helpers into Rust without full LTO | 2 | aliceryhl@google.com | finished in 1h0m0s |
| 2026-01-23 08:23 UTC | net/smc: buffer allocation and registration improvements | 1 | alibuda@linux.alibaba.com | finished in 4h32m0s |
| 2026-01-23 06:52 UTC | slab: replace cpu (partial) slabs with sheaves | 4 | vbabka@suse.cz | finished in 3h49m0s |
| 2026-01-21 13:16 UTC | mm/slab: fix false lockdep warning in __kfree_rcu_sheaf() | 1 | harry.yoo@oracle.com | finished in 3h51m0s |
| 2026-01-19 14:45 UTC | mm-kasan-fix-kasan-poisoning-in-vrealloc-fix | 1 | ryabinin.a.a@gmail.com | finished in 4h30m0s |
| 2026-01-16 15:46 UTC | rcu box container for Rust + maple tree load_rcu | 1 | aliceryhl@google.com | finished in 1h4m0s |
| 2026-01-16 14:40 UTC | slab: replace cpu (partial) slabs with sheaves | 3 | vbabka@suse.cz | skipped |
| 2026-01-16 13:28 UTC | mm-kasan-kunit-extend-vmalloc-oob-tests-to-cover-vrealloc-fix | 1 | ryabinin.a.a@gmail.com | finished in 1h10m0s |
| 2026-01-13 19:15 UTC | mm/kasan: Fix KASAN poisoning in vrealloc() | 1 | ryabinin.a.a@gmail.com | finished in 3h58m0s |
| 2026-01-12 15:16 UTC | slab: replace cpu (partial) slabs with sheaves | 2 | vbabka@suse.cz | finished in 4h11m0s |
| 2026-01-12 10:36 UTC | mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node | 2 | kartikey406@gmail.com | finished in 4h7m0s |
| 2026-01-12 08:47 UTC | mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node | 1 | kartikey406@gmail.com | finished in 4h2m0s |
| 2026-01-07 15:16 UTC | vmalloc: export vrealloc_node_align_noprof | 1 | aliceryhl@google.com | finished in 3h55m0s |
| 2026-01-06 15:56 UTC | blk-mq: avoid stall during boot due to synchronize_rcu_expedited | 1 | mpatocka@redhat.com | finished in 4h15m0s |
| 2026-01-05 16:17 UTC | Free contiguous order-0 pages efficiently | 1 | ryan.roberts@arm.com | finished in 3h57m0s |
| 2025-12-19 15:39 UTC | Compiler-Based Context- and Locking-Analysis | 5 | elver@google.com | finished in 3h51m0s |
| 2025-12-19 01:40 UTC | mm kernel-doc fixes | 1 | bagasdotme@gmail.com | finished in 47m0s |
| 2025-12-18 01:45 UTC | Hazard Pointers | 4 | mathieu.desnoyers@efficios.com | skipped |
| 2025-12-17 13:50 UTC | kasan: vmalloc: Fixes for the percpu allocator and vrealloc | 5 | m.wieczorretman@pm.me | finished in 3h51m0s |
| 2025-12-16 21:19 UTC | mm/vmalloc: Add large-order allocation helper | 1 | urezki@gmail.com | finished in 4h11m0s |
| 2025-12-15 10:40 UTC | mm/vmalloc: clarify why vmap_range_noflush() might sleep | 3 | jackmanb@google.com | finished in 4h4m0s |
| 2025-12-15 05:30 UTC | mm/vmalloc: map contiguous pages in batches for vmap() whenever possible | 1 | 21cnbao@gmail.com | finished in 3h53m0s |
| 2025-12-12 04:27 UTC | Enable vmalloc huge mappings by default on arm64 | 1 | dev.jain@arm.com | finished in 3h49m0s |
| 2025-12-09 05:44 UTC | mm/vmalloc: clarify why vmap_range_noflush() might sleep | 2 | jackmanb@google.com | finished in 3h51m0s |
| 2025-12-08 05:19 UTC | mm/vmalloc: clarify why vmap_range_noflush() might sleep | 1 | jackmanb@google.com | finished in 40m0s [1 findings] |
| 2025-12-07 15:41 UTC | mm/slab: introduce kvfree_rcu_barrier_on_cache() for cache destruction | 1 | harry.yoo@oracle.com | finished in 3h52m0s |
| 2025-12-05 14:59 UTC | kasan: vmalloc: Fixes for the percpu allocator and vrealloc | 4 | m.wieczorretman@pm.me | finished in 3h57m0s |
| 2025-12-04 18:59 UTC | kasan: vmalloc: Fixes for the percpu allocator and vrealloc | 3 | m.wieczorretman@pm.me | finished in 3h55m0s |
| 2025-12-02 20:27 UTC | Inline helpers into Rust without full LTO | 1 | aliceryhl@google.com | finished in 3h57m0s |
| 2025-12-02 14:29 UTC | kasan: vmalloc: Fix incorrect tag assignment with multiple vm_structs | 2 | m.wieczorretman@pm.me | finished in 3h56m0s |
| 2025-12-02 10:16 UTC | mm/slab: introduce kvfree_rcu_barrier_on_cache() for cache destruction | 1 | harry.yoo@oracle.com | finished in 3h53m0s |
| 2025-12-02 08:48 UTC | slub: add barn_get_full_sheaf() and refine empty-main sheaf | 1 | haoli.tcs@gmail.com | skipped |
| 2025-12-01 18:18 UTC | lib/test_vmalloc.c: Minor fixes to test_vmalloc.c | 1 | audra@redhat.com | finished in 59m0s |
| 2025-11-29 12:36 UTC | kasan: hw_tags: fix a false positive case of vrealloc in alloced size | 1 | yeoreum.yun@arm.com | finished in 3h48m0s |
| 2025-11-28 11:37 UTC | mm/slab: introduce kvfree_rcu_barrier_on_cache() for cache destruction | 1 | harry.yoo@oracle.com | finished in 3h52m0s |
| 2025-11-22 09:03 UTC | mm/vmap: map contiguous pages in batches whenever possible | 1 | 21cnbao@gmail.com | finished in 3h48m0s |
| 2025-11-21 09:44 UTC | make vmalloc gfp flags usage more apparent | 4 | vishal.moola@gmail.com | finished in 4h10m0s |
| 2025-11-18 11:37 UTC | mm/vmalloc: warn only once when vmalloc detect invalid gfp flags | 2 | devnull@kernel.org | finished in 4h23m0s |
| 2025-11-18 00:05 UTC | mm/vmalloc: warn only once when vmalloc detect invalid gfp flags | 1 | devnull@kernel.org | finished in 41m0s |
| 2025-11-17 17:35 UTC | make vmalloc gfp flags usage more apparent | 3 | vishal.moola@gmail.com | finished in 4h0m0s |
| 2025-11-14 15:01 UTC | context_tracking,x86: Defer some IPIs until a user->kernel transition | 7 | vschneid@redhat.com | skipped |
| 2025-11-12 18:58 UTC | make vmalloc gfp flags usage more apparent | 2 | vishal.moola@gmail.com | finished in 46m0s [1 findings] |
| 2025-11-12 11:08 UTC | Enable vmalloc block mappings by default on arm64 | 1 | dev.jain@arm.com | finished in 51m0s |
| 2025-11-10 16:04 UTC | make vmalloc gfp flags usage more apparent | 1 | vishal.moola@gmail.com | finished in 1h28m0s [1 findings] |
| 2025-11-04 14:49 UTC | kasan: vmalloc: Fix incorrect tag assignment with multiple vm_structs | 1 | m.wieczorretman@pm.me | finished in 3h47m0s |
| 2025-11-03 19:04 UTC | make vmalloc gfp flags usage more apparent | 2 | vishal.moola@gmail.com | finished in 3h49m0s |
| 2025-10-30 16:43 UTC | make vmalloc gfp flags usage more apparent | 1 | vishal.moola@gmail.com | finished in 3h48m0s |
| 2025-10-23 13:52 UTC | slab: replace cpu (partial) slabs with sheaves | 1 | vbabka@suse.cz | skipped |
| 2025-10-22 08:26 UTC | Fix stale IOTLB entries for kernel address space | 7 | baolu.lu@linux.intel.com | finished in 3h46m0s |
| 2025-10-21 19:44 UTC | mm/vmalloc: request large order pages from buddy allocator | 1 | vishal.moola@gmail.com | finished in 4h3m0s |
| 2025-10-20 04:49 UTC | vmalloc: Separate gfp_mask adjunctive parentheses in __vmalloc_node_noprof() kernel-doc comment | 1 | bagasdotme@gmail.com | finished in 40m0s |
| 2025-10-18 19:25 UTC | mm/vmalloc: Use kmalloc_array() instead of kmalloc() | 1 | mehdi.benhadjkhelifa@gmail.com | finished in 42m0s |
| 2025-10-15 19:24 UTC | rust: replace kernel::str::CStr w/ core::ffi::CStr | 17 | tamird@gmail.com | finished in 22m0s [1 findings] |