Date | Series | Version | Submitter | Result
2025-10-08 03:26 UTC | mm/khugepaged: abort collapse scan on non-swap entries | 3 | lance.yang@linux.dev | finished in 3h50m0s
2025-10-04 09:30 UTC | drm: Reduce page tables overhead with THP | 3 | loic.molinari@collabora.com | skipped
2025-10-03 16:53 UTC | reparent the THP split queue | 4 | qi.zheng@linux.dev | finished in 3h48m0s
2025-10-01 03:22 UTC | mm/khugepaged: abort collapse scan on non-swap entries | 2 | lance.yang@linux.dev | finished in 3h42m0s
2025-09-30 00:44 UTC | mm/damon/vaddr: do not repeat pte_offset_map_lock() until success | 1 | sj@kernel.org | finished in 3h41m0s
2025-09-29 20:03 UTC | drm: Optimize page tables overhead with THP | 1 | loic.molinari@collabora.com | skipped
2025-09-29 01:02 UTC | Live Update Orchestrator | 4 | pasha.tatashin@soleen.com | finished in 3h41m0s
2025-09-28 11:16 UTC | reparent the THP split queue | 3 | zhengqi.arch@bytedance.com | finished in 3h42m0s
2025-09-26 21:16 UTC | mm/userfaultfd: modulize memory types | 3 | peterx@redhat.com | finished in 3h40m0s
2025-09-25 07:32 UTC | optimize the logic for handling dirty file folios during reclaim | 1 | baolin.wang@linux.alibaba.com | finished in 3h42m0s
2025-09-24 10:02 UTC | mm/khugepaged: abort collapse scan on non-swap entries | 1 | lance.yang@linux.dev | skipped
2025-09-24 04:58 UTC | mm: clean up is_guard_pte_marker() | 1 | lance.yang@linux.dev | finished in 3h40m0s
2025-09-23 09:16 UTC | reparent the THP split queue | 2 | zhengqi.arch@bytedance.com | finished in 3h44m0s
2025-09-22 02:14 UTC | mm/thp: fix MTE tag mismatch when replacing zero-filled subpages | 1 | lance.yang@linux.dev | skipped
2025-09-19 03:46 UTC | reparent the THP split queue | 1 | zhengqi.arch@bytedance.com | finished in 3h36m0s
2025-09-18 11:21 UTC | mm: Improve mlock tracking for large folios | 1 | kirill@shutemov.name | finished in 3h40m0s
2025-09-18 05:04 UTC | mm/khugepaged: optimize collapse candidate detection | 2 | lance.yang@linux.dev | finished in 1h40m0s
2025-09-18 03:46 UTC | some cleanups for pageout() | 2 | baolin.wang@linux.alibaba.com | finished in 3h4m0s
2025-09-18 02:04 UTC | memfd,selinux: call security_inode_init_security_anon | 3 | tweek@google.com | finished in 3h39m0s
2025-09-17 19:11 UTC | expand mmap_prepare functionality, port more users | 4 | lorenzo.stoakes@oracle.com | skipped
2025-09-16 16:00 UTC | mm, swap: introduce swap table as swap cache (phase I) | 4 | ryncsn@gmail.com | skipped
2025-09-16 14:11 UTC | expand mmap_prepare functionality, port more users | 3 | lorenzo.stoakes@oracle.com | skipped
2025-09-12 03:45 UTC | some cleanups for pageout() | 1 | baolin.wang@linux.alibaba.com | finished in 3h45m0s
2025-09-12 03:27 UTC | khugepaged: mTHP support | 11 | npache@redhat.com | skipped
2025-09-10 20:21 UTC | expand mmap_prepare functionality, port more users | 2 | lorenzo.stoakes@oracle.com | skipped
2025-09-10 16:08 UTC | mm, swap: introduce swap table as swap cache (phase I) | 3 | ryncsn@gmail.com | finished in 3h45m0s
2025-09-08 22:15 UTC | mm: better GUP pin lru_add_drain_all() | 2 | hughd@google.com | finished in 3h49m0s
2025-09-08 12:31 UTC | mm: shmem: fix too little space for tmpfs only fallback 4KB | 1 | vernon2gm@gmail.com | finished in 3h48m0s
2025-09-08 07:50 UTC | Expand scope of khugepaged anonymous collapse | 2 | dev.jain@arm.com | finished in 3h42m0s
2025-09-08 06:26 UTC | mm/shmem: remove unused entry_order after large swapin rework | 2 | liu.yun@linux.dev | finished in 3h42m0s
2025-09-08 02:39 UTC | mm/shmem: remove redundant entry_order variable in shmem_split_large_entry() | 1 | liu.yun@linux.dev | finished in 3h42m0s
2025-09-08 01:34 UTC | memfd,selinux: call security_inode_init_security_anon | 2 | tweek@google.com | finished in 3h46m0s
2025-09-05 19:13 UTC | mm, swap: introduce swap table as swap cache (phase I) | 2 | ryncsn@gmail.com | finished in 3h42m0s
2025-09-03 19:10 UTC | mm: Remove mlock_count from struct page | 1 | willy@infradead.org | finished in 48m0s
2025-09-03 08:54 UTC | mm: shmem: fix the strategy for the tmpfs 'huge=' options | 1 | baolin.wang@linux.alibaba.com | finished in 3h40m0s
2025-09-03 05:46 UTC | mm: Enable khugepaged to operate on non-writable VMAs | 1 | dev.jain@arm.com | finished in 3h57m0s
2025-09-01 20:50 UTC | mm: establish const-correctness for pointer parameters | 6 | max.kellermann@ionos.com | skipped
2025-09-01 12:30 UTC | mm: establish const-correctness for pointer parameters | 5 | max.kellermann@ionos.com | skipped
2025-09-01 09:19 UTC | mm: establish const-correctness for pointer parameters | 4 | max.kellermann@ionos.com | skipped
2025-09-01 07:48 UTC | mm: Enable khugepaged to operate on non-writable VMAs | 1 | dev.jain@arm.com | skipped
2025-09-01 06:12 UTC | mm: add `const` to lots of pointer parameters | 3 | max.kellermann@ionos.com | skipped
2025-08-31 11:47 UTC | mm/memfd: remove redundant casts | 1 | joeypabalinas@gmail.com | finished in 37m0s
2025-08-31 09:39 UTC | mm: add `const` to lots of pointer parameters | 2 | max.kellermann@ionos.com | skipped
2025-08-31 09:01 UTC | mm: better GUP pin lru_add_drain_all() | 1 | hughd@google.com | finished in 3h37m0s
2025-08-29 18:31 UTC | mm: add `const` to lots of pointer parameters | 1 | max.kellermann@ionos.com | skipped
2025-08-26 09:35 UTC | mm: shmem: use 'folio' for shmem_partial_swap_usage() | 1 | baolin.wang@linux.alibaba.com | finished in 3h48m0s
2025-08-26 03:18 UTC | memfd,selinux: call security_inode_init_security_anon | 1 | tweek@google.com | finished in 3h43m0s
2025-08-22 19:20 UTC | mm, swap: introduce swap table as swap cache (phase I) | 1 | ryncsn@gmail.com | skipped
2025-08-20 09:07 UTC | add shmem mTHP collapse support | 1 | baolin.wang@linux.alibaba.com | skipped
2025-08-19 13:41 UTC | khugepaged: mTHP support | 10 | npache@redhat.com | skipped
2025-08-19 06:18 UTC | tmpfs: preserve SB_I_VERSION on remount | 1 | libaokun@huaweicloud.com | finished in 3h41m0s
2025-08-15 10:18 UTC | mm/gup: Drain batched mlock folio processing before attempting migration | 1 | will@kernel.org | finished in 3h52m0s
2025-08-14 15:32 UTC | mm: slowtier page promotion based on PTE A bit | 1 | raghavendra.kt@amd.com | skipped
2025-08-11 17:20 UTC | mm/mincore: minor clean up for swap cache checking | 1 | ryncsn@gmail.com | finished in 3h43m0s
2025-08-11 11:26 UTC | mm: vm_normal_page*() improvements | 3 | david@redhat.com | skipped
2025-08-07 15:27 UTC | mm/mincore: clean up swap cache helper and PTL | 1 | ryncsn@gmail.com | finished in 3h43m0s
2025-08-06 14:56 UTC | mm: Pass page directly instead of using folio_page | 1 | dev.jain@arm.com | finished in 3h41m0s