| Date (UTC) | Patch | Version | Submitter | Status | Duration |
|---|---|---|---|---|---|
| 2026-04-30 14:04 | mm/slub: defer freelist construction until after bulk allocation from a new slab | 8 | hu.shengming@zte.com.cn | finished | 5h30m0s |
| 2026-04-30 08:35 | mm/slub: initialize allocated object's freepointer before debug check | 1 | hu.shengming@zte.com.cn | finished | 4h55m0s |
| 2026-04-15 08:52 | mm/slub: defer freelist construction until after bulk allocation from a new slab | 7 | hu.shengming@zte.com.cn | finished | 4h31m0s |
| 2026-04-13 15:04 | mm/slub: defer freelist construction until after bulk allocation from a new slab | 6 | hu.shengming@zte.com.cn | finished | 4h28m0s |
| 2026-04-09 12:43 | mm/slub: defer freelist construction until after bulk allocation from a new slab | 5 | hu.shengming@zte.com.cn | skipped | |
| 2026-04-08 15:28 | mm/slub: defer freelist construction until after bulk allocation from a new slab | 4 | hu.shengming@zte.com.cn | finished | 1h15m0s |
| 2026-04-06 13:50 | mm/slub: defer freelist construction until after bulk allocation from a new slab | 3 | hu.shengming@zte.com.cn | finished | 1h8m0s |
| 2026-04-01 04:57 | mm/slub: skip freelist construction for whole-slab bulk refill | 2 | hu.shengming@zte.com.cn | finished | 1h13m0s |
| 2026-03-28 04:55 | mm/slub: skip freelist construction for whole-slab bulk refill | 1 | hu.shengming@zte.com.cn | finished | 4h3m0s |
| 2025-12-29 13:52 | mm/memblock: drop redundant 'struct page *' argument from memblock_free_pages() | 2 | shengminghu512@qq.com | skipped | |
| 2025-12-28 11:38 | mm/memblock: drop redundant 'struct page *' argument from memblock_free_pages() | 1 | shengminghu512@qq.com | skipped | |
| 2025-09-23 14:57 | mm/memory-failure: Ensure collect_procs is retried when unmap fails | 1 | shengminghu512@qq.com | finished | 37m0s |
| 2025-09-23 13:47 | mm/memory-failure: Ensure collect_procs is retried when unmap fails | 1 | shengminghu512@qq.com | skipped | |