| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2025-10-26 17:34 UTC | io_uring zcrx ifq sharing | 3 | dw@davidwei.uk | finished in 3h42m0s |
| 2025-10-25 19:15 UTC | io_uring zcrx ifq sharing | 2 | dw@davidwei.uk | finished in 3h51m0s |
| 2025-10-23 21:39 UTC | io_uring zcrx: allow sharing of ifqs with other instances | 1 | dw@davidwei.uk | finished in 3h38m0s |
| 2025-10-22 20:20 UTC | fuse io_uring: support registered buffers | 1 | joannelkoong@gmail.com | finished in 3h46m0s |
| 2025-10-21 20:31 UTC | fuse: check if system-wide io_uring is enabled | 1 | bschubert@ddn.com | finished in 3h45m0s |
| 2025-10-21 20:29 UTC | io_uring zcrx: add MAINTAINERS entry | 1 | dw@davidwei.uk | finished in 47m0s |
| 2025-10-20 11:30 UTC | io_uring: add IORING_SETUP_SQTHREAD_STATS flag to enable sqthread stats collection | 2 | changfengnan@bytedance.com | finished in 26h38m0s |
| 2025-10-16 18:25 UTC | selftests/net: io_uring: fix unknown errnum values | 1 | cmllamas@google.com | finished in 47m0s |
| 2025-10-16 13:23 UTC | random region / rings cleanups | 1 | asml.silence@gmail.com | skipped |
| 2025-10-16 11:22 UTC | io_uring: introduce non-circular SQ | 2 | asml.silence@gmail.com | skipped |
| 2025-10-16 11:20 UTC | io_uring: check for user passing 0 nr_submit | 2 | asml.silence@gmail.com | finished in 3h44m0s |
| 2025-10-16 06:36 UTC | page_pool: check if nmdesc->pp is !NULL to confirm its usage as pp for net_iov | 1 | byungchul@sk.com | finished in 37m0s (1 finding) |
| 2025-10-15 12:10 UTC | io_uring: fix unexpected placement on same size resizing | 1 | asml.silence@gmail.com | finished in 3h42m0s |
| 2025-10-15 12:07 UTC | io_uring: protect mem region deregistration | 1 | asml.silence@gmail.com | finished in 3h45m0s |
| 2025-10-14 15:02 UTC | some query related updates | 4 | asml.silence@gmail.com | skipped |
| 2025-10-14 10:58 UTC | Introduce non circular SQ | 1 | asml.silence@gmail.com | finished in 4h15m0s |
| 2025-10-13 14:54 UTC | [pull request] Queue configs and large buffer providers | 4 | asml.silence@gmail.com | finished in 3h43m0s |
| 2025-10-13 04:41 UTC | netmem: replace __netmem_clear_lsb() with netmem_to_nmdesc() | 3 | byungchul@sk.com | finished in 42m0s |
| 2025-10-11 01:33 UTC | block: enable per-cpu bio cache by default | 1 | changfengnan@bytedance.com | skipped |
| 2025-10-08 13:12 UTC | io_uring/zcrx: convert to use netmem_desc | 1 | asml.silence@gmail.com | finished in 46m0s |
| 2025-10-08 12:39 UTC | io_uring/zcrx: increment fallback loop src offset | 1 | asml.silence@gmail.com | finished in 2h4m0s |
| 2025-10-08 12:38 UTC | io_uring/zcrx: fix overshooting recv limit | 1 | asml.silence@gmail.com | finished in 1h47m0s |
| 2025-10-01 10:38 UTC | github: Test build against the installed liburing | 1 | ammarfaizi2@gnuweeb.org | skipped |
| 2025-09-26 03:54 UTC | netmem: replace __netmem_clear_lsb() with netmem_to_nmdesc() | 3 | byungchul@sk.com | finished in 39m0s |
| 2025-09-19 11:11 UTC | query infinite loop prevention | 1 | asml.silence@gmail.com | finished in 3h37m0s |
| 2025-09-12 14:01 UTC | io_uring/zcrx: fix ifq->if_rxq is -1, get dma_dev is NULL | 1 | zhoufeng.zf@bytedance.com | finished in 3h53m0s |
| 2025-09-12 08:39 UTC | io_uring/zcrx: fix ifq->if_rxq is -1, get dma_dev is NULL | 1 | zhoufeng.zf@bytedance.com | finished in 3h47m0s |
| 2025-09-11 11:26 UTC | Add query and mock tests | 1 | asml.silence@gmail.com | skipped |
| 2025-09-11 00:13 UTC | io_uring/query: check for loops in in_query() | 1 | axboe@kernel.dk | finished in 3h35m0s |
| 2025-09-07 23:02 UTC | introduce io_uring querying | 3 | asml.silence@gmail.com | finished in 3h36m0s |
| 2025-09-05 09:02 UTC | io_uring: replace wq users and add WQ_PERCPU to alloc_workqueue() users | 1 | marco.crivellari@suse.com | finished in 52m0s |
| 2025-09-01 15:03 UTC | mm: remove nth_page() | 2 | david@redhat.com | finished in 3h44m0s |
| 2025-08-28 16:43 UTC | net: add net-device TX clock source selection framework | 2 | arkadiusz.kubalewski@intel.com | finished in 3h44m0s |
| 2025-08-28 09:39 UTC | introduce io_uring querying | 2 | asml.silence@gmail.com | finished in 3h45m0s |
| 2025-08-27 22:01 UTC | mm: remove nth_page() | 1 | david@redhat.com | finished in 3h42m0s |
| 2025-08-27 14:39 UTC | devmem/io_uring: allow more flexibility for ZC DMA devices | 6 | dtatulea@nvidia.com | finished in 3h45m0s |
| 2025-08-25 06:36 UTC | devmem/io_uring: allow more flexibility for ZC DMA devices | 5 | dtatulea@nvidia.com | finished in 3h45m0s |
| 2025-08-21 04:02 UTC | io_uring: uring_cmd: add multishot support with provided buffer | 1 | ming.lei@redhat.com | skipped |
| 2025-08-21 02:56 UTC | net: Add maintainer entry for netmem & friends | 1 | almasrymina@google.com | finished in 34m0s |
| 2025-08-20 23:16 UTC | io_uring/kbuf: ensure ring ctx is held locked over io_put_kbuf() | 1 | axboe@kernel.dk | finished in 3h44m0s (1 finding) |
| 2025-08-20 17:11 UTC | devmem/io_uring: allow more flexibility for ZC DMA devices | 4 | dtatulea@nvidia.com | finished in 3h44m0s |
| 2025-08-20 15:40 UTC | io_uring: uring_cmd: add multishot support | 1 | ming.lei@redhat.com | finished in 3h38m0s |
| 2025-08-19 15:00 UTC | io_uring: uring_cmd: add multishot support | 1 | ming.lei@redhat.com | finished in 51m0s |
| 2025-08-19 11:45 UTC | io_uring: uring_cmd: add multishot support | 1 | ming.lei@redhat.com | finished in 3h48m0s |
| 2025-08-18 13:57 UTC | [pull request] Queue configs and large buffer providers | 3 | asml.silence@gmail.com | finished in 3h50m0s |
| 2025-08-17 22:43 UTC | io_uring: move zcrx into a separate branch | 1 | asml.silence@gmail.com | finished in 1h20m0s |
| 2025-08-17 22:09 UTC | io_uring: add request poisoning | 2 | asml.silence@gmail.com | finished in 3h38m0s |
| 2025-08-14 14:41 UTC | io_uring: add request poisoning | 1 | asml.silence@gmail.com | finished in 3h48m0s (1 finding) |
| 2025-08-14 14:40 UTC | io_uring/zctx: check chained notif contexts | 1 | asml.silence@gmail.com | finished in 3h42m0s |
| 2025-08-10 02:50 UTC | io_uring: uring_cmd: add multishot support without poll | 1 | ming.lei@redhat.com | finished in 3h35m0s |
| 2025-08-08 12:42 UTC | io_uring/memmap: cast nr_pages to size_t before shifting | 1 | axboe@kernel.dk | finished in 3h36m0s |
| 2025-08-01 01:13 UTC | net: devmem: fix DMA direction on unmapping | 1 | kuba@kernel.org | finished in 1h32m0s |
| 2025-07-29 11:02 UTC | mm, page_pool: introduce a new page type for page pool in page type | 3 | byungchul@sk.com | skipped |
| 2025-07-29 10:45 UTC | net: add net-device TX clock source selection framework | 1 | arkadiusz.kubalewski@intel.com | finished in 1h30m0s |
| 2025-07-28 08:20 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
| 2025-07-28 05:27 UTC | mm, page_pool: introduce a new page type for page pool in page type | 2 | byungchul@sk.com | skipped |
| 2025-07-21 05:49 UTC | mm, page_pool: introduce a new page type for page pool in page type | 1 | byungchul@sk.com | skipped |
| 2025-07-21 02:18 UTC | Split netmem from struct page | 12 | byungchul@sk.com | skipped |
| 2025-07-17 07:00 UTC | Split netmem from struct page | 11 | byungchul@sk.com | skipped |
| 2025-07-14 12:00 UTC | Split netmem from struct page | 10 | byungchul@sk.com | skipped |
| 2025-07-11 09:26 UTC | net: Allow SF devices to be used for ZC DMA | 2 | dtatulea@nvidia.com | finished in 3h36m0s |
| 2025-07-10 08:28 UTC | Split netmem from struct page | 9 | byungchul@sk.com | finished in 3h34m0s |
| 2025-07-09 12:40 UTC | net: Allow non parent devices to be used for ZC DMA | 1 | dtatulea@nvidia.com | finished in 3h43m0s |
| 2025-07-08 05:40 UTC | skbuff: Add MSG_MORE flag to optimize tcp large packet transmission | 4 | yangfeng59949@163.com | finished in 3h36m0s |
| 2025-07-02 05:32 UTC | Split netmem from struct page | 8 | byungchul@sk.com | skipped |
| 2025-06-30 07:10 UTC | skbuff: Add MSG_MORE flag to optimize large packet transmission | 3 | yangfeng59949@163.com | finished in 3h49m0s |
| 2025-06-27 09:44 UTC | skbuff: Improve the sending efficiency of __skb_send_sock | 2 | yangfeng59949@163.com | skipped |