| Published | Title | Version | Author | Status |
|---|---|---|---|---|
| 2026-01-30 01:36 UTC | io_uring: Add size check for sqe->cmd | 3 | govind.varadar@gmail.com | skipped |
| 2026-01-29 23:54 UTC | io_uring: Add size check for sqe->cmd | 2 | govind.varadar@gmail.com | skipped |
| 2026-01-29 20:13 UTC | io_uring: Add macro to validate SQE cmd size | 1 | govind.varadar@gmail.com | finished in 50m0s |
| 2026-01-13 02:29 UTC | lib/group_cpus: make group CPU cluster aware | 2 | wangyang.guo@intel.com | finished in 3h56m0s |
| 2026-01-12 14:38 UTC | block, nvme: remove unused dma_iova_state function parameter | 1 | nj.shetty@samsung.com | finished in 4h4m0s |
| 2026-01-12 13:57 UTC | block: remove unused dma_iova_state function parameter | 1 | nj.shetty@samsung.com | finished in 4h10m0s |
| 2026-01-12 00:54 UTC | PCI/P2PDMA: Reset page reference count when page mapping fails | 1 | apopple@nvidia.com | finished in 1h0m0s |
| 2025-12-20 04:04 UTC | Enable compound page for p2pdma memory | 1 | houtao@huaweicloud.com | finished in 3h53m0s |
| 2025-12-17 09:41 UTC | block: Generalize physical entry definition | 3 | leon@kernel.org | finished in 3h51m0s |
| 2025-12-08 22:26 UTC | nvme-pci: set virt boundary according to capability | 1 | mgurtovoy@nvidia.com | finished in 1h51m0s |
| 2025-12-05 21:17 UTC | block: Use RCU in blk_mq_[un]quiesce_tagset() instead of set->tag_list_lock | 4 | mkhalfella@purestorage.com | finished in 3h56m0s |
| 2025-12-05 12:47 UTC | Misc patches for RNBD | 1 | haris.iqbal@ionos.com | finished in 1h51m0s |
| 2025-12-04 18:11 UTC | Use RCU in blk_mq_[un]quiesce_tagset() instead of set->tag_list_lock | 1 | mkhalfella@purestorage.com | finished in 4h0m0s |
| 2025-12-02 01:34 UTC | nvme-tcp: Support receiving KeyUpdate requests | 6 | alistair23@gmail.com | skipped |
| 2025-12-01 21:43 UTC | block: add IOC_PR_READ_KEYS and IOC_PR_READ_RESERVATION ioctls | 3 | stefanha@redhat.com | finished in 3h54m0s |
| 2025-11-27 15:54 UTC | block: add IOC_PR_READ_KEYS and IOC_PR_READ_RESERVATION ioctls | 2 | stefanha@redhat.com | finished in 4h9m0s |
| 2025-11-26 16:35 UTC | block: add IOC_PR_READ_KEYS and IOC_PR_READ_RESERVATION ioctls | 1 | stefanha@redhat.com | finished in 4h0m0s |
| 2025-11-25 06:11 UTC | tcp: use GFP_ATOMIC in tcp_disconnect | 1 | ckulkarnilinux@gmail.com | finished in 3h46m0s |
| 2025-11-24 23:48 UTC | block: ignore __blkdev_issue_discard() ret value | 1 | ckulkarnilinux@gmail.com | finished in 3h53m0s |
| 2025-11-24 02:57 UTC | block: ignore __blkdev_issue_discard() ret value | 1 | ckulkarnilinux@gmail.com | finished in 3h57m0s |
| 2025-11-18 07:42 UTC | block: change __blkdev_issue_discard() return type to void | 1 | ckulkarnilinux@gmail.com | finished in 4h20m0s |
| 2025-11-17 20:23 UTC | nvme: Convert tag_list mutex to rwsemaphore to avoid deadlock | 2 | mkhalfella@purestorage.com | finished in 3h54m0s |
| 2025-11-17 19:22 UTC | block: Generalize physical entry definition | 2 | leon@kernel.org | finished in 34m0s |
| 2025-11-15 16:22 UTC | block: Generalize physical entry definition | 1 | leon@kernel.org | skipped |
| 2025-11-14 09:07 UTC | block: Enable proper MMIO memory handling for P2P DMA | 5 | leon@kernel.org | finished in 4h2m0s |
| 2025-11-13 20:23 UTC | nvme: Convert tag_list mutex to rwsemaphore to avoid deadlock | 1 | mkhalfella@purestorage.com | finished in 4h12m0s |
| 2025-11-12 19:48 UTC | block: Enable proper MMIO memory handling for P2P DMA | 4 | leon@kernel.org | finished in 3h51m0s |
| 2025-11-12 04:27 UTC | nvme-tcp: Support receiving KeyUpdate requests | 5 | alistair23@gmail.com | skipped |
| 2025-11-11 02:06 UTC | lib/group_cpus: make group CPU cluster aware | 1 | wangyang.guo@intel.com | finished in 3h41m0s |
| 2025-10-31 20:34 UTC | io_uring/uring_cmd: avoid double indirect call in task work dispatch | 4 | csander@purestorage.com | finished in 3h44m0s |
| 2025-10-27 07:30 UTC | block: Enable proper MMIO memory handling for P2P DMA | 3 | leon@kernel.org | finished in 3h48m0s |
| 2025-10-27 02:02 UTC | io_uring/uring_cmd: avoid double indirect call in task work dispatch | 3 | csander@purestorage.com | finished in 3h39m0s |
| 2025-10-23 20:18 UTC | io_uring/uring_cmd: avoid double indirect call in task work dispatch | 2 | csander@purestorage.com | finished in 3h39m0s |
| 2025-10-22 23:13 UTC | io_uring/uring_cmd: avoid double indirect call in task work dispatch | 1 | csander@purestorage.com | finished in 59m0s |
| 2025-10-20 17:00 UTC | block: Enable proper MMIO memory handling for P2P DMA | 2 | leon@kernel.org | finished in 3h45m0s |
| 2025-10-17 05:31 UTC | block: Enable proper MMIO memory handling for P2P DMA | 1 | leon@kernel.org | finished in 3h39m0s |
| 2025-10-17 04:23 UTC | nvme-tcp: Support receiving KeyUpdate requests | 4 | alistair23@gmail.com | skipped |
| 2025-10-13 15:34 UTC | Properly take MMIO path | 1 | leon@kernel.org | finished in 3h47m0s |
| 2025-10-07 00:46 UTC | nvme/tcp: handle tls partially sent records in write_space() | 1 | wilfred.opensource@gmail.com | finished in 1h33m0s |
| 2025-10-03 04:31 UTC | nvme-tcp: Support receiving KeyUpdate requests | 3 | alistair23@gmail.com | skipped |
| 2025-09-09 13:27 UTC | dma-mapping: migrate to physical address-based API | 6 | leon@kernel.org | skipped |
| 2025-09-05 14:59 UTC | blk: honor isolcpus configuration | 8 | wagi@kernel.org | finished in 3h43m0s |
| 2025-09-05 02:46 UTC | nvme-tcp: Support receiving KeyUpdate requests | 2 | alistair23@gmail.com | finished in 33m0s [1 findings] |
| 2025-09-02 14:48 UTC | dma-mapping: migrate to physical address-based API | 5 | leon@kernel.org | finished in 20h12m0s |
| 2025-08-20 21:32 UTC | nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays | 3 | xose.vazquez@gmail.com | finished in 41m0s |
| 2025-08-19 17:36 UTC | dma-mapping: migrate to physical address-based API | 4 | leon@kernel.org | finished in 7m0s |
| 2025-08-15 05:02 UTC | nvme-tcp: Support receiving KeyUpdate requests | 1 | alistair23@gmail.com | finished in 4h2m0s |
| 2025-08-14 17:53 UTC | dma-mapping: migrate to physical address-based API | 3 | leon@kernel.org | finished in 3h45m0s |
| 2025-08-14 10:13 UTC | dma-mapping: migrate to physical address-based API | 2 | leon@kernel.org | finished in 3h47m0s |
| 2025-08-03 02:47 UTC | Add new VFIO PCI driver for NVMe devices | 1 | kch@nvidia.com | finished in 1h7m0s |
| 2025-07-31 18:00 UTC | address tls_alert_recv usage by NFS and NvME | 2 | okorniev@redhat.com | finished in 3h41m0s |
| 2025-07-30 20:08 UTC | address tls_alert_recv usage by NFS and NvME | 1 | okorniev@redhat.com | finished in 3h39m0s |
| 2025-07-15 13:27 UTC | nvme-tcp receive offloads | 30 | aaptel@nvidia.com | skipped |
| 2025-06-30 14:07 UTC | nvme-tcp receive offloads | 29 | aaptel@nvidia.com | skipped |