2025/10/14 17:38:48 extracted 329834 text symbol hashes for base and 329840 for patched
2025/10/14 17:38:48 binaries are different, continuing fuzzing
2025/10/14 17:38:48 adding modified_functions to focus areas: ["__access_remote_vm" "__handle_mm_fault" "__p4d_alloc" "__pfx_iommu_sva_invalidate_kva_range" "__pfx_kernel_pgtable_work_func" "__pfx_pagetable_free_kernel" "__pmd_alloc" "__pte_alloc" "__pte_alloc_kernel" "__pud_alloc" "__tlb_remove_table" "__vm_insert_mixed" "_set_memory_uc" "_set_memory_wb" "_set_memory_wc" "_set_memory_wt" "_set_pages_array" "change_page_attr_set_clr" "clear_mce_nospec" "copy_huge_pmd" "copy_page_range" "copy_pmd_range" "copy_remote_vm_str" "do_huge_pmd_anonymous_page" "do_wp_page" "follow_pfnmap_start" "free_pagetable" "insert_page" "iommu_sva_bind_device" "iommu_sva_invalidate_kva_range" "iommu_sva_unbind_device" "kasan_remove_zero_shadow" "kernel_pgtable_work_func" "mm_get_huge_zero_folio" "numa_migrate_check" "p4d_clear_bad" "pagetable_dtor_free" "pagetable_free_kernel" "pgd_alloc" "pgd_clear_bad" "pgd_free" "pmd_clear_bad" "pmd_free_pte_page" "pte_alloc_one" "pte_free" "pte_free_kernel" "pte_free_now" "pud_clear_bad" "pud_free_pmd_page" "remove_device_exclusive_entry" "set_mce_nospec" "set_memory_4k" "set_memory_global" "set_memory_nonglobal" "set_memory_np" "set_memory_np_noalias" "set_memory_nx" "set_memory_p" "set_memory_ro" "set_memory_rox" "set_memory_rw" "set_memory_uc" "set_memory_wb" "set_memory_wc" "set_memory_x" "set_pages_array_wb" "set_pages_ro" "set_pages_rw" "set_pages_wb" "tlb_remove_table_rcu" "try_restore_exclusive_pte" "unmap_huge_pmd_locked" "unmap_page_range" "vm_insert_pages" "vmemmap_pmd_entry" "zap_huge_pmd" "zero_pmd_populate"]
2025/10/14 17:38:48 adding directly modified files to focus areas: ["arch/x86/Kconfig" "arch/x86/mm/init_64.c" "arch/x86/mm/pat/set_memory.c" "arch/x86/mm/pgtable.c" "drivers/iommu/iommu-sva.c" "include/asm-generic/pgalloc.h" "include/linux/iommu.h" "include/linux/mm.h" "mm/Kconfig" "mm/pgtable-generic.c"]
2025/10/14 17:38:48 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2025/10/14 17:39:39 runner 0 connected
2025/10/14 17:39:40 runner 6 connected
2025/10/14 17:39:40 runner 1 connected
2025/10/14 17:39:41 runner 4 connected
2025/10/14 17:39:45 initializing coverage information...
2025/10/14 17:39:47 runner 0 connected
2025/10/14 17:39:47 runner 2 connected
2025/10/14 17:39:47 runner 3 connected
2025/10/14 17:39:47 runner 2 connected
2025/10/14 17:39:47 runner 8 connected
2025/10/14 17:39:48 runner 7 connected
2025/10/14 17:39:48 runner 5 connected
2025/10/14 17:39:48 runner 1 connected
2025/10/14 17:39:49 discovered 7757 source files, 340777 symbols
2025/10/14 17:39:50 coverage filter: __access_remote_vm: [__access_remote_vm]
2025/10/14 17:39:50 coverage filter: __handle_mm_fault: [__handle_mm_fault]
2025/10/14 17:39:50 coverage filter: __p4d_alloc: [__p4d_alloc]
2025/10/14 17:39:50 coverage filter: __pfx_iommu_sva_invalidate_kva_range: []
2025/10/14 17:39:50 coverage filter: __pfx_kernel_pgtable_work_func: []
2025/10/14 17:39:50 coverage filter: __pfx_pagetable_free_kernel: []
2025/10/14 17:39:50 coverage filter: __pmd_alloc: [__pmd_alloc]
2025/10/14 17:39:50 coverage filter: __pte_alloc: [__pte_alloc __pte_alloc_kernel]
2025/10/14 17:39:50 coverage filter: __pte_alloc_kernel: []
2025/10/14 17:39:50 coverage filter: __pud_alloc: [__pud_alloc]
2025/10/14 17:39:50 coverage filter: __tlb_remove_table: [__tlb_remove_table __tlb_remove_table_one_rcu]
2025/10/14 17:39:50 coverage filter: __vm_insert_mixed: [__vm_insert_mixed]
2025/10/14 17:39:50 coverage filter: _set_memory_uc: [_set_memory_uc]
2025/10/14 17:39:50 coverage filter: _set_memory_wb: [_set_memory_wb]
2025/10/14 17:39:50 coverage filter: _set_memory_wc: [_set_memory_wc]
2025/10/14 17:39:50 coverage filter: _set_memory_wt: [_set_memory_wt]
2025/10/14 17:39:50 coverage filter: _set_pages_array: [_set_pages_array]
2025/10/14 17:39:50 coverage filter: change_page_attr_set_clr: [__change_page_attr_set_clr change_page_attr_set_clr]
2025/10/14 17:39:50 coverage filter: clear_mce_nospec: [clear_mce_nospec]
2025/10/14 17:39:50 coverage filter: copy_huge_pmd: [copy_huge_pmd]
2025/10/14 17:39:50 coverage filter: copy_page_range: [copy_page_range]
2025/10/14 17:39:50 coverage filter: copy_pmd_range: [copy_pmd_range]
2025/10/14 17:39:50 coverage filter: copy_remote_vm_str: [copy_remote_vm_str]
2025/10/14 17:39:50 coverage filter: do_huge_pmd_anonymous_page: [do_huge_pmd_anonymous_page]
2025/10/14 17:39:50 coverage filter: do_wp_page: [do_wp_page]
2025/10/14 17:39:50 coverage filter: follow_pfnmap_start: [follow_pfnmap_start]
2025/10/14 17:39:50 coverage filter: free_pagetable: [free_pagetable]
2025/10/14 17:39:50 coverage filter: insert_page: [bxt_vtd_ggtt_insert_page__BKL bxt_vtd_ggtt_insert_page__cb dpt_insert_page gen6_ggtt_insert_page gen8_ggtt_insert_page gen8_ggtt_insert_page_bind gmch_ggtt_insert_page insert_page insert_page_into_pte_locked intel_gmch_gtt_insert_page intel_gmch_gtt_insert_pages null_insert_page vm_insert_page vm_insert_pages vmf_insert_page_mkwrite]
2025/10/14 17:39:50 coverage filter: iommu_sva_bind_device: [iommu_sva_bind_device]
2025/10/14 17:39:50 coverage filter: iommu_sva_invalidate_kva_range: [iommu_sva_invalidate_kva_range]
2025/10/14 17:39:50 coverage filter: iommu_sva_unbind_device: [iommu_sva_unbind_device]
2025/10/14 17:39:50 coverage filter: kasan_remove_zero_shadow: []
2025/10/14 17:39:50 coverage filter: kernel_pgtable_work_func: [kernel_pgtable_work_func]
2025/10/14 17:39:50 coverage filter: mm_get_huge_zero_folio: [mm_get_huge_zero_folio]
2025/10/14 17:39:50 coverage filter: numa_migrate_check: [numa_migrate_check]
2025/10/14 17:39:50 coverage filter: p4d_clear_bad: [p4d_clear_bad]
2025/10/14 17:39:50 coverage filter: pagetable_dtor_free: [pagetable_dtor_free pagetable_dtor_free pagetable_dtor_free]
2025/10/14 17:39:50 coverage filter: pagetable_free_kernel: [pagetable_free_kernel]
2025/10/14 17:39:50 coverage filter: pgd_alloc: [pgd_alloc]
2025/10/14 17:39:50 coverage filter: pgd_clear_bad: [pgd_clear_bad]
2025/10/14 17:39:50 coverage filter: pgd_free: [pgd_free]
2025/10/14 17:39:50 coverage filter: pmd_clear_bad: [pmd_clear_bad]
2025/10/14 17:39:50 coverage filter: pmd_free_pte_page: [pmd_free_pte_page]
2025/10/14 17:39:50 coverage filter: pte_alloc_one: [pte_alloc_one]
2025/10/14 17:39:50 coverage filter: pte_free: [___pte_free_tlb dma_pte_free_level pte_free pte_free_defer pte_free_kernel pte_free_now]
2025/10/14 17:39:50 coverage filter: pte_free_kernel: []
2025/10/14 17:39:50 coverage filter: pte_free_now: []
2025/10/14 17:39:50 coverage filter: pud_clear_bad: [pud_clear_bad]
2025/10/14 17:39:50 coverage filter: pud_free_pmd_page: [pud_free_pmd_page]
2025/10/14 17:39:50 coverage filter: remove_device_exclusive_entry: [remove_device_exclusive_entry]
2025/10/14 17:39:50 coverage filter: set_mce_nospec: [set_mce_nospec]
2025/10/14 17:39:50 coverage filter: set_memory_4k: [set_memory_4k]
2025/10/14 17:39:50 coverage filter: set_memory_global: [set_memory_global]
2025/10/14 17:39:50 coverage filter: set_memory_nonglobal: [set_memory_nonglobal]
2025/10/14 17:39:50 coverage filter: set_memory_np: [set_memory_np set_memory_np_noalias]
2025/10/14 17:39:50 coverage filter: set_memory_np_noalias: []
2025/10/14 17:39:50 coverage filter: set_memory_nx: [set_memory_nx]
2025/10/14 17:39:50 coverage filter: set_memory_p: [__set_memory_prot set_memory_p]
2025/10/14 17:39:50 coverage filter: set_memory_ro: [set_memory_ro set_memory_rox]
2025/10/14 17:39:50 coverage filter: set_memory_rox: []
2025/10/14 17:39:50 coverage filter: set_memory_rw: [set_memory_rw]
2025/10/14 17:39:50 coverage filter: set_memory_uc: [set_memory_uc]
2025/10/14 17:39:50 coverage filter: set_memory_wb: [set_memory_wb]
2025/10/14 17:39:50 coverage filter: set_memory_wc: [set_memory_wc]
2025/10/14 17:39:50 coverage filter: set_memory_x: [set_memory_x]
2025/10/14 17:39:50 coverage filter: set_pages_array_wb: [set_pages_array_wb]
2025/10/14 17:39:50 coverage filter: set_pages_ro: [set_pages_ro]
2025/10/14 17:39:50 coverage filter: set_pages_rw: [set_pages_rw]
2025/10/14 17:39:50 coverage filter: set_pages_wb: [set_pages_wb]
2025/10/14 17:39:50 coverage filter: tlb_remove_table_rcu: [tlb_remove_table_rcu]
2025/10/14 17:39:50 coverage filter: try_restore_exclusive_pte: [try_restore_exclusive_pte]
2025/10/14 17:39:50 coverage filter: unmap_huge_pmd_locked: [unmap_huge_pmd_locked]
2025/10/14 17:39:50 coverage filter: unmap_page_range: [unmap_page_range]
2025/10/14 17:39:50 coverage filter: vm_insert_pages: []
2025/10/14 17:39:50 coverage filter: vmemmap_pmd_entry: [vmemmap_pmd_entry]
2025/10/14 17:39:50 coverage filter: zap_huge_pmd: [zap_huge_pmd]
2025/10/14 17:39:50 coverage filter: zero_pmd_populate: []
2025/10/14 17:39:50 coverage filter: arch/x86/Kconfig: []
2025/10/14 17:39:50 coverage filter: arch/x86/mm/init_64.c: [arch/x86/mm/init_64.c]
2025/10/14 17:39:50 coverage filter: arch/x86/mm/pat/set_memory.c: [arch/x86/mm/pat/set_memory.c]
2025/10/14 17:39:50 coverage filter: arch/x86/mm/pgtable.c: [arch/x86/mm/pgtable.c]
2025/10/14 17:39:50 coverage filter: drivers/iommu/iommu-sva.c: [drivers/iommu/iommu-sva.c]
2025/10/14 17:39:50 coverage filter: include/asm-generic/pgalloc.h: []
2025/10/14 17:39:50 coverage filter: include/linux/iommu.h: []
2025/10/14 17:39:50 coverage filter: include/linux/mm.h: []
2025/10/14 17:39:50 coverage filter: mm/Kconfig: []
2025/10/14 17:39:50 coverage filter: mm/pgtable-generic.c: [mm/pgtable-generic.c]
2025/10/14 17:39:50 area "symbols": 4313 PCs in the cover filter
2025/10/14 17:39:50 area "files": 2053 PCs in the cover filter
2025/10/14 17:39:50 area "": 0 PCs in the cover filter
2025/10/14 17:39:50 executor cover filter: 0 PCs
2025/10/14 17:39:54 executor cover filter: 0 PCs
2025/10/14 17:39:54 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/10/14 17:39:54 new: machine check complete
2025/10/14 17:39:55 new: adding 81571 seeds
2025/10/14 17:39:58 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem 
[openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : 
fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue 
[openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/10/14 17:39:58 base: machine check complete
2025/10/14 17:41:18 base crash: lost connection to test machine
2025/10/14 17:42:09 runner 0 connected
2025/10/14 17:42:15 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 17:42:35 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 17:42:40 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:42:40 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:42:52 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:42:52 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:42:56 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 17:42:59 crash "kernel BUG in jfs_evict_inode" is already known
2025/10/14 17:42:59 base crash "kernel BUG in jfs_evict_inode" is to be ignored
2025/10/14 17:42:59 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 17:43:02 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:43:02 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:43:04 runner 6 connected
2025/10/14 17:43:23 runner 0 connected
2025/10/14 17:43:28 runner 2 connected
2025/10/14 17:43:40 base crash: kernel BUG in jfs_evict_inode
2025/10/14 17:43:41 runner 5 connected
2025/10/14 17:43:45 runner 3 connected
2025/10/14 17:43:46 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:43:46 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:43:47 runner 1 connected
2025/10/14 17:43:48 base crash: kernel BUG in jfs_evict_inode
2025/10/14 17:43:51 STAT { "buffer too small": 0, "candidate triage jobs": 50, "candidates": 77097, "comps overflows": 0, "corpus": 4398, "corpus [files]": 1797, "corpus [symbols]": 1010, "cover overflows": 3183, "coverage": 162610, "distributor delayed": 5409, "distributor undelayed": 5390, "distributor violated": 341, "exec candidate": 4474, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 8348, "exec total [new]": 20003, "exec triage": 14027, "executor restarts [base]": 54, "executor restarts [new]": 94, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 164404, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 4474, "no exec duration": 23212000000, "no exec requests": 321, "pending": 4, "prog exec time": 202, "reproducing": 0, "rpc recv": 1278519216, "rpc sent": 117704760, "signal": 160007, "smash jobs": 0, "triage jobs": 0, "vm output": 2205508, "vm restarts [base]": 4, "vm restarts [new]": 15 }
2025/10/14 17:43:52 runner 7 connected
2025/10/14 17:43:53 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 17:43:53 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 17:43:57 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 17:44:02 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 17:44:04 base crash: kernel BUG in jfs_evict_inode
2025/10/14 17:44:30 runner 1 connected
2025/10/14 17:44:36 runner 4 connected
2025/10/14 17:44:37 runner 2 connected
2025/10/14 17:44:41 runner 8 connected
2025/10/14 17:44:47 runner 6 connected
2025/10/14 17:44:52 runner 5 connected
2025/10/14 17:44:53 runner 0 connected
2025/10/14 17:44:58 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:44:58 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:45:08 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:45:08 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:45:19 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:45:19 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:45:46 runner 3 connected
2025/10/14 17:45:56 runner 2 connected
2025/10/14 17:46:08 runner 0 connected
2025/10/14 17:46:24 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:46:24 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:46:34 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 17:46:34 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 17:46:56 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/10/14 17:46:56 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored
2025/10/14 17:46:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 17:47:09 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/10/14 17:47:09 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored
2025/10/14 17:47:09 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 17:47:12 runner 7 connected
2025/10/14 17:47:20 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/10/14 17:47:20 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored
2025/10/14 17:47:20 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 17:47:24 runner 5 connected
2025/10/14 17:47:34 crash "possible deadlock in ocfs2_init_acl" is already known
2025/10/14 17:47:34 base crash "possible deadlock in ocfs2_init_acl" is to be ignored
2025/10/14 17:47:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/10/14 17:47:46 runner 0 connected
2025/10/14 17:47:55 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 17:47:55 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 17:47:59 runner 4 connected
2025/10/14 17:48:06 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 17:48:06 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 17:48:08 runner 6 connected
2025/10/14 17:48:13 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/10/14 17:48:18 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 17:48:18 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 17:48:24 runner 2 connected
2025/10/14 17:48:34 base crash: possible deadlock in ocfs2_init_acl
2025/10/14 17:48:45 runner 8 connected
2025/10/14 17:48:51 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 72787, "comps overflows": 0, "corpus": 8695, "corpus [files]": 2996, "corpus [symbols]": 1631, "cover overflows": 6088, "coverage": 197605, "distributor delayed": 11852, "distributor undelayed": 11851, "distributor violated": 353, "exec candidate": 8784, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 15433, "exec total [new]": 38585, "exec triage": 27298, "executor
restarts [base]": 73, "executor restarts [new]": 164, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 199088, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 8784, "no exec duration": 23232000000, "no exec requests": 322, "pending": 13, "prog exec time": 306, "reproducing": 0, "rpc recv": 2548525044, "rpc sent": 240486248, "signal": 194864, "smash jobs": 0, "triage jobs": 0, "vm output": 5028149, "vm restarts [base]": 7, "vm restarts [new]": 30 } 2025/10/14 17:48:55 runner 3 connected 2025/10/14 17:49:02 runner 2 connected 2025/10/14 17:49:08 runner 1 connected 2025/10/14 17:49:22 runner 0 connected 2025/10/14 17:50:17 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 17:51:05 runner 1 connected 2025/10/14 17:51:35 base crash: WARNING in xfrm6_tunnel_net_exit 2025/10/14 17:51:48 patched crashed: KASAN: slab-use-after-free Read in tty_write_room [need repro = true] 2025/10/14 17:51:48 scheduled a reproduction of 'KASAN: slab-use-after-free Read in tty_write_room' 2025/10/14 17:51:51 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:51:51 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:52:01 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:52:01 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:52:24 runner 1 connected 2025/10/14 17:52:37 runner 3 connected 2025/10/14 17:52:40 runner 7 connected 2025/10/14 17:52:46 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/10/14 17:52:50 base crash: lost connection to test machine 2025/10/14 17:52:50 runner 5 connected 
2025/10/14 17:53:06 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/10/14 17:53:07 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:53:07 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:53:19 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:53:19 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:53:30 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:53:30 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:53:33 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/10/14 17:53:35 runner 2 connected 2025/10/14 17:53:46 runner 0 connected 2025/10/14 17:53:51 STAT { "buffer too small": 0, "candidate triage jobs": 250, "candidates": 67699, "comps overflows": 0, "corpus": 13527, "corpus [files]": 4225, "corpus [symbols]": 2273, "cover overflows": 9643, "coverage": 222434, "distributor delayed": 18339, "distributor undelayed": 18116, "distributor violated": 600, "exec candidate": 13872, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24382, "exec total [new]": 62023, "exec triage": 42771, "executor restarts [base]": 83, "executor restarts [new]": 201, "fault jobs": 0, "fuzzer jobs": 250, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 224791, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 13872, "no exec duration": 23232000000, "no exec requests": 322, "pending": 19, "prog exec time": 
261, "reproducing": 0, "rpc recv": 3549157776, "rpc sent": 367225256, "signal": 219083, "smash jobs": 0, "triage jobs": 0, "vm output": 7334822, "vm restarts [base]": 11, "vm restarts [new]": 37 } 2025/10/14 17:53:56 runner 1 connected 2025/10/14 17:54:03 runner 8 connected 2025/10/14 17:54:16 runner 4 connected 2025/10/14 17:54:18 runner 0 connected 2025/10/14 17:54:22 runner 6 connected 2025/10/14 17:54:47 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:54:47 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:54:59 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:54:59 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:55:09 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 17:55:09 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 17:55:19 patched crashed: KASAN: use-after-free Read in pmd_clear_huge [need repro = true] 2025/10/14 17:55:19 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_clear_huge' 2025/10/14 17:55:32 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:55:32 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:55:36 runner 3 connected 2025/10/14 17:55:48 runner 5 connected 2025/10/14 17:55:58 runner 7 connected 2025/10/14 17:56:07 runner 1 connected 2025/10/14 17:56:20 runner 6 connected 2025/10/14 17:56:45 base crash: WARNING in xfrm_state_fini 2025/10/14 17:57:10 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 17:57:26 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:57:26 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:57:36 patched crashed: KASAN: use-after-free 
Read in vmap_range_noflush [need repro = true] 2025/10/14 17:57:36 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:57:41 runner 0 connected 2025/10/14 17:57:47 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:57:47 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:57:58 patched crashed: PANIC: double fault in search_extable [need repro = true] 2025/10/14 17:57:58 scheduled a reproduction of 'PANIC: double fault in search_extable' 2025/10/14 17:57:59 runner 0 connected 2025/10/14 17:58:04 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 17:58:04 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 17:58:09 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:58:09 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:58:15 runner 8 connected 2025/10/14 17:58:16 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 17:58:16 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 17:58:25 runner 4 connected 2025/10/14 17:58:28 base crash: unregister_netdevice: waiting for DEV to become free 2025/10/14 17:58:36 runner 5 connected 2025/10/14 17:58:46 runner 3 connected 2025/10/14 17:58:51 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 63400, "comps overflows": 0, "corpus": 18015, "corpus [files]": 5301, "corpus [symbols]": 2809, "cover overflows": 12638, "coverage": 240006, "distributor delayed": 24174, "distributor undelayed": 24174, "distributor violated": 861, "exec candidate": 18171, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 35458, "exec 
total [new]": 82309, "exec triage": 56084, "executor restarts [base]": 93, "executor restarts [new]": 269, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 241703, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 18171, "no exec duration": 23232000000, "no exec requests": 322, "pending": 31, "prog exec time": 212, "reproducing": 0, "rpc recv": 4769855808, "rpc sent": 511888352, "signal": 236580, "smash jobs": 0, "triage jobs": 0, "vm output": 10080300, "vm restarts [base]": 12, "vm restarts [new]": 52 } 2025/10/14 17:58:53 runner 7 connected 2025/10/14 17:58:58 runner 6 connected 2025/10/14 17:59:05 runner 2 connected 2025/10/14 17:59:17 runner 1 connected 2025/10/14 18:00:02 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:00:02 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:00:08 patched crashed: KASAN: use-after-free Read in pmd_clear_huge [need repro = true] 2025/10/14 18:00:08 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_clear_huge' 2025/10/14 18:00:13 patched crashed: KASAN: use-after-free Read in pmd_clear_huge [need repro = true] 2025/10/14 18:00:13 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_clear_huge' 2025/10/14 18:00:19 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:00:19 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:00:23 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:00:23 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:00:59 runner 8 connected 2025/10/14 18:01:04 patched crashed: KASAN: 
use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:01:04 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:01:05 runner 0 connected 2025/10/14 18:01:09 runner 5 connected 2025/10/14 18:01:11 runner 4 connected 2025/10/14 18:01:14 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:01:14 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:01:16 runner 7 connected 2025/10/14 18:01:39 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 18:01:53 runner 1 connected 2025/10/14 18:02:03 runner 3 connected 2025/10/14 18:02:13 base crash: lost connection to test machine 2025/10/14 18:02:29 runner 5 connected 2025/10/14 18:03:02 runner 2 connected 2025/10/14 18:03:51 STAT { "buffer too small": 0, "candidate triage jobs": 54, "candidates": 58453, "comps overflows": 0, "corpus": 22895, "corpus [files]": 6436, "corpus [symbols]": 3369, "cover overflows": 15862, "coverage": 254980, "distributor delayed": 29924, "distributor undelayed": 29923, "distributor violated": 894, "exec candidate": 23118, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 1, "exec seeds": 0, "exec smash": 0, "exec total [base]": 47246, "exec total [new]": 106124, "exec triage": 71074, "executor restarts [base]": 105, "executor restarts [new]": 341, "fault jobs": 0, "fuzzer jobs": 54, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 256699, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 23118, "no exec duration": 23508000000, "no exec requests": 325, "pending": 38, "prog exec time": 328, "reproducing": 0, "rpc recv": 5968841276, "rpc sent": 664447824, 
"signal": 251445, "smash jobs": 0, "triage jobs": 0, "vm output": 13388124, "vm restarts [base]": 14, "vm restarts [new]": 63 } 2025/10/14 18:04:25 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/10/14 18:04:38 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:04:38 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:04:48 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:04:48 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:05:14 runner 3 connected 2025/10/14 18:05:27 runner 6 connected 2025/10/14 18:05:35 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:05:35 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:05:37 runner 2 connected 2025/10/14 18:05:41 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:05:41 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:05:45 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:05:45 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:05:56 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:05:56 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:06:24 runner 7 connected 2025/10/14 18:06:31 runner 8 connected 2025/10/14 18:06:34 runner 0 connected 2025/10/14 18:06:43 runner 1 connected 2025/10/14 18:07:29 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:07:29 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:07:32 base crash: kernel BUG in txUnlock 2025/10/14 18:07:39 patched crashed: KASAN: 
use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:07:39 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:07:50 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:07:50 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:07:51 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:07:51 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:08:02 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:08:02 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:08:12 patched crashed: KASAN: use-after-free Read in pmd_clear_huge [need repro = true] 2025/10/14 18:08:12 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_clear_huge' 2025/10/14 18:08:19 runner 3 connected 2025/10/14 18:08:21 runner 2 connected 2025/10/14 18:08:28 runner 4 connected 2025/10/14 18:08:40 runner 2 connected 2025/10/14 18:08:41 runner 6 connected 2025/10/14 18:08:49 runner 7 connected 2025/10/14 18:08:51 STAT { "buffer too small": 0, "candidate triage jobs": 70, "candidates": 53887, "comps overflows": 0, "corpus": 27394, "corpus [files]": 7463, "corpus [symbols]": 3882, "cover overflows": 18553, "coverage": 266611, "distributor delayed": 35717, "distributor undelayed": 35683, "distributor violated": 1054, "exec candidate": 27684, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 59352, "exec total [new]": 129558, "exec triage": 84996, "executor restarts [base]": 121, "executor restarts [new]": 408, "fault jobs": 0, "fuzzer jobs": 70, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 268509, "minimize: array": 0, "minimize: 
buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 27684, "no exec duration": 24103000000, "no exec requests": 327, "pending": 50, "prog exec time": 128, "reproducing": 0, "rpc recv": 7074591384, "rpc sent": 808185576, "signal": 262857, "smash jobs": 0, "triage jobs": 0, "vm output": 16605321, "vm restarts [base]": 15, "vm restarts [new]": 75 } 2025/10/14 18:08:53 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:08:53 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:09:02 runner 5 connected 2025/10/14 18:09:04 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 18:09:24 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 18:09:35 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:09:35 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:09:44 runner 0 connected 2025/10/14 18:09:46 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:09:46 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:09:52 runner 3 connected 2025/10/14 18:10:12 runner 7 connected 2025/10/14 18:10:15 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:10:15 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:10:32 runner 1 connected 2025/10/14 18:10:42 runner 6 connected 2025/10/14 18:11:04 runner 2 connected 2025/10/14 18:12:04 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:12:04 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:12:53 runner 4 connected 
2025/10/14 18:13:05 patched crashed: KASAN: use-after-free Read in __vmap_pages_range_noflush [need repro = true] 2025/10/14 18:13:05 scheduled a reproduction of 'KASAN: use-after-free Read in __vmap_pages_range_noflush' 2025/10/14 18:13:21 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:13:21 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:13:31 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:13:31 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:13:36 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:13:36 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:13:42 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:13:42 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:13:51 STAT { "buffer too small": 0, "candidate triage jobs": 73, "candidates": 49040, "comps overflows": 0, "corpus": 32181, "corpus [files]": 8529, "corpus [symbols]": 4442, "cover overflows": 21798, "coverage": 277065, "distributor delayed": 41361, "distributor undelayed": 41306, "distributor violated": 1059, "exec candidate": 32531, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 75110, "exec total [new]": 155948, "exec triage": 99834, "executor restarts [base]": 129, "executor restarts [new]": 461, "fault jobs": 0, "fuzzer jobs": 73, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 279010, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 
1, "modules [new]": 1, "new inputs": 32531, "no exec duration": 24141000000, "no exec requests": 330, "pending": 60, "prog exec time": 254, "reproducing": 0, "rpc recv": 8119847292, "rpc sent": 961487088, "signal": 273085, "smash jobs": 0, "triage jobs": 0, "vm output": 19818947, "vm restarts [base]": 15, "vm restarts [new]": 83 } 2025/10/14 18:13:53 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:13:53 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:13:53 runner 1 connected 2025/10/14 18:14:09 runner 3 connected 2025/10/14 18:14:14 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:14:14 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:14:21 runner 5 connected 2025/10/14 18:14:24 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:14:24 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:14:25 runner 6 connected 2025/10/14 18:14:30 runner 2 connected 2025/10/14 18:14:42 runner 0 connected 2025/10/14 18:15:03 runner 8 connected 2025/10/14 18:15:14 runner 4 connected 2025/10/14 18:16:35 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/10/14 18:16:35 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/10/14 18:16:35 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/10/14 18:17:21 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:17:21 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:17:24 runner 0 connected 2025/10/14 18:17:25 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 18:17:25 scheduled a reproduction of 
'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 18:17:31 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:17:31 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:17:33 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 18:17:34 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 18:17:34 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 18:17:36 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:17:36 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:18:10 runner 8 connected 2025/10/14 18:18:12 runner 4 connected 2025/10/14 18:18:19 runner 5 connected 2025/10/14 18:18:22 runner 3 connected 2025/10/14 18:18:23 runner 2 connected 2025/10/14 18:18:23 runner 1 connected 2025/10/14 18:18:24 runner 7 connected 2025/10/14 18:18:42 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:18:42 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 18:18:51 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 45770, "comps overflows": 0, "corpus": 35433, "corpus [files]": 9241, "corpus [symbols]": 4750, "cover overflows": 23721, "coverage": 284022, "distributor delayed": 45597, "distributor undelayed": 45592, "distributor violated": 1128, "exec candidate": 35801, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 89704, "exec total [new]": 173368, "exec triage": 109758, "executor restarts [base]": 141, "executor restarts [new]": 538, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 285989, "minimize: array": 0, "minimize: buffer": 0, 
"minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 35801, "no exec duration": 24208000000, "no exec requests": 335, "pending": 68, "prog exec time": 267, "reproducing": 0, "rpc recv": 9284321116, "rpc sent": 1118031104, "signal": 279963, "smash jobs": 0, "triage jobs": 0, "vm output": 22820218, "vm restarts [base]": 15, "vm restarts [new]": 99 } 2025/10/14 18:18:52 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63461: connect: connection refused 2025/10/14 18:18:52 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63461: connect: connection refused 2025/10/14 18:18:54 patched crashed: PANIC: double fault in corrupted [need repro = true] 2025/10/14 18:18:54 scheduled a reproduction of 'PANIC: double fault in corrupted' 2025/10/14 18:18:57 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19217: connect: connection refused 2025/10/14 18:18:57 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19217: connect: connection refused 2025/10/14 18:19:02 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 18:19:07 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 18:19:11 base crash: kernel BUG in txUnlock 2025/10/14 18:19:11 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25675: connect: connection refused 2025/10/14 18:19:11 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25675: connect: connection refused 2025/10/14 18:19:21 base crash: lost connection to test machine 2025/10/14 18:19:30 runner 0 connected 2025/10/14 18:19:35 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 18:19:35 scheduled a reproduction of 'KASAN: use-after-free Read in 
vmap_range_noflush'
2025/10/14 18:19:42 runner 4 connected
2025/10/14 18:19:44 patched crashed: no output from test machine [need repro = false]
2025/10/14 18:19:50 runner 5 connected
2025/10/14 18:19:56 runner 8 connected
2025/10/14 18:20:00 runner 2 connected
2025/10/14 18:20:11 runner 1 connected
2025/10/14 18:20:23 runner 3 connected
2025/10/14 18:20:27 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/10/14 18:20:34 runner 6 connected
2025/10/14 18:21:00 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:21:00 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:21:06 base crash: WARNING in xfrm6_tunnel_net_exit
2025/10/14 18:21:10 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:21:10 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:21:16 runner 2 connected
2025/10/14 18:21:21 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:21:21 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:21:31 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:21:31 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:21:43 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:21:43 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:21:48 runner 7 connected
2025/10/14 18:21:54 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:21:54 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:21:55 runner 0 connected
2025/10/14 18:22:00 runner 5 connected
2025/10/14 18:22:09 runner 0 connected
2025/10/14 18:22:14 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:22:14 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:22:20 runner 1 connected
2025/10/14 18:22:33 runner 8 connected
2025/10/14 18:22:43 runner 4 connected
2025/10/14 18:22:50 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:22:50 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:22:56 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:22:56 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:23:04 runner 7 connected
2025/10/14 18:23:06 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:23:06 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:23:34 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:23:34 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:23:39 runner 5 connected
2025/10/14 18:23:42 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:23:42 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:23:44 runner 3 connected
2025/10/14 18:23:51 STAT { "buffer too small": 0, "candidate triage jobs": 31, "candidates": 42680, "comps overflows": 0, "corpus": 38500, "corpus [files]": 9908, "corpus [symbols]": 5074, "cover overflows": 25749, "coverage": 289946, "distributor delayed": 50410, "distributor undelayed": 50410, "distributor violated": 1232, "exec candidate": 38891, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 97899, "exec total [new]": 191008, "exec triage": 119159, "executor restarts [base]": 172, "executor restarts [new]": 627, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 291866, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38891, "no exec duration": 24296000000, "no exec requests": 336, "pending": 82, "prog exec time": 264, "reproducing": 0, "rpc recv": 10400464456, "rpc sent": 1246225056, "signal": 285901, "smash jobs": 0, "triage jobs": 0, "vm output": 25642269, "vm restarts [base]": 18, "vm restarts [new]": 115 }
2025/10/14 18:23:55 runner 6 connected
2025/10/14 18:24:02 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:24:02 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:24:21 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:24:21 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:24:22 runner 8 connected
2025/10/14 18:24:31 runner 0 connected
2025/10/14 18:24:51 runner 1 connected
2025/10/14 18:25:18 runner 4 connected
2025/10/14 18:25:27 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:25:27 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:25:31 crash "general protection fault in pcl818_ai_cancel" is already known
2025/10/14 18:25:31 base crash "general protection fault in pcl818_ai_cancel" is to be ignored
2025/10/14 18:25:31 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 18:25:35 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:25:35 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:25:42 crash "general protection fault in pcl818_ai_cancel" is already known
2025/10/14 18:25:42 base crash "general protection fault in pcl818_ai_cancel" is to be ignored
2025/10/14 18:25:42 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 18:25:45 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:25:45 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:25:53 crash "general protection fault in pcl818_ai_cancel" is already known
2025/10/14 18:25:53 base crash "general protection fault in pcl818_ai_cancel" is to be ignored
2025/10/14 18:25:53 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 18:26:16 runner 6 connected
2025/10/14 18:26:20 runner 5 connected
2025/10/14 18:26:25 runner 2 connected
2025/10/14 18:26:27 base crash: general protection fault in pcl818_ai_cancel
2025/10/14 18:26:31 runner 1 connected
2025/10/14 18:26:35 runner 3 connected
2025/10/14 18:26:42 runner 8 connected
2025/10/14 18:26:52 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:26:52 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:27:03 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:27:03 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:27:14 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:27:14 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:27:16 runner 0 connected
2025/10/14 18:27:42 runner 0 connected
2025/10/14 18:27:54 runner 5 connected
2025/10/14 18:28:03 runner 4 connected
2025/10/14 18:28:23 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/10/14 18:28:51 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 39479, "comps overflows": 0, "corpus": 41658, "corpus [files]": 10613, "corpus [symbols]": 5416, "cover overflows": 27891, "coverage": 295717, "distributor delayed": 54955, "distributor undelayed": 54955, "distributor violated": 1579, "exec candidate": 42092, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 109392, "exec total [new]": 210007, "exec triage": 128860, "executor restarts [base]": 183, "executor restarts [new]": 685, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 297704, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42092, "no exec duration": 24316000000, "no exec requests": 338, "pending": 90, "prog exec time": 211, "reproducing": 0, "rpc recv": 11448325292, "rpc sent": 1379767416, "signal": 291668, "smash jobs": 0, "triage jobs": 0, "vm output": 28417106, "vm restarts [base]": 19, "vm restarts [new]": 129 }
2025/10/14 18:29:12 runner 8 connected
2025/10/14 18:29:50 base crash: WARNING in xfrm6_tunnel_net_exit
2025/10/14 18:30:17 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:30:17 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:30:18 crash "INFO: task hung in crda_timeout_work" is already known
2025/10/14 18:30:18 base crash "INFO: task hung in crda_timeout_work" is to be ignored
2025/10/14 18:30:18 patched crashed: INFO: task hung in crda_timeout_work [need repro = false]
2025/10/14 18:30:27 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:30:27 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:30:47 runner 0 connected
2025/10/14 18:31:06 runner 2 connected
2025/10/14 18:31:07 runner 3 connected
2025/10/14 18:31:07 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/10/14 18:31:17 runner 1 connected
2025/10/14 18:32:04 runner 0 connected
2025/10/14 18:32:24 patched crashed: no output from test machine [need repro = false]
2025/10/14 18:32:26 patched crashed: no output from test machine [need repro = false]
2025/10/14 18:32:47 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known
2025/10/14 18:32:47 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored
2025/10/14 18:32:47 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false]
2025/10/14 18:32:52 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:32:52 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:32:55 base crash: INFO: task hung in corrupted
2025/10/14 18:32:58 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known
2025/10/14 18:32:58 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored
2025/10/14 18:32:58 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false]
2025/10/14 18:33:14 runner 6 connected
2025/10/14 18:33:15 runner 7 connected
2025/10/14 18:33:36 runner 1 connected
2025/10/14 18:33:40 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:33:40 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:33:41 runner 4 connected
2025/10/14 18:33:44 runner 2 connected
2025/10/14 18:33:46 runner 8 connected
2025/10/14 18:33:51 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 37817, "comps overflows": 0, "corpus": 43305, "corpus [files]": 10991, "corpus [symbols]": 5589, "cover overflows": 30258, "coverage": 298756, "distributor delayed": 57434, "distributor undelayed": 57434, "distributor violated": 1590, "exec candidate": 43754, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 114157, "exec total [new]": 225960, "exec triage": 134016, "executor restarts [base]": 192, "executor restarts [new]": 733, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 300775, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43754, "no exec duration": 24609000000, "no exec requests": 342, "pending": 94, "prog exec time": 251, "reproducing": 0, "rpc recv": 12094922512, "rpc sent": 1493042432, "signal": 294703, "smash jobs": 0, "triage jobs": 0, "vm output": 31281598, "vm restarts [base]": 21, "vm restarts [new]": 139 }
2025/10/14 18:33:54 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:33:54 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:34:30 runner 0 connected
2025/10/14 18:34:44 runner 5 connected
2025/10/14 18:34:50 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 18:35:23 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:35:23 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:35:34 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:35:34 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:35:39 runner 1 connected
2025/10/14 18:35:46 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:35:46 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:36:12 runner 3 connected
2025/10/14 18:36:22 runner 4 connected
2025/10/14 18:36:35 runner 0 connected
2025/10/14 18:36:44 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:36:44 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:36:48 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:36:48 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:36:50 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:36:50 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:36:54 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:36:54 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:37:35 runner 7 connected
2025/10/14 18:37:36 runner 1 connected
2025/10/14 18:37:39 runner 8 connected
2025/10/14 18:37:44 runner 2 connected
2025/10/14 18:37:59 base crash: INFO: task hung in corrupted
2025/10/14 18:38:01 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:38:01 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:38:44 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:38:44 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:38:48 runner 1 connected
2025/10/14 18:38:50 runner 3 connected
2025/10/14 18:38:51 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 36526, "comps overflows": 0, "corpus": 44545, "corpus [files]": 11274, "corpus [symbols]": 5730, "cover overflows": 34327, "coverage": 301303, "distributor delayed": 58807, "distributor undelayed": 58807, "distributor violated": 1592, "exec candidate": 45045, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 121674, "exec total [new]": 248974, "exec triage": 138076, "executor restarts [base]": 201, "executor restarts [new]": 796, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 303446, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45045, "no exec duration": 30796000000, "no exec requests": 350, "pending": 104, "prog exec time": 247, "reproducing": 0, "rpc recv": 12761281724, "rpc sent": 1632960704, "signal": 297250, "smash jobs": 0, "triage jobs": 0, "vm output": 34125604, "vm restarts [base]": 22, "vm restarts [new]": 150 }
2025/10/14 18:39:33 runner 1 connected
2025/10/14 18:39:37 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:14544: connect: connection refused
2025/10/14 18:39:37 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:14544: connect: connection refused
2025/10/14 18:39:47 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 18:39:57 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:39:57 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:40:25 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:40:25 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:40:29 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 18:40:36 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:40:36 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:40:44 runner 7 connected
2025/10/14 18:40:46 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:40:46 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:40:53 runner 6 connected
2025/10/14 18:41:13 runner 0 connected
2025/10/14 18:41:19 runner 5 connected
2025/10/14 18:41:24 runner 4 connected
2025/10/14 18:41:34 runner 3 connected
2025/10/14 18:41:58 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:41:58 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:42:28 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:42:28 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:42:31 patched crashed: KASAN: use-after-free Write in pmd_set_huge [need repro = true]
2025/10/14 18:42:31 scheduled a reproduction of 'KASAN: use-after-free Write in pmd_set_huge'
2025/10/14 18:42:38 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:42:38 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:42:45 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:42:45 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:42:47 runner 1 connected
2025/10/14 18:42:49 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:42:49 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:43:07 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:43:07 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:43:18 runner 0 connected
2025/10/14 18:43:20 runner 3 connected
2025/10/14 18:43:27 runner 8 connected
2025/10/14 18:43:34 runner 6 connected
2025/10/14 18:43:39 runner 5 connected
2025/10/14 18:43:51 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 35607, "comps overflows": 0, "corpus": 45378, "corpus [files]": 11463, "corpus [symbols]": 5829, "cover overflows": 37564, "coverage": 303059, "distributor delayed": 59921, "distributor undelayed": 59921, "distributor violated": 1598, "exec candidate": 45964, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 133285, "exec total [new]": 267683, "exec triage": 140939, "executor restarts [base]": 214, "executor restarts [new]": 885, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 305299, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45945, "no exec duration": 30796000000, "no exec requests": 350, "pending": 115, "prog exec time": 235, "reproducing": 0, "rpc recv": 13607790592, "rpc sent": 1794129664, "signal": 299011, "smash jobs": 0, "triage jobs": 0, "vm output": 36878948, "vm restarts [base]": 22, "vm restarts [new]": 163 }
2025/10/14 18:43:52 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:43:52 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:43:56 runner 7 connected
2025/10/14 18:44:15 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:44:15 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:44:26 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:44:26 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:44:42 runner 4 connected
2025/10/14 18:44:46 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:44:46 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:45:06 runner 1 connected
2025/10/14 18:45:16 runner 5 connected
2025/10/14 18:45:26 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 18:45:30 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:45:30 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:45:32 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:45:32 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:45:35 runner 0 connected
2025/10/14 18:45:39 base crash: kernel BUG in txUnlock
2025/10/14 18:46:15 runner 4 connected
2025/10/14 18:46:19 runner 3 connected
2025/10/14 18:46:20 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:46:20 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:46:20 runner 2 connected
2025/10/14 18:46:23 crash "KASAN: slab-use-after-free Read in l2cap_unregister_user" is already known
2025/10/14 18:46:23 base crash "KASAN: slab-use-after-free Read in l2cap_unregister_user" is to be ignored
2025/10/14 18:46:23 patched crashed: KASAN: slab-use-after-free Read in l2cap_unregister_user [need repro = false]
2025/10/14 18:46:27 runner 0 connected
2025/10/14 18:46:57 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:46:57 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:47:09 runner 6 connected
2025/10/14 18:47:12 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:47:12 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:47:13 runner 8 connected
2025/10/14 18:47:14 patched crashed: BUG: unable to handle kernel paging request in corrupted [need repro = true]
2025/10/14 18:47:14 scheduled a reproduction of 'BUG: unable to handle kernel paging request in corrupted'
2025/10/14 18:47:16 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:47:16 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:47:21 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:47:21 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:47:26 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:47:26 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:47:46 runner 4 connected
2025/10/14 18:48:01 runner 5 connected
2025/10/14 18:48:02 runner 2 connected
2025/10/14 18:48:05 runner 0 connected
2025/10/14 18:48:10 runner 1 connected
2025/10/14 18:48:14 runner 7 connected
2025/10/14 18:48:18 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:48:18 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:48:38 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:48:38 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:48:51 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 35138, "comps overflows": 0, "corpus": 45751, "corpus [files]": 11558, "corpus [symbols]": 5870, "cover overflows": 41236, "coverage": 303733, "distributor delayed": 60492, "distributor undelayed": 60492, "distributor violated": 1604, "exec candidate": 46433, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 145025, "exec total [new]": 287358, "exec triage": 142242, "executor restarts [base]": 229, "executor restarts [new]": 972, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 306011, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46349, "no exec duration": 30819000000, "no exec requests": 352, "pending": 130, "prog exec time": 263, "reproducing": 0, "rpc recv": 14428089508, "rpc sent": 1947919864, "signal": 299666, "smash jobs": 0, "triage jobs": 0, "vm output": 39115312, "vm restarts [base]": 23, "vm restarts [new]": 179 }
2025/10/14 18:49:08 runner 3 connected
2025/10/14 18:49:20 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:49:20 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:49:28 runner 6 connected
2025/10/14 18:50:05 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:50:05 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:50:09 runner 8 connected
2025/10/14 18:50:15 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:50:15 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:50:21 patched crashed: PANIC: double fault in entry_SYSCALL_64_safe_stack [need repro = true]
2025/10/14 18:50:21 scheduled a reproduction of 'PANIC: double fault in entry_SYSCALL_64_safe_stack'
2025/10/14 18:50:32 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:50:32 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:50:49 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:50:49 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:50:54 runner 7 connected
2025/10/14 18:51:05 runner 3 connected
2025/10/14 18:51:05 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13708: connect: connection refused
2025/10/14 18:51:05 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13708: connect: connection refused
2025/10/14 18:51:09 runner 1 connected
2025/10/14 18:51:15 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 18:51:20 runner 5 connected
2025/10/14 18:51:38 runner 0 connected
2025/10/14 18:52:04 runner 2 connected
2025/10/14 18:53:24 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 18:53:24 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:53:24 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:53:34 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:53:34 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:53:42 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:53:42 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:53:51 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 26313, "comps overflows": 0, "corpus": 46101, "corpus [files]": 11649, "corpus [symbols]": 5903, "cover overflows": 45373, "coverage": 304548, "distributor delayed": 60986, "distributor undelayed": 60985, "distributor violated": 1608, "exec candidate": 55258, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159150, "exec total [new]": 309609, "exec triage": 143581, "executor restarts [base]": 233, "executor restarts [new]": 1039, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 306916, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46751, "no exec duration": 30819000000, "no exec requests": 352, "pending": 139, "prog exec time": 243, "reproducing": 0, "rpc recv": 15032862296, "rpc sent": 2097787296, "signal": 300426, "smash jobs": 0, "triage jobs": 0, "vm output": 41998033, "vm restarts [base]": 23, "vm restarts [new]": 188 }
2025/10/14 18:54:20 runner 3 connected
2025/10/14 18:54:21 runner 8 connected
2025/10/14 18:54:31 runner 4 connected
2025/10/14 18:54:39 runner 5 connected
2025/10/14 18:54:42 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 18:55:31 runner 3 connected
2025/10/14 18:56:03 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:56:03 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:56:13 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:56:13 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:56:42 base crash: WARNING in io_ring_exit_work
2025/10/14 18:56:52 runner 6 connected
2025/10/14 18:56:53 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true]
2025/10/14 18:56:53 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:57:03 runner 4 connected
2025/10/14 18:57:31 runner 2 connected
2025/10/14 18:57:42 runner 2 connected
2025/10/14 18:57:50 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 18:57:51 triaged 92.2% of the corpus
2025/10/14 18:57:51 starting bug reproductions
2025/10/14 18:57:51 starting bug reproductions (max 6 VMs, 4 repros)
2025/10/14 18:57:51 start reproducing 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:57:51 start reproducing 'KASAN: use-after-free Read in pmd_set_huge'
2025/10/14 18:57:51 start reproducing 'KASAN: slab-use-after-free Read in tty_write_room'
2025/10/14 18:57:51 start reproducing 'KASAN: use-after-free Read in pmd_clear_huge'
2025/10/14 18:58:38 runner 7 connected
2025/10/14 18:58:51 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 4626, "comps overflows": 0, "corpus": 46247, "corpus [files]": 11685, "corpus [symbols]": 5922, "cover overflows": 49702, "coverage": 304829, "distributor delayed": 61249, "distributor undelayed": 61249, "distributor violated": 1612, "exec candidate": 76945, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 172178, "exec total [new]": 332045, "exec triage": 144311, "executor restarts [base]": 241, "executor restarts [new]": 1093, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 307343, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46951, "no exec duration": 30829000000, "no exec requests": 354, "pending": 137, "prog exec time": 198, "reproducing": 4, "rpc recv": 15618842080, "rpc sent": 2239677576, "signal": 300682, "smash jobs": 0, "triage jobs": 0, "vm output": 44085665, "vm restarts [base]": 24, "vm restarts [new]": 197 }
2025/10/14 18:58:59 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 18:58:59 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 18:59:05 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 18:59:05 reproducing crash 'KASAN: use-after-free Read in vmap_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 18:59:06 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 18:59:47 runner 8 connected
2025/10/14 19:01:38 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 19:01:38 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 19:01:49 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 19:01:49 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 19:01:54 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:02:03 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true]
2025/10/14 19:02:03 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush'
2025/10/14 19:02:23 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:02:26 runner 6 connected
2025/10/14 19:02:30 runner 7 connected
2025/10/14 19:02:44 runner 8 connected
2025/10/14 19:02:55 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:02:56 reproducing crash 'KASAN: use-after-free Read in vmap_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:02:59 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:03:33 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:61671: connect: connection refused
2025/10/14 19:03:33 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:61671: connect: connection refused
2025/10/14 19:03:34 reproducing crash 'KASAN: use-after-free Read in vmap_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:03:37 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:03:43 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 19:03:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 14, "corpus": 46271, "corpus [files]": 11688, "corpus [symbols]": 5925, "cover overflows": 50979, "coverage": 304871, "distributor delayed": 61326, "distributor undelayed": 61319, "distributor violated": 1612, "exec candidate": 81571, "exec collide": 224, "exec fuzz": 435, "exec gen": 26, "exec hints": 69, "exec inject": 0, "exec minimize": 261, "exec retries": 20, "exec seeds": 33, "exec smash": 174, "exec total [base]": 181288, "exec total [new]": 338075, "exec triage": 144484, "executor restarts [base]": 253, "executor restarts [new]": 1115, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 1, "max signal": 307482, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 125, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47003, "no exec duration": 258249000000, "no exec requests": 1060, "pending": 141, "prog exec time": 312, "reproducing": 4, "rpc recv": 15956154452, "rpc sent": 2319160672, "signal": 300724, "smash jobs": 8, "triage jobs": 10, "vm output": 45571989, "vm restarts [base]": 24, "vm restarts [new]": 201 }
2025/10/14 19:04:09 reproducing crash 'KASAN: use-after-free Read in vmap_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:04:10 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:04:10 repro finished 'KASAN: use-after-free Read in pmd_clear_huge', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false
2025/10/14 19:04:10 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: use-after-free Read in pmd_clear_huge", reliability: 1), took 6.31 minutes
2025/10/14 19:04:10 start reproducing 'PANIC: double fault in search_extable'
2025/10/14 19:04:10 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760468650.crash.log
2025/10/14 19:04:10 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760468650.repro.log
2025/10/14 19:04:21 base crash: WARNING in io_ring_exit_work
2025/10/14 19:04:23 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 19:04:23 repro finished 'KASAN: use-after-free Read in pmd_set_huge', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false
2025/10/14 19:04:23 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: use-after-free Read in pmd_set_huge", reliability: 1), took 6.53 minutes
2025/10/14 19:04:23 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760468663.crash.log
2025/10/14 19:04:23 start reproducing 'KASAN: use-after-free Read in __vmap_pages_range_noflush'
2025/10/14 19:04:23 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760468663.repro.log
2025/10/14 19:04:31 runner 7 connected
2025/10/14 19:04:39 reproducing crash 'KASAN: use-after-free Read in
vmap_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:04:39 repro finished 'KASAN: use-after-free Read in vmap_range_noflush', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 19:04:39 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "-SAME-", reliability: 1), took 6.76 minutes 2025/10/14 19:04:39 start reproducing 'PANIC: double fault in corrupted' 2025/10/14 19:04:39 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760468679.crash.log 2025/10/14 19:04:39 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760468679.repro.log 2025/10/14 19:05:13 crash "WARNING in udf_truncate_extents" is already known 2025/10/14 19:05:13 base crash "WARNING in udf_truncate_extents" is to be ignored 2025/10/14 19:05:13 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/10/14 19:05:17 crash "WARNING in udf_truncate_extents" is already known 2025/10/14 19:05:17 base crash "WARNING in udf_truncate_extents" is to be ignored 2025/10/14 19:05:17 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/10/14 19:05:17 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 19:05:17 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 19:06:02 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:06:02 runner 8 connected 2025/10/14 19:06:07 runner 6 connected 2025/10/14 19:06:07 runner 7 connected 2025/10/14 19:06:12 reproducing crash 'PANIC: double fault in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f 
arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:06:12 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: aborting due to context cancelation 2025/10/14 19:06:13 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:06:22 crash "WARNING in udf_truncate_extents" is already known 2025/10/14 19:06:22 base crash "WARNING in udf_truncate_extents" is to be ignored 2025/10/14 19:06:22 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/10/14 19:06:50 runner 0 connected 2025/10/14 19:07:10 crash "WARNING in udf_truncate_extents" is already known 2025/10/14 19:07:10 base crash "WARNING in udf_truncate_extents" is to be ignored 2025/10/14 19:07:10 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/10/14 19:07:11 runner 8 connected 2025/10/14 19:08:01 runner 7 connected 2025/10/14 19:08:04 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:08:05 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:08:19 reproducing crash 'PANIC: double fault in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:08:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 47, "corpus": 46298, "corpus [files]": 11694, "corpus [symbols]": 5931, "cover overflows": 51701, "coverage": 304951, "distributor delayed": 61392, "distributor undelayed": 61392, "distributor violated": 1615, "exec candidate": 81571, "exec collide": 629, "exec fuzz": 1272, "exec gen": 84, "exec hints": 418, "exec inject": 0, "exec minimize": 782, "exec retries": 20, "exec seeds": 105, "exec smash": 666, "exec total 
[base]": 182299, "exec total [new]": 340939, "exec triage": 144608, "executor restarts [base]": 260, "executor restarts [new]": 1147, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 10, "max signal": 307567, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 427, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47045, "no exec duration": 302365000000, "no exec requests": 1189, "pending": 139, "prog exec time": 648, "reproducing": 4, "rpc recv": 16261782088, "rpc sent": 2389343368, "signal": 300802, "smash jobs": 10, "triage jobs": 7, "vm output": 47265635, "vm restarts [base]": 25, "vm restarts [new]": 207 } 2025/10/14 19:08:56 reproducing crash 'PANIC: double fault in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:09:18 base crash: WARNING in udf_truncate_extents 2025/10/14 19:09:39 reproducing crash 'PANIC: double fault in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:09:39 repro finished 'PANIC: double fault in corrupted', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 19:09:39 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "PANIC: double fault in corrupted", reliability: 1), took 4.99 minutes 2025/10/14 19:09:39 start reproducing 'KASAN: use-after-free Write in pmd_set_huge' 2025/10/14 19:09:39 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 
1760468979.crash.log 2025/10/14 19:09:39 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760468979.repro.log 2025/10/14 19:09:55 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:09:55 patched-only: KASAN: use-after-free Read in vmap_range_noflush 2025/10/14 19:09:55 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:10:03 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:10:03 patched-only: KASAN: use-after-free Read in vmap_range_noflush 2025/10/14 19:10:03 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:10:13 reproducing crash 'KASAN: use-after-free Write in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:10:52 runner 1 connected 2025/10/14 19:10:54 runner 0 connected 2025/10/14 19:11:14 reproducing crash 'KASAN: use-after-free Write in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:11:22 base crash: KASAN: slab-use-after-free Read in l2cap_unregister_user 2025/10/14 19:11:39 reproducing crash 'KASAN: use-after-free Write in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:11:45 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:12:11 runner 1 connected 2025/10/14 19:12:16 reproducing crash 'KASAN: use-after-free Write in 
pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:12:16 repro finished 'KASAN: use-after-free Write in pmd_set_huge', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 19:12:16 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: use-after-free Write in pmd_set_huge", reliability: 1), took 2.62 minutes 2025/10/14 19:12:16 start reproducing 'BUG: unable to handle kernel paging request in corrupted' 2025/10/14 19:12:16 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760469136.crash.log 2025/10/14 19:12:16 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760469136.repro.log 2025/10/14 19:12:44 reproducing crash 'BUG: unable to handle kernel paging request in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:13:15 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/10/14 19:13:37 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:13:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 72, "corpus": 46334, "corpus [files]": 11707, "corpus [symbols]": 5940, "cover overflows": 52531, "coverage": 305069, "distributor delayed": 61477, "distributor undelayed": 61477, "distributor violated": 1615, "exec candidate": 81571, "exec collide": 1041, "exec fuzz": 2080, "exec gen": 125, "exec hints": 862, "exec inject": 0, "exec minimize": 1879, "exec retries": 20, "exec seeds": 200, "exec smash": 1384, "exec total [base]": 182938, 
"exec total [new]": 344722, "exec triage": 144776, "executor restarts [base]": 275, "executor restarts [new]": 1177, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 3, "hints jobs": 17, "max signal": 307799, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1188, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47112, "no exec duration": 305365000000, "no exec requests": 1192, "pending": 139, "prog exec time": 621, "reproducing": 4, "rpc recv": 16443691736, "rpc sent": 2456541256, "signal": 300913, "smash jobs": 24, "triage jobs": 9, "vm output": 50817871, "vm restarts [base]": 28, "vm restarts [new]": 207 } 2025/10/14 19:14:04 runner 1 connected 2025/10/14 19:14:08 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:14:30 reproducing crash 'BUG: unable to handle kernel paging request in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:14:58 reproducing crash 'BUG: unable to handle kernel paging request in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:15:26 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 19:15:26 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 19:15:26 reproducing crash 'BUG: unable to handle kernel paging request in corrupted': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f 
arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:15:26 repro finished 'BUG: unable to handle kernel paging request in corrupted', repro=true crepro=false desc='KASAN: use-after-free Read in pmd_set_huge' hub=false from_dashboard=false 2025/10/14 19:15:26 found repro for "KASAN: use-after-free Read in pmd_set_huge" (orig title: "BUG: unable to handle kernel paging request in corrupted", reliability: 1), took 3.17 minutes 2025/10/14 19:15:26 start reproducing 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:15:26 "KASAN: use-after-free Read in pmd_set_huge": saved crash log into 1760469326.crash.log 2025/10/14 19:15:26 "KASAN: use-after-free Read in pmd_set_huge": saved repro log into 1760469326.repro.log 2025/10/14 19:15:31 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:15:31 patched-only: KASAN: use-after-free Read in vmap_range_noflush 2025/10/14 19:15:31 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:15:57 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:15:59 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:16:03 crash "INFO: task hung in lock_metapage" is already known 2025/10/14 19:16:03 base crash "INFO: task hung in lock_metapage" is to be ignored 2025/10/14 19:16:03 patched crashed: INFO: task hung in lock_metapage [need repro = false] 2025/10/14 19:16:13 repro finished 'PANIC: double fault in search_extable', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 19:16:13 failed repro for "PANIC: double fault in search_extable", err=%!s() 2025/10/14 19:16:13 start reproducing 
'PANIC: double fault in entry_SYSCALL_64_safe_stack' 2025/10/14 19:16:13 "PANIC: double fault in search_extable": saved crash log into 1760469373.crash.log 2025/10/14 19:16:13 "PANIC: double fault in search_extable": saved repro log into 1760469373.repro.log 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:13 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:16:14 runner 8 connected 2025/10/14 19:16:49 runner 0 connected 2025/10/14 19:16:52 runner 7 connected 2025/10/14 19:17:19 attempt #0 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash 2025/10/14 19:17:21 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:17:21 patched-only: KASAN: use-after-free Read in vmap_range_noflush 2025/10/14 19:17:21 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:18:00 base crash: lost connection to test machine 2025/10/14 19:18:08 runner 1 connected 2025/10/14 19:18:48 runner 0 connected 2025/10/14 19:18:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 91, "corpus": 
46365, "corpus [files]": 11714, "corpus [symbols]": 5945, "cover overflows": 53252, "coverage": 305140, "distributor delayed": 61550, "distributor undelayed": 61550, "distributor violated": 1615, "exec candidate": 81571, "exec collide": 1485, "exec fuzz": 2889, "exec gen": 162, "exec hints": 1359, "exec inject": 0, "exec minimize": 2707, "exec retries": 20, "exec seeds": 295, "exec smash": 2084, "exec total [base]": 183621, "exec total [new]": 348289, "exec triage": 144933, "executor restarts [base]": 290, "executor restarts [new]": 1209, "fault jobs": 0, "fuzzer jobs": 54, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 22, "max signal": 307915, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1669, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 1, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47163, "no exec duration": 305507000000, "no exec requests": 1194, "pending": 132, "prog exec time": 530, "reproducing": 4, "rpc recv": 16686150564, "rpc sent": 2544030696, "signal": 300987, "smash jobs": 27, "triage jobs": 5, "vm output": 53578730, "vm restarts [base]": 32, "vm restarts [new]": 209 } 2025/10/14 19:19:13 attempt #1 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash 2025/10/14 19:19:33 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 19:20:22 runner 8 connected 2025/10/14 19:20:25 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = false] 2025/10/14 19:21:06 attempt #2 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash 2025/10/14 19:21:06 patched-only: KASAN: use-after-free Read in pmd_set_huge 2025/10/14 19:21:06 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge (full)' 2025/10/14 19:21:15 runner 6 connected 2025/10/14 19:21:55 runner 2 connected 2025/10/14 19:23:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, 
"candidates": 0, "comps overflows": 130, "corpus": 46393, "corpus [files]": 11718, "corpus [symbols]": 5947, "cover overflows": 55105, "coverage": 305213, "distributor delayed": 61624, "distributor undelayed": 61624, "distributor violated": 1615, "exec candidate": 81571, "exec collide": 2175, "exec fuzz": 4121, "exec gen": 219, "exec hints": 2382, "exec inject": 0, "exec minimize": 3579, "exec retries": 21, "exec seeds": 357, "exec smash": 2974, "exec total [base]": 187617, "exec total [new]": 353261, "exec triage": 145076, "executor restarts [base]": 312, "executor restarts [new]": 1232, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 20, "max signal": 308056, "minimize: array": 0, "minimize: buffer": 1, "minimize: call": 2151, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 6, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47214, "no exec duration": 310303000000, "no exec requests": 1206, "pending": 133, "prog exec time": 358, "reproducing": 4, "rpc recv": 17060455688, "rpc sent": 2792633024, "signal": 301056, "smash jobs": 19, "triage jobs": 2, "vm output": 55307703, "vm restarts [base]": 33, "vm restarts [new]": 211 } 2025/10/14 19:25:09 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = false] 2025/10/14 19:25:19 base crash: lost connection to test machine 2025/10/14 19:25:21 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:25:40 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/ipv4/netfilter/ip_tables.c]: fork/exec scripts/get_maintainer.pl: no such file or 
directory 2025/10/14 19:25:57 runner 8 connected 2025/10/14 19:26:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:26:09 runner 0 connected 2025/10/14 19:26:41 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:27:04 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:27:45 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:27:58 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:28:13 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:28:35 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/10/14 19:28:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 131, "corpus": 46426, "corpus [files]": 
11726, "corpus [symbols]": 5955, "cover overflows": 57012, "coverage": 305285, "distributor delayed": 61754, "distributor undelayed": 61744, "distributor violated": 1620, "exec candidate": 81571, "exec collide": 3301, "exec fuzz": 6258, "exec gen": 332, "exec hints": 4763, "exec inject": 0, "exec minimize": 4314, "exec retries": 21, "exec seeds": 454, "exec smash": 3875, "exec total [base]": 194912, "exec total [new]": 361017, "exec triage": 145345, "executor restarts [base]": 325, "executor restarts [new]": 1243, "fault jobs": 0, "fuzzer jobs": 55, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 32, "max signal": 308250, "minimize: array": 0, "minimize: buffer": 1, "minimize: call": 2559, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 7, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47312, "no exec duration": 320687000000, "no exec requests": 1234, "pending": 133, "prog exec time": 280, "reproducing": 4, "rpc recv": 17440044844, "rpc sent": 3057523912, "signal": 301117, "smash jobs": 12, "triage jobs": 11, "vm output": 56668953, "vm restarts [base]": 34, "vm restarts [new]": 212 } 2025/10/14 19:29:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:29:22 runner 8 connected 2025/10/14 19:29:28 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:29:57 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f 
kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:29:57 repro finished 'KASAN: use-after-free Read in vmap_range_noflush (full)', repro=true crepro=true desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 19:29:57 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "-SAME-", reliability: 1), took 14.50 minutes 2025/10/14 19:29:57 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760470197.crash.log 2025/10/14 19:29:57 start reproducing 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 19:29:57 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760470197.repro.log 2025/10/14 19:30:25 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:30:54 patched crashed: KASAN: use-after-free Read in __vmap_pages_range_noflush [need repro = true] 2025/10/14 19:30:54 scheduled a reproduction of 'KASAN: use-after-free Read in __vmap_pages_range_noflush' 2025/10/14 19:31:31 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 19:31:42 runner 6 connected 2025/10/14 19:31:49 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:32:19 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:32:21 runner 8 connected 2025/10/14 19:33:03 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:33:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 19:33:34 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:33:34 repro finished 'KASAN: use-after-free Read in pmd_set_huge', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 19:33:34 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: use-after-free Read in pmd_set_huge", reliability: 1), took 3.61 minutes 2025/10/14 19:33:34 start reproducing 'KASAN: use-after-free Read in pmd_set_huge (full)' 2025/10/14 19:33:34 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760470414.crash.log 2025/10/14 19:33:34 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760470414.repro.log 2025/10/14 19:33:41 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:33:45 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 19:33:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 166, "corpus": 46457, "corpus [files]": 11737, "corpus [symbols]": 5964, "cover overflows": 58393, "coverage": 305327, "distributor delayed": 61851, "distributor undelayed": 61846, "distributor violated": 1626, "exec candidate": 81571, "exec collide": 4008, "exec fuzz": 7657, "exec gen": 403, "exec hints": 6126, "exec inject": 0, "exec minimize": 5014, "exec retries": 22, "exec seeds": 541, "exec smash": 4601, "exec total 
[base]": 201582, "exec total [new]": 366257, "exec triage": 145534, "executor restarts [base]": 329, "executor restarts [new]": 1267, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 20, "max signal": 308372, "minimize: array": 0, "minimize: buffer": 1, "minimize: call": 2915, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 7, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47377, "no exec duration": 325274000000, "no exec requests": 1243, "pending": 132, "prog exec time": 391, "reproducing": 4, "rpc recv": 17818388896, "rpc sent": 3221922016, "signal": 301155, "smash jobs": 8, "triage jobs": 8, "vm output": 58689373, "vm restarts [base]": 34, "vm restarts [new]": 215 } 2025/10/14 19:34:04 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:34:20 runner 8 connected 2025/10/14 19:34:35 runner 6 connected 2025/10/14 19:35:32 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:35:41 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:35:41 patched-only: KASAN: use-after-free Read in vmap_range_noflush 2025/10/14 19:35:45 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 19:35:54 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 19:35:54 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 19:36:31 runner 0 connected 2025/10/14 19:36:33 runner 2 connected 2025/10/14 19:36:43 runner 6 connected 2025/10/14 19:37:11 crash "possible deadlock in ocfs2_setattr" is already known 2025/10/14 19:37:11 base crash "possible 
deadlock in ocfs2_setattr" is to be ignored 2025/10/14 19:37:11 patched crashed: possible deadlock in ocfs2_setattr [need repro = false] 2025/10/14 19:37:19 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/10/14 19:37:22 base crash: WARNING in dbAdjTree 2025/10/14 19:37:23 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:38:00 runner 6 connected 2025/10/14 19:38:07 runner 7 connected 2025/10/14 19:38:10 runner 2 connected 2025/10/14 19:38:36 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f lib/vsprintf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:38:49 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 19:38:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 182, "corpus": 46474, "corpus [files]": 11743, "corpus [symbols]": 5968, "cover overflows": 58916, "coverage": 305358, "distributor delayed": 61906, "distributor undelayed": 61905, "distributor violated": 1629, "exec candidate": 81571, "exec collide": 4476, "exec fuzz": 8555, "exec gen": 456, "exec hints": 7163, "exec inject": 0, "exec minimize": 5265, "exec retries": 22, "exec seeds": 592, "exec smash": 4930, "exec total [base]": 203716, "exec total [new]": 369453, "exec triage": 145640, "executor restarts [base]": 347, "executor restarts [new]": 1299, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 23, "max signal": 308464, "minimize: array": 0, "minimize: buffer": 2, "minimize: call": 3062, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 10, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47411, "no exec duration": 325274000000, "no exec requests": 1243, "pending": 133, 
"prog exec time": 439, "reproducing": 4, "rpc recv": 18190228832, "rpc sent": 3309859104, "signal": 301185, "smash jobs": 10, "triage jobs": 3, "vm output": 61183440, "vm restarts [base]": 37, "vm restarts [new]": 220 } 2025/10/14 19:39:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:39:15 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 19:39:15 patched-only: KASAN: use-after-free Read in vmap_range_noflush 2025/10/14 19:39:15 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:39:24 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:39:36 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 19:39:40 runner 7 connected 2025/10/14 19:39:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:40:05 runner 1 connected 2025/10/14 19:40:09 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:40:24 runner 2 connected 2025/10/14 19:40:25 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:40:44 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:40:54 base crash: lost connection to test machine 2025/10/14 19:41:02 crash "KASAN: slab-use-after-free Read in handle_tx" is already known 2025/10/14 19:41:02 base crash "KASAN: slab-use-after-free Read in handle_tx" is to be ignored 2025/10/14 19:41:02 patched crashed: KASAN: slab-use-after-free Read in handle_tx [need repro = false] 2025/10/14 19:41:11 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:41:27 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:41:42 runner 2 connected 2025/10/14 19:41:47 base crash: lost connection to test machine 2025/10/14 19:41:51 runner 6 connected 2025/10/14 19:42:04 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:42:37 runner 1 connected 2025/10/14 19:43:26 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl 
--git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:43:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 201, "corpus": 46492, "corpus [files]": 11747, "corpus [symbols]": 5969, "cover overflows": 59673, "coverage": 305458, "distributor delayed": 61972, "distributor undelayed": 61972, "distributor violated": 1635, "exec candidate": 81571, "exec collide": 5082, "exec fuzz": 9801, "exec gen": 527, "exec hints": 8524, "exec inject": 0, "exec minimize": 5630, "exec retries": 22, "exec seeds": 647, "exec smash": 5442, "exec total [base]": 207478, "exec total [new]": 373782, "exec triage": 145751, "executor restarts [base]": 367, "executor restarts [new]": 1326, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 18, "max signal": 308613, "minimize: array": 0, "minimize: buffer": 2, "minimize: call": 3283, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 10, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47451, "no exec duration": 325284000000, "no exec requests": 1244, "pending": 134, "prog exec time": 624, "reproducing": 4, "rpc recv": 18568899804, "rpc sent": 3450877984, "signal": 301234, "smash jobs": 6, "triage jobs": 7, "vm output": 64416782, "vm restarts [base]": 41, "vm restarts [new]": 222 } 2025/10/14 19:43:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 19:44:01 crash "possible deadlock in ocfs2_evict_inode" is already known 2025/10/14 19:44:01 base crash "possible deadlock in ocfs2_evict_inode" is to be ignored 2025/10/14 19:44:01 patched crashed: possible deadlock in ocfs2_evict_inode [need repro = false] 2025/10/14 19:44:05 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 19:44:31 base crash: possible deadlock in 
ocfs2_try_remove_refcount_tree 2025/10/14 19:44:46 runner 6 connected 2025/10/14 19:44:51 runner 7 connected 2025/10/14 19:44:53 runner 8 connected 2025/10/14 19:45:20 runner 2 connected 2025/10/14 19:45:31 crash "possible deadlock in ocfs2_del_inode_from_orphan" is already known 2025/10/14 19:45:31 base crash "possible deadlock in ocfs2_del_inode_from_orphan" is to be ignored 2025/10/14 19:45:31 patched crashed: possible deadlock in ocfs2_del_inode_from_orphan [need repro = false] 2025/10/14 19:45:48 base crash: possible deadlock in ocfs2_evict_inode 2025/10/14 19:45:53 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/nf_tables_api.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:46:20 runner 8 connected 2025/10/14 19:46:36 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 19:46:37 runner 1 connected 2025/10/14 19:47:17 repro finished 'KASAN: use-after-free Read in pmd_set_huge (full)', repro=true crepro=true desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 19:47:17 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: use-after-free Read in pmd_set_huge", reliability: 0), took 13.72 minutes 2025/10/14 19:47:17 KASAN: use-after-free Read in vmap_range_noflush: repro is too unreliable, skipping 2025/10/14 19:47:17 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760471237.crash.log 2025/10/14 19:47:17 start reproducing 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 19:47:17 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760471237.repro.log 2025/10/14 19:47:17 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:47:17 reproduction of "KASAN: use-after-free Read 
in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 19:47:24 runner 7 connected 2025/10/14 19:47:42 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:47:51 base crash: lost connection to test machine 2025/10/14 19:48:01 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 19:48:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl
[scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:48:15 base crash: possible deadlock in ocfs2_del_inode_from_orphan 2025/10/14 19:48:40 runner 2 connected 2025/10/14 19:48:50 runner 6 connected 2025/10/14 19:48:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 221, "corpus": 46509, "corpus [files]": 11754, "corpus [symbols]": 5973, "cover overflows": 60222, "coverage": 305484, "distributor delayed": 62026, "distributor undelayed": 62020, "distributor violated": 1635, "exec candidate": 81571, "exec collide": 5389, "exec fuzz": 10411, "exec gen": 561, "exec hints": 9153, "exec inject": 0, "exec minimize": 6074, "exec retries": 22, "exec seeds": 698, "exec smash": 5714, "exec total [base]": 211385, "exec total [new]": 376210, "exec triage": 145832, "executor restarts [base]": 393, "executor restarts [new]": 1351, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 19, "max signal": 308692, "minimize: array": 0, "minimize: buffer": 2, "minimize: call": 3547, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 10, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47485, "no exec duration": 325374000000, "no exec requests": 1245, "pending": 120, "prog exec time": 573, "reproducing": 4, "rpc recv": 19038435132, "rpc sent": 3581347832, "signal": 301256, "smash jobs": 12, "triage jobs": 10, "vm output": 67007242, "vm restarts [base]": 44, "vm restarts [new]": 228 } 2025/10/14 19:48:53 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:49:04 runner 0 connected 2025/10/14 
19:49:39 base crash: lost connection to test machine 2025/10/14 19:50:28 runner 0 connected 2025/10/14 19:53:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 252, "corpus": 46530, "corpus [files]": 11763, "corpus [symbols]": 5977, "cover overflows": 62072, "coverage": 305558, "distributor delayed": 62086, "distributor undelayed": 62086, "distributor violated": 1635, "exec candidate": 81571, "exec collide": 6114, "exec fuzz": 11823, "exec gen": 625, "exec hints": 10711, "exec inject": 0, "exec minimize": 6632, "exec retries": 22, "exec seeds": 762, "exec smash": 6289, "exec total [base]": 215701, "exec total [new]": 381322, "exec triage": 145986, "executor restarts [base]": 418, "executor restarts [new]": 1371, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 20, "max signal": 308823, "minimize: array": 0, "minimize: buffer": 3, "minimize: call": 3873, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 15, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47535, "no exec duration": 329085000000, "no exec requests": 1253, "pending": 120, "prog exec time": 522, "reproducing": 4, "rpc recv": 19388390908, "rpc sent": 3788178096, "signal": 301286, "smash jobs": 6, "triage jobs": 5, "vm output": 70032409, "vm restarts [base]": 46, "vm restarts [new]": 228 } 2025/10/14 19:54:52 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:55:21 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:55:30 crash "kernel BUG in may_open" is already known 2025/10/14 19:55:30 base crash "kernel BUG in may_open" is to be ignored 2025/10/14 19:55:30 patched crashed: kernel BUG in may_open [need repro = false] 2025/10/14 19:55:53 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:56:00 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:56:19 runner 7 connected 2025/10/14 19:56:30 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:56:34 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:56:59 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:57:33 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no 
such file or directory 2025/10/14 19:57:47 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:58:21 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = false] 2025/10/14 19:58:36 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:58:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 276, "corpus": 46544, "corpus [files]": 11766, "corpus [symbols]": 5979, "cover overflows": 64013, "coverage": 305582, "distributor delayed": 62152, "distributor undelayed": 62146, "distributor violated": 1637, "exec candidate": 81571, "exec collide": 7093, "exec fuzz": 13712, "exec gen": 706, "exec hints": 13170, "exec inject": 0, "exec minimize": 7076, "exec retries": 22, "exec seeds": 807, "exec smash": 6736, "exec total [base]": 221999, "exec total [new]": 387804, "exec triage": 146126, "executor restarts [base]": 434, "executor restarts [new]": 1385, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 18, "max signal": 308924, "minimize: array": 0, "minimize: buffer": 3, "minimize: call": 4088, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 17, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47584, "no exec duration": 330134000000, "no exec requests": 1265, "pending": 120, "prog exec time": 319, "reproducing": 4, "rpc recv": 19704688224, "rpc sent": 4014355072, "signal": 301305, "smash jobs": 1, "triage jobs": 7, "vm output": 72464147, "vm restarts [base]": 
46, "vm restarts [new]": 229 } 2025/10/14 19:59:02 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:27863: connect: connection refused 2025/10/14 19:59:02 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:27863: connect: connection refused 2025/10/14 19:59:05 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/fork.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:59:09 runner 8 connected 2025/10/14 19:59:10 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:59:12 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 19:59:50 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 19:59:58 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:00:01 runner 6 connected 2025/10/14 20:00:21 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:01:06 reproducing crash 'KASAN: slab-use-after-free Read in 
tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:01:45 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:02:12 base crash: lost connection to test machine 2025/10/14 20:02:20 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:02:50 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:03:01 runner 1 connected 2025/10/14 20:03:05 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:03:34 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:03:38 repro finished 'PANIC: double fault in entry_SYSCALL_64_safe_stack', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 20:03:38 failed repro for "PANIC: double fault in entry_SYSCALL_64_safe_stack", err=%!s() 2025/10/14 20:03:38 start reproducing 'KASAN: use-after-free Read in pmd_clear_huge' 2025/10/14 20:03:38 "PANIC: double fault in entry_SYSCALL_64_safe_stack": 
saved crash log into 1760472218.crash.log 2025/10/14 20:03:38 "PANIC: double fault in entry_SYSCALL_64_safe_stack": saved repro log into 1760472218.repro.log 2025/10/14 20:03:38 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed 2025/10/14 20:03:38 reproduction of "KASAN: use-after-free Read in pmd_set_huge" aborted: it's no longer needed 2025/10/14 20:03:46 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:03:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 292, "corpus": 46567, "corpus [files]": 11771, "corpus [symbols]": 5982, "cover overflows": 65260, "coverage": 305606, "distributor delayed": 62205, "distributor undelayed": 62203, "distributor violated": 1637, "exec candidate": 81571, "exec collide": 7859, "exec fuzz": 15146, "exec 
gen": 782, "exec hints": 14852, "exec inject": 0, "exec minimize": 7741, "exec retries": 22, "exec seeds": 864, "exec smash": 7271, "exec total [base]": 229454, "exec total [new]": 393129, "exec triage": 146238, "executor restarts [base]": 442, "executor restarts [new]": 1408, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 20, "max signal": 308971, "minimize: array": 0, "minimize: buffer": 3, "minimize: call": 4475, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 17, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47622, "no exec duration": 333333000000, "no exec requests": 1270, "pending": 6, "prog exec time": 674, "reproducing": 4, "rpc recv": 20100115948, "rpc sent": 4233516944, "signal": 301328, "smash jobs": 2, "triage jobs": 4, "vm output": 75342488, "vm restarts [base]": 47, "vm restarts [new]": 231 } 2025/10/14 20:03:54 runner 7 connected 2025/10/14 20:04:04 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:04:21 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:04:33 repro finished 'KASAN: use-after-free Read in __vmap_pages_range_noflush', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 20:04:33 failed repro for "KASAN: use-after-free Read in __vmap_pages_range_noflush", err=%!s() 2025/10/14 20:04:33 "KASAN: use-after-free Read in __vmap_pages_range_noflush": saved crash log into 1760472273.crash.log 2025/10/14 20:04:33 "KASAN: 
use-after-free Read in __vmap_pages_range_noflush": saved repro log into 1760472273.repro.log 2025/10/14 20:04:33 start reproducing 'KASAN: use-after-free Read in __vmap_pages_range_noflush' 2025/10/14 20:04:52 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:05:27 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 20:05:46 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:05:59 patched crashed: KASAN: use-after-free Read in __vmap_pages_range_noflush [need repro = true] 2025/10/14 20:05:59 scheduled a reproduction of 'KASAN: use-after-free Read in __vmap_pages_range_noflush' 2025/10/14 20:06:08 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:06:15 runner 8 connected 2025/10/14 20:06:20 base crash: INFO: rcu detected stall in sys_clone 2025/10/14 20:06:40 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:06:48 runner 7 connected 2025/10/14 20:07:00 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f kernel/trace/bpf_trace.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:07:00 repro finished 'KASAN: use-after-free Read in vmap_range_noflush (full)', repro=true crepro=true desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 20:07:00 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "-SAME-", reliability: 0), took 19.72 minutes 2025/10/14 20:07:00 start reproducing 'KASAN: use-after-free Read in vmap_range_noflush (full)' 2025/10/14 20:07:00 KASAN: use-after-free Read in vmap_range_noflush: repro is too unreliable, skipping 2025/10/14 20:07:00 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760472420.crash.log 2025/10/14 20:07:00 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760472420.repro.log 2025/10/14 20:07:09 runner 0 connected 2025/10/14 20:07:20 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:07:38 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:07:48 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:49099: connect: connection refused 2025/10/14 20:07:48 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:49099: connect: connection refused 2025/10/14 20:07:53 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl 
--git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:07:58 base crash: lost connection to test machine 2025/10/14 20:08:07 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:08:46 runner 0 connected 2025/10/14 20:08:47 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/ipv4/netfilter/ip_tables.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:08:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 300, "corpus": 46585, "corpus [files]": 11776, "corpus [symbols]": 5985, "cover overflows": 66525, "coverage": 305627, "distributor delayed": 62258, "distributor undelayed": 62258, "distributor violated": 1637, "exec candidate": 81571, "exec collide": 8627, "exec fuzz": 16566, "exec gen": 866, "exec hints": 16730, "exec inject": 0, "exec minimize": 8144, "exec retries": 22, "exec seeds": 909, "exec smash": 7621, "exec total [base]": 234833, "exec total [new]": 398210, "exec triage": 146363, "executor restarts [base]": 449, "executor restarts [new]": 1433, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 16, "max signal": 309102, "minimize: array": 0, "minimize: buffer": 3, "minimize: call": 4752, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 17, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47665, "no exec duration": 344019000000, "no exec requests": 1284, "pending": 5, "prog exec time": 506, "reproducing": 4, 
"rpc recv": 20453352976, "rpc sent": 4415485032, "signal": 301347, "smash jobs": 8, "triage jobs": 6, "vm output": 77532671, "vm restarts [base]": 49, "vm restarts [new]": 234 } 2025/10/14 20:09:02 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 20:09:08 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:09:45 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:09:51 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:09:52 runner 8 connected 2025/10/14 20:10:24 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:10:27 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16294: connect: connection refused 2025/10/14 20:10:37 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:10:44 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report:
failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:10:44 repro finished 'KASAN: use-after-free Read in pmd_clear_huge', repro=true crepro=false desc='KASAN: use-after-free Read in pmd_set_huge' hub=false from_dashboard=false 2025/10/14 20:10:44 found repro for "KASAN: use-after-free Read in pmd_set_huge" (orig title: "KASAN: use-after-free Read in pmd_clear_huge", reliability: 1), took 7.10 minutes 2025/10/14 20:10:44 "KASAN: use-after-free Read in pmd_set_huge": saved crash log into 1760472644.crash.log 2025/10/14 20:10:44 "KASAN: use-after-free Read in pmd_set_huge": saved repro log into 1760472644.repro.log 2025/10/14 20:10:44 start reproducing 'KASAN: use-after-free Read in pmd_clear_huge' 2025/10/14 20:11:07 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:11:26 runner 6 connected 2025/10/14 20:11:50 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netlink/af_netlink.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:11:58 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:12:14 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = true] 2025/10/14 20:12:14 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 
20:12:21 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:12:35 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netlink/af_netlink.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:12:37 attempt #0 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash 2025/10/14 20:12:58 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:13:03 runner 7 connected 2025/10/14 20:13:27 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:13:33 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:13:34 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:13:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, 
"candidates": 0, "comps overflows": 305, "corpus": 46613, "corpus [files]": 11779, "corpus [symbols]": 5986, "cover overflows": 67394, "coverage": 305717, "distributor delayed": 62344, "distributor undelayed": 62344, "distributor violated": 1645, "exec candidate": 81571, "exec collide": 9216, "exec fuzz": 17574, "exec gen": 921, "exec hints": 17755, "exec inject": 0, "exec minimize": 8684, "exec retries": 23, "exec seeds": 981, "exec smash": 8173, "exec total [base]": 239821, "exec total [new]": 402200, "exec triage": 146513, "executor restarts [base]": 461, "executor restarts [new]": 1460, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 16, "max signal": 309252, "minimize: array": 1, "minimize: buffer": 3, "minimize: call": 5096, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 20, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47716, "no exec duration": 509911000000, "no exec requests": 1795, "pending": 4, "prog exec time": 630, "reproducing": 4, "rpc recv": 20788886688, "rpc sent": 4529763584, "signal": 301432, "smash jobs": 15, "triage jobs": 6, "vm output": 81216990, "vm restarts [base]": 49, "vm restarts [new]": 237 } 2025/10/14 20:14:25 patched crashed: KASAN: use-after-free Read in __vmap_pages_range_noflush [need repro = true] 2025/10/14 20:14:25 scheduled a reproduction of 'KASAN: use-after-free Read in __vmap_pages_range_noflush' 2025/10/14 20:14:28 attempt #1 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash 2025/10/14 20:14:37 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:14:38 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize 
report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:14:46 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:14:54 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:15:06 runner 8 connected 2025/10/14 20:15:33 reproducing crash 'KASAN: slab-use-after-free Read in tty_write_room': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/hashtab.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:15:33 repro finished 'KASAN: slab-use-after-free Read in tty_write_room', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 20:15:33 start reproducing 'KASAN: use-after-free Read in pmd_set_huge' 2025/10/14 20:15:33 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: slab-use-after-free Read in tty_write_room", reliability: 1), took 77.69 minutes 2025/10/14 20:15:33 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760472933.crash.log 2025/10/14 20:15:33 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760472933.repro.log 2025/10/14 20:15:41 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl 
--git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:15:51 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:16:19 attempt #2 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash 2025/10/14 20:16:19 patched-only: KASAN: use-after-free Read in pmd_set_huge 2025/10/14 20:16:19 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge (full)' 2025/10/14 20:16:34 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:16:46 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:16:46 repro finished 'KASAN: use-after-free Read in pmd_clear_huge', repro=true crepro=false desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false 2025/10/14 20:16:46 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "KASAN: use-after-free Read in pmd_clear_huge", reliability: 1), took 6.02 minutes 2025/10/14 20:16:46 start reproducing 'KASAN: use-after-free Read in pmd_set_huge (full)' 2025/10/14 20:16:46 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760473006.crash.log 2025/10/14 20:16:46 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760473006.repro.log 
2025/10/14 20:16:48 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:16:50 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:17:11 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:17:20 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/10/14 20:17:24 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 20:17:36 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:17:53 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:18:00 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:18:08 runner 7 
connected 2025/10/14 20:18:13 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:18:28 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 20:18:28 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 20:18:40 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = true] 2025/10/14 20:18:40 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush' 2025/10/14 20:18:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 310, "corpus": 46629, "corpus [files]": 11784, "corpus [symbols]": 5989, "cover overflows": 68320, "coverage": 305746, "distributor delayed": 62413, "distributor undelayed": 62411, "distributor violated": 1645, "exec candidate": 81571, "exec collide": 9936, "exec fuzz": 18892, "exec gen": 991, "exec hints": 19287, "exec inject": 0, "exec minimize": 9117, "exec retries": 23, "exec seeds": 1030, "exec smash": 8675, "exec total [base]": 242863, "exec total [new]": 406949, "exec triage": 146642, "executor restarts [base]": 482, "executor restarts [new]": 1519, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 8, "max signal": 309333, "minimize: array": 1, "minimize: buffer": 3, "minimize: call": 5369, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 20, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47762, "no exec duration": 566924000000, "no exec requests": 1955, "pending": 6, "prog exec time": 353, "reproducing": 4, "rpc recv": 20990664980, "rpc sent": 4634255168, "signal": 301453, "smash jobs": 2, "triage jobs": 4, "vm output": 83153961, 
"vm restarts [base]": 49, "vm restarts [new]": 239 } 2025/10/14 20:18:52 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:18:56 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 20:19:07 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:19:09 runner 8 connected 2025/10/14 20:19:17 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash 2025/10/14 20:19:20 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:19:30 runner 6 connected 2025/10/14 20:19:33 patched crashed: KASAN: use-after-free Read in pmd_set_huge [need repro = false] 2025/10/14 20:19:33 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netlink/af_netlink.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:20:04 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 20:20:12 patched crashed: possible deadlock in 
ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 20:20:18 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:20:21 runner 7 connected
2025/10/14 20:20:31 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:20:33 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:20:33 repro finished 'KASAN: use-after-free Read in vmap_range_noflush (full)', repro=true crepro=true desc='KASAN: use-after-free Read in vmap_range_noflush' hub=false from_dashboard=false
2025/10/14 20:20:33 start reproducing 'KASAN: use-after-free Read in pmd_clear_huge'
2025/10/14 20:20:33 found repro for "KASAN: use-after-free Read in vmap_range_noflush" (orig title: "-SAME-", reliability: 1), took 13.54 minutes
2025/10/14 20:20:33 "KASAN: use-after-free Read in vmap_range_noflush": saved crash log into 1760473233.crash.log
2025/10/14 20:20:33 "KASAN: use-after-free Read in vmap_range_noflush": saved repro log into 1760473233.repro.log
2025/10/14 20:20:45 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:20:48 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash
2025/10/14 20:21:02 runner 6 connected
2025/10/14 20:21:08 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash
2025/10/14 20:21:08 patched-only: KASAN: use-after-free Read in vmap_range_noflush
2025/10/14 20:21:08 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)'
2025/10/14 20:21:31 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netlink/af_netlink.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:21:36 runner 0 connected
2025/10/14 20:21:43 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:21:58 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netlink/af_netlink.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:22:07 base crash: lost connection to test machine
2025/10/14 20:22:24 attempt #0 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash
2025/10/14 20:22:45 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:22:45 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:22:55 runner 0 connected
2025/10/14 20:22:58 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash
2025/10/14 20:22:58 patched-only: KASAN: use-after-free Read in vmap_range_noflush
2025/10/14 20:22:58 scheduled a reproduction of 'KASAN: use-after-free Read in vmap_range_noflush (full)'
2025/10/14 20:23:15 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:23:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 320, "corpus": 46646, "corpus [files]": 11789, "corpus [symbols]": 5992, "cover overflows": 68901, "coverage": 305775, "distributor delayed": 62490, "distributor undelayed": 62480, "distributor violated": 1645, "exec candidate": 81571, "exec collide": 10417, "exec fuzz": 19835, "exec gen": 1036, "exec hints": 20274, "exec inject": 0, "exec minimize": 9481, "exec retries": 23, "exec seeds": 1083, "exec smash": 9059, "exec total [base]": 244009, "exec total [new]": 410312, "exec triage": 146742, "executor restarts [base]": 493, "executor restarts [new]": 1550, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 9, "max signal": 309413, "minimize: array": 1, "minimize: buffer": 4, "minimize: call": 5574, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 23, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47805, "no exec duration": 575679000000, "no exec requests": 1966, "pending": 7, "prog exec time": 588, "reproducing": 4, "rpc recv": 21249100744, "rpc sent": 4704543152, "signal": 301479, "smash jobs": 5, "triage jobs": 13, "vm output": 85190849, "vm restarts [base]": 51, "vm restarts [new]": 243 }
2025/10/14 20:23:52 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:23:55 runner 1 connected
2025/10/14 20:24:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 20:24:05 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:24:16 attempt #1 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash
2025/10/14 20:24:22 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:24:43 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:24:48 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:24:52 runner 7 connected
2025/10/14 20:25:30 patched crashed: INFO: task hung in corrupted [need repro = false]
2025/10/14 20:25:34 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/10/14 20:25:34 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:25:37 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:26:00 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:26:09 attempt #2 to run "KASAN: use-after-free Read in vmap_range_noflush" on base: did not crash
2025/10/14 20:26:09 patched-only: KASAN: use-after-free Read in vmap_range_noflush
2025/10/14 20:26:19 runner 8 connected
2025/10/14 20:26:23 runner 1 connected
2025/10/14 20:26:32 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:26:49 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/media/test-drivers/vicodec/vicodec-core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:26:57 runner 2 connected
2025/10/14 20:27:15 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/media/test-drivers/vicodec/vicodec-core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:27:28 base crash: BUG: sleeping function called from invalid context in hook_sb_delete
2025/10/14 20:27:34 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:27:48 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/10/14 20:28:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:28:11 reproducing crash 'KASAN: use-after-free Read in pmd_clear_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:28:11 repro finished 'KASAN: use-after-free Read in pmd_clear_huge', repro=true crepro=false desc='KASAN: use-after-free Read in pmd_set_huge' hub=false from_dashboard=false
2025/10/14 20:28:11 found repro for "KASAN: use-after-free Read in pmd_set_huge" (orig title: "KASAN: use-after-free Read in pmd_clear_huge", reliability: 1), took 7.63 minutes
2025/10/14 20:28:11 start reproducing 'KASAN: use-after-free Read in vmap_range_noflush (full)'
2025/10/14 20:28:11 "KASAN: use-after-free Read in pmd_set_huge": saved crash log into 1760473691.crash.log
2025/10/14 20:28:11 "KASAN: use-after-free Read in pmd_set_huge": saved repro log into 1760473691.repro.log
2025/10/14 20:28:11 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed
2025/10/14 20:28:11 reproduction of "KASAN: use-after-free Read in vmap_range_noflush" aborted: it's no longer needed
2025/10/14 20:28:16 runner 1 connected
2025/10/14 20:28:35 runner 7 connected
2025/10/14 20:28:46 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:28:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 346, "corpus": 46675, "corpus [files]": 11793, "corpus [symbols]": 5993, "cover overflows": 69731, "coverage": 305839, "distributor delayed": 62538, "distributor undelayed": 62538, "distributor violated": 1655, "exec candidate": 81571, "exec collide": 10907, "exec fuzz": 20775, "exec gen": 1078, "exec hints": 21023, "exec inject": 0, "exec minimize": 9992, "exec retries": 23, "exec seeds": 1164, "exec smash": 9695, "exec total [base]": 247029, "exec total [new]": 413891, "exec triage": 146871, "executor restarts [base]": 523, "executor restarts [new]": 1572, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 17, "max signal": 309491, "minimize: array": 1, "minimize: buffer": 4, "minimize: call": 5856, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 24, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47849, "no exec duration": 579532000000, "no exec requests": 1974, "pending": 5, "prog exec time": 404, "reproducing": 4, "rpc recv": 21651938892, "rpc sent": 4815016696, "signal": 301543, "smash jobs": 21, "triage jobs": 8, "vm output": 87390978, "vm restarts [base]": 55, "vm restarts [new]": 246 }
2025/10/14 20:28:58 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = false]
2025/10/14 20:29:09 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = false]
2025/10/14 20:29:18 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:29:20 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = false]
2025/10/14 20:29:43 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:29:46 runner 6 connected
2025/10/14 20:29:50 base crash: WARNING in xfrm6_tunnel_net_exit
2025/10/14 20:29:54 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:29:58 runner 7 connected
2025/10/14 20:30:02 attempt #0 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash
2025/10/14 20:30:09 runner 8 connected
2025/10/14 20:30:39 runner 2 connected
2025/10/14 20:30:41 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:30:58 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:31:01 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 20:31:07 patched crashed: kernel BUG in folio_set_bh [need repro = true]
2025/10/14 20:31:07 scheduled a reproduction of 'kernel BUG in folio_set_bh'
2025/10/14 20:31:18 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:31:45 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:31:49 runner 8 connected
2025/10/14 20:31:52 attempt #1 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash
2025/10/14 20:31:55 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/netfilter/ipset/ip_set_core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:31:56 runner 7 connected
2025/10/14 20:32:01 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:32:01 reproducing crash 'KASAN: use-after-free Read in pmd_set_huge': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/extable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:32:01 repro finished 'KASAN: use-after-free Read in pmd_set_huge', repro=true crepro=false desc='PANIC: double fault in search_extable' hub=false from_dashboard=false
2025/10/14 20:32:01 found repro for "PANIC: double fault in search_extable" (orig title: "KASAN: use-after-free Read in pmd_set_huge", reliability: 1), took 16.46 minutes
2025/10/14 20:32:01 start reproducing 'kernel BUG in folio_set_bh'
2025/10/14 20:32:01 "PANIC: double fault in search_extable": saved crash log into 1760473921.crash.log
2025/10/14 20:32:01 "PANIC: double fault in search_extable": saved repro log into 1760473921.repro.log
2025/10/14 20:32:33 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:32:33 repro finished 'KASAN: use-after-free Read in pmd_set_huge (full)', repro=true crepro=true desc='KASAN: use-after-free Read in pmd_set_huge' hub=false from_dashboard=false
2025/10/14 20:32:33 found repro for "KASAN: use-after-free Read in pmd_set_huge" (orig title: "-SAME-", reliability: 0), took 15.79 minutes
2025/10/14 20:32:33 "KASAN: use-after-free Read in pmd_set_huge": saved crash log into 1760473953.crash.log
2025/10/14 20:32:33 "KASAN: use-after-free Read in pmd_set_huge": saved repro log into 1760473953.repro.log
2025/10/14 20:32:58 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f arch/x86/mm/pgtable.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:33:21 runner 0 connected
2025/10/14 20:33:23 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 20:33:25 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:33:32 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 20:33:37 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:33:38 runner 1 connected
2025/10/14 20:33:43 attempt #2 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash
2025/10/14 20:33:43 patched-only: KASAN: use-after-free Read in pmd_set_huge
2025/10/14 20:33:43 scheduled a reproduction of 'KASAN: use-after-free Read in pmd_set_huge (full)'
2025/10/14 20:33:43 start reproducing 'KASAN: use-after-free Read in pmd_set_huge (full)'
2025/10/14 20:33:43 failed to recv *flatrpc.InfoRequestRawT: unexpected EOF
2025/10/14 20:33:51 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 354, "corpus": 46693, "corpus [files]": 11799, "corpus [symbols]": 5995, "cover overflows": 70459, "coverage": 305863, "distributor delayed": 62579, "distributor undelayed": 62573, "distributor violated": 1655, "exec candidate": 81571, "exec collide": 11330, "exec fuzz": 21503, "exec gen": 1126, "exec hints": 21643, "exec inject": 0, "exec minimize": 10448, "exec retries": 23, "exec seeds": 1207, "exec smash": 10231, "exec total [base]": 249065, "exec total [new]": 416804, "exec triage": 146928, "executor restarts [base]": 535, "executor restarts [new]": 1596, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 1, "hints jobs": 14, "max signal": 309523, "minimize: array": 1, "minimize: buffer": 4, "minimize: call": 6103, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 25, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47873, "no exec duration": 583949000000, "no exec requests": 1982, "pending": 5, "prog exec time": 440, "reproducing": 4, "rpc recv": 21998524860, "rpc sent": 4908701320, "signal": 301564, "smash jobs": 7, "triage jobs": 8, "vm output": 88962354, "vm restarts [base]": 56, "vm restarts [new]": 253 }
2025/10/14 20:33:52 attempt #0 to run "PANIC: double fault in search_extable" on base: did not crash
2025/10/14 20:34:05 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:34:05 runner 8 connected
2025/10/14 20:34:21 runner 7 connected
2025/10/14 20:34:25 attempt #0 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash
2025/10/14 20:34:30 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:34:32 runner 0 connected
2025/10/14 20:34:43 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/media/test-drivers/vicodec/vicodec-core.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:34:46 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:34:51 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:34:54 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 20:35:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 20:35:16 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 20:35:31 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:35:35 runner 8 connected
2025/10/14 20:35:42 attempt #1 to run "PANIC: double fault in search_extable" on base: did not crash
2025/10/14 20:35:48 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/10/14 20:35:49 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:35:51 runner 6 connected
2025/10/14 20:35:52 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:35:57 runner 7 connected
2025/10/14 20:36:16 attempt #1 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash
2025/10/14 20:36:35 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:36:37 runner 0 connected
2025/10/14 20:36:37 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:36:54 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:37:20 patched crashed: KASAN: use-after-free Read in vmap_range_noflush [need repro = false]
2025/10/14 20:37:31 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:37:33 attempt #2 to run "PANIC: double fault in search_extable" on base: did not crash
2025/10/14 20:37:33 patched-only: PANIC: double fault in search_extable
2025/10/14 20:37:33 scheduled a reproduction of 'PANIC: double fault in search_extable (full)'
2025/10/14 20:37:33 base crash: lost connection to test machine
2025/10/14 20:37:38 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:38:08 runner 7 connected
2025/10/14 20:38:08 reproducing crash 'kernel BUG in folio_set_bh': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:38:08 attempt #2 to run "KASAN: use-after-free Read in pmd_set_huge" on base: did not crash
2025/10/14 20:38:16 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/bpf/btf.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 20:38:21 runner 0 connected
2025/10/14 20:38:23 runner 1 connected
2025/10/14 20:38:46 bug reporting terminated
2025/10/14 20:38:46 status reporting terminated
2025/10/14 20:38:46 repro finished 'KASAN: use-after-free Read in vmap_range_noflush (full)', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/10/14 20:38:46 attempt #3 to run "KASAN: use-after-free Read in pmd_set_huge" on base: skipping due to errors: context deadline exceeded /
2025/10/14 20:38:46 base: rpc server terminated
2025/10/14 20:38:46 repro finished 'kernel BUG in folio_set_bh', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/10/14 20:38:50 new: rpc server terminated
2025/10/14 20:38:57 base: pool terminated
2025/10/14 20:38:57 base: kernel context loop terminated
2025/10/14 20:38:58 repro finished 'KASAN: use-after-free Read in pmd_set_huge (full)', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/10/14 20:45:57 reproducing crash 'KASAN: use-after-free Read in __vmap_pages_range_noflush': concatenation step failed with context deadline exceeded
2025/10/14 20:45:57 repro finished 'KASAN: use-after-free Read in __vmap_pages_range_noflush', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/10/14 20:45:57 repro loop terminated
2025/10/14 20:45:58 new: pool terminated
2025/10/14 20:45:58 new: kernel context loop terminated
2025/10/14 20:45:58 diff fuzzing terminated
2025/10/14 20:45:58 fuzzing is finished
2025/10/14 20:45:58 status at the end:
Title    On-Base    On-Patched
KASAN: use-after-free Read in pmd_set_huge    60 crashes[reproduced]
KASAN: use-after-free Read in vmap_range_noflush    89 crashes[reproduced]
PANIC: double fault in search_extable    1 crashes[reproduced]
BUG: sleeping function called from invalid context in hook_sb_delete    2 crashes    2 crashes
BUG: unable to handle kernel paging request in corrupted    1 crashes
INFO: rcu detected stall in corrupted    4 crashes
INFO: rcu detected stall in sys_clone    1 crashes
INFO: task hung in corrupted    2 crashes    1 crashes
INFO: task hung in crda_timeout_work    1 crashes
INFO: task hung in lock_metapage    1 crashes
KASAN: slab-use-after-free Read in handle_tx    1 crashes
KASAN: slab-use-after-free Read in l2cap_unregister_user    1 crashes    1 crashes
KASAN: slab-use-after-free Read in tty_write_room    1 crashes
KASAN: use-after-free Read in __vmap_pages_range_noflush    4 crashes
KASAN: use-after-free Read in hpfs_get_ea    2 crashes
KASAN: use-after-free Read in pmd_clear_huge    4 crashes
KASAN: use-after-free Write in pmd_set_huge    1 crashes
PANIC: double fault in corrupted    1 crashes
PANIC: double fault in entry_SYSCALL_64_safe_stack    1 crashes
WARNING in dbAdjTree    1 crashes
WARNING in io_ring_exit_work    2 crashes
WARNING in udf_truncate_extents    1 crashes    5 crashes
WARNING in xfrm6_tunnel_net_exit    4 crashes    5 crashes
WARNING in xfrm_state_fini    1 crashes    7 crashes
general protection fault in pcl818_ai_cancel    1 crashes    3 crashes
kernel BUG in folio_set_bh    1 crashes
kernel BUG in jfs_evict_inode    3 crashes    4 crashes
kernel BUG in may_open    1 crashes
kernel BUG in txUnlock    3 crashes    6 crashes
lost connection to test machine    14 crashes    23 crashes
no output from test machine    3 crashes
possible deadlock in ocfs2_del_inode_from_orphan    1 crashes    1 crashes
possible deadlock in ocfs2_evict_inode    1 crashes    1 crashes
possible deadlock in ocfs2_init_acl    1 crashes    3 crashes
possible deadlock in ocfs2_setattr    1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree    6 crashes    8 crashes
unregister_netdevice: waiting for DEV to become free    1 crashes
2025/10/14 20:45:58 possibly patched-only: KASAN: use-after-free Read in vmap_range_noflush
2025/10/14 20:45:58 possibly patched-only: KASAN: use-after-free Read in pmd_set_huge