2025/11/04 11:23:24 extracted 322889 text symbol hashes for base and 322889 for patched
2025/11/04 11:23:24 symbol "split_huge_pages_pid.__UNIQUE_ID_ddebug1755" has different values in base vs patch
2025/11/04 11:23:24 binaries are different, continuing fuzzing
2025/11/04 11:23:24 adding modified_functions to focus areas: ["__access_remote_vm" "__handle_mm_fault" "__pte_alloc" "__pte_alloc_kernel" "__vm_insert_mixed" "change_huge_pmd" "clear_gigantic_page" "copy_folio_from_user" "copy_huge_pmd" "copy_page_range" "copy_pmd_range" "copy_remote_vm_str" "copy_user_gigantic_page" "copy_user_large_folio" "deferred_split_folio" "deferred_split_scan" "do_huge_pmd_anonymous_page" "do_huge_pmd_wp_page" "do_wp_page" "folio_try_dup_anon_rmap_pmd" "folio_zero_user" "follow_pfnmap_start" "huge_pmd_set_accessed" "insert_page" "mm_get_huge_zero_folio" "numa_migrate_check" "remove_device_exclusive_entry" "split_huge_pages_all" "split_huge_pages_in_file" "split_huge_pages_write" "split_huge_pmd_locked" "touch_pmd" "try_restore_exclusive_pte" "unmap_huge_pmd_locked" "unmap_page_range" "vm_insert_pages" "zap_huge_pmd"]
2025/11/04 11:23:24 adding directly modified files to focus areas: ["arch/arm64/include/asm/pgtable.h" "arch/arm64/include/asm/tlbflush.h" "arch/arm64/mm/contpte.c" "arch/arm64/mm/fault.c" "include/linux/huge_mm.h" "include/linux/pgtable.h" "mm/huge_memory.c" "mm/internal.h" "mm/memory.c"]
2025/11/04 11:23:24 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2025/11/04 11:24:23 runner 1 connected
2025/11/04 11:24:23 runner 5 connected
2025/11/04 11:24:23 runner 0 connected
2025/11/04 11:24:23 runner 3 connected
2025/11/04 11:24:23 runner 1 connected
2025/11/04 11:24:23 runner 7 connected
2025/11/04 11:24:24 runner 4 connected
2025/11/04 11:24:25 runner 2 connected
2025/11/04 11:24:30 runner 8 connected
2025/11/04 11:24:30 runner 2 connected
2025/11/04 11:24:31 runner 6 connected
2025/11/04 11:24:31 initializing coverage information...
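The entries above show the patch-diffing step: per-symbol text hashes are extracted from the base and patched kernels, any symbol whose hash differs marks the binaries as different, and the changed functions and files become focus areas. The following Go sketch only illustrates that idea under the assumption that the symbol hashes are already available as maps; it is not syzkaller's actual implementation, and diffSymbols and the example hash values are hypothetical.

// Illustrative sketch only (not syzkaller's actual code): the patch-diff step
// logged above, reduced to its core idea. It assumes the per-symbol text
// hashes of the base and patched kernels are already available as maps.
package main

import (
	"fmt"
	"sort"
)

// diffSymbols returns the symbols whose hashes differ between base and patched.
// A non-empty result means the binaries differ and the changed functions can
// be used as focus areas for fuzzing.
func diffSymbols(base, patched map[string]uint64) []string {
	var modified []string
	for name, h := range base {
		if ph, ok := patched[name]; ok && ph != h {
			modified = append(modified, name)
		}
	}
	sort.Strings(modified) // deterministic output
	return modified
}

func main() {
	// Hypothetical example values, not taken from the log.
	base := map[string]uint64{"zap_huge_pmd": 0xaaaa, "do_wp_page": 0xbbbb}
	patched := map[string]uint64{"zap_huge_pmd": 0xaaaa, "do_wp_page": 0xcccc}
	modified := diffSymbols(base, patched)
	if len(modified) == 0 {
		fmt.Println("binaries are identical")
		return
	}
	fmt.Println("binaries are different, continuing fuzzing")
	fmt.Println("adding modified_functions to focus areas:", modified)
}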
2025/11/04 11:24:31 executor cover filter: 0 PCs
2025/11/04 11:24:32 runner 0 connected
2025/11/04 11:24:36 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory 
openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open 
/dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not 
enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy 
[syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev 
[ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma 
[openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
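The feature table that starts just above (BinFmtMisc, Comparisons, Coverage, ...) and continues on the next line is produced by a probe-and-report pattern: attempt the operation a feature needs and record either "enabled" or the error text, as in "KCSAN : write(/sys/kernel/debug/kcsan, on) failed" and "Leak : failed to write(kmemleak, "scan=off")". A minimal sketch of such a probe, reusing the paths and values visible in the log (illustrative only, not the checker's actual code):

    // Minimal sketch of a write-based feature probe: try the write the
    // feature requires and report the outcome.
    package main

    import (
        "fmt"
        "os"
    )

    func probeWrite(name, path, value string) {
        if err := os.WriteFile(path, []byte(value), 0o644); err != nil {
            fmt.Printf("%-16s: write(%s, %s) failed: %v\n", name, path, value, err)
            return
        }
        fmt.Printf("%-16s: enabled\n", name)
    }

    func main() {
        // Paths and values as shown in the log lines above.
        probeWrite("KCSAN", "/sys/kernel/debug/kcsan", "on")
        probeWrite("Leak", "/sys/kernel/debug/kmemleak", "scan=off")
    }

On this VM both writes fail (the kernel lacks KCSAN and kmemleak support), which is why those two features are reported as failed while most others show "enabled".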
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056 2025/11/04 11:24:36 base: machine check complete 2025/11/04 11:24:36 discovered 7609 source files, 333839 symbols 2025/11/04 11:24:37 coverage filter: __access_remote_vm: [__access_remote_vm] 2025/11/04 11:24:37 coverage filter: __handle_mm_fault: [__handle_mm_fault] 2025/11/04 11:24:37 coverage filter: __pte_alloc: [__pte_alloc __pte_alloc_kernel] 2025/11/04 11:24:37 coverage filter: __pte_alloc_kernel: [] 2025/11/04 11:24:37 coverage filter: __vm_insert_mixed: [__vm_insert_mixed] 2025/11/04 11:24:37 coverage filter: change_huge_pmd: [change_huge_pmd] 2025/11/04 11:24:37 coverage filter: clear_gigantic_page: [clear_gigantic_page] 2025/11/04 11:24:37 coverage filter: copy_folio_from_user: [copy_folio_from_user] 2025/11/04 11:24:37 coverage filter: copy_huge_pmd: [copy_huge_pmd] 2025/11/04 11:24:37 coverage filter: copy_page_range: [copy_page_range] 2025/11/04 11:24:37 coverage filter: copy_pmd_range: [copy_pmd_range] 2025/11/04 11:24:37 coverage filter: copy_remote_vm_str: [copy_remote_vm_str] 2025/11/04 11:24:37 coverage filter: copy_user_gigantic_page: [copy_user_gigantic_page] 2025/11/04 11:24:37 coverage filter: copy_user_large_folio: [copy_user_large_folio] 2025/11/04 11:24:37 coverage filter: deferred_split_folio: [deferred_split_folio] 2025/11/04 11:24:37 coverage filter: deferred_split_scan: [deferred_split_scan] 2025/11/04 11:24:37 coverage filter: do_huge_pmd_anonymous_page: [do_huge_pmd_anonymous_page] 2025/11/04 11:24:37 coverage filter: do_huge_pmd_wp_page: [do_huge_pmd_wp_page] 2025/11/04 11:24:37 coverage filter: do_wp_page: [do_wp_page] 2025/11/04 11:24:37 coverage filter: folio_try_dup_anon_rmap_pmd: [folio_try_dup_anon_rmap_pmd] 2025/11/04 11:24:37 coverage filter: folio_zero_user: [folio_zero_user] 2025/11/04 11:24:37 coverage filter: follow_pfnmap_start: [follow_pfnmap_start] 2025/11/04 11:24:37 coverage filter: huge_pmd_set_accessed: [huge_pmd_set_accessed] 2025/11/04 11:24:37 coverage filter: insert_page: [bxt_vtd_ggtt_insert_page__BKL bxt_vtd_ggtt_insert_page__cb dpt_insert_page gen6_ggtt_insert_page gen8_ggtt_insert_page gen8_ggtt_insert_page_bind gmch_ggtt_insert_page insert_page insert_page_into_pte_locked intel_gmch_gtt_insert_page intel_gmch_gtt_insert_pages null_insert_page vm_insert_page vm_insert_pages vmf_insert_page_mkwrite] 2025/11/04 11:24:37 coverage filter: mm_get_huge_zero_folio: [mm_get_huge_zero_folio] 2025/11/04 11:24:37 coverage filter: numa_migrate_check: [numa_migrate_check] 2025/11/04 11:24:37 coverage filter: remove_device_exclusive_entry: [remove_device_exclusive_entry] 2025/11/04 11:24:37 coverage filter: split_huge_pages_all: [split_huge_pages_all] 2025/11/04 11:24:37 coverage filter: split_huge_pages_in_file: [split_huge_pages_in_file] 2025/11/04 11:24:37 coverage filter: split_huge_pages_write: [split_huge_pages_write] 2025/11/04 11:24:37 coverage filter: split_huge_pmd_locked: [split_huge_pmd_locked] 2025/11/04 11:24:37 coverage filter: touch_pmd: [touch_pmd] 2025/11/04 11:24:37 coverage filter: try_restore_exclusive_pte: [try_restore_exclusive_pte] 2025/11/04 11:24:37 coverage filter: unmap_huge_pmd_locked: [unmap_huge_pmd_locked] 2025/11/04 11:24:37 coverage filter: unmap_page_range: [unmap_page_range] 2025/11/04 11:24:37 coverage filter: vm_insert_pages: [] 2025/11/04 11:24:37 coverage filter: zap_huge_pmd: [zap_huge_pmd] 2025/11/04 11:24:37 coverage filter: 
arch/arm64/include/asm/pgtable.h: [] 2025/11/04 11:24:37 coverage filter: arch/arm64/include/asm/tlbflush.h: [] 2025/11/04 11:24:37 coverage filter: arch/arm64/mm/contpte.c: [] 2025/11/04 11:24:37 coverage filter: arch/arm64/mm/fault.c: [] 2025/11/04 11:24:37 coverage filter: include/linux/huge_mm.h: [] 2025/11/04 11:24:37 coverage filter: include/linux/pgtable.h: [] 2025/11/04 11:24:37 coverage filter: mm/huge_memory.c: [mm/huge_memory.c] 2025/11/04 11:24:37 coverage filter: mm/internal.h: [] 2025/11/04 11:24:37 coverage filter: mm/memory.c: [mm/memory.c] 2025/11/04 11:24:37 area "symbols": 4821 PCs in the cover filter 2025/11/04 11:24:37 area "files": 10205 PCs in the cover filter 2025/11/04 11:24:37 area "": 0 PCs in the cover filter 2025/11/04 11:24:37 executor cover filter: 0 PCs 2025/11/04 11:24:40 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or 
directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to 
open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : 
selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: 
fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] 
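Almost every entry in this block resolves to fd_kvm, fd_kvmvm, or fd_kvmcpu because the KVM API hands out descriptors in a chain: openat$kvm yields fd_kvm, KVM_CREATE_VM turns that into a VM descriptor, and KVM_CREATE_VCPU turns the VM descriptor into a vCPU descriptor. A minimal sketch of that chain, assuming the standard Linux KVM ioctl numbers (_IO(0xAE, nr)); on this test machine it stops at the first step, since /dev/kvm does not exist, which is exactly why the whole family is transitively disabled:

    // Minimal sketch of the fd_kvm -> fd_kvmvm -> fd_kvmcpu descriptor chain.
    package main

    import (
        "fmt"
        "syscall"
    )

    const (
        kvmCreateVM   = 0xAE01 // KVM_CREATE_VM   = _IO(0xAE, 0x01)
        kvmCreateVCPU = 0xAE41 // KVM_CREATE_VCPU = _IO(0xAE, 0x41)
    )

    func ioctl(fd, req, arg uintptr) (uintptr, error) {
        r, _, errno := syscall.Syscall(syscall.SYS_IOCTL, fd, req, arg)
        if errno != 0 {
            return 0, errno
        }
        return r, nil
    }

    func main() {
        kvm, err := syscall.Open("/dev/kvm", syscall.O_RDWR, 0) // fd_kvm
        if err != nil {
            // Matches "openat$kvm : failed to open /dev/kvm" in the log.
            fmt.Println("openat$kvm would be disabled:", err)
            return
        }
        defer syscall.Close(kvm)

        vm, err := ioctl(uintptr(kvm), kvmCreateVM, 0) // fd_kvmvm
        if err != nil {
            fmt.Println("KVM_CREATE_VM failed:", err)
            return
        }
        vcpu, err := ioctl(vm, kvmCreateVCPU, 0) // fd_kvmcpu, vCPU id 0
        fmt.Println("vm fd:", vm, "vcpu fd:", vcpu, "err:", err)
    }

The remaining entries below continue the same pattern, binding each KVM, perf, RDMA, and sound ioctl to the descriptor-producing call it depends on.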
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] 
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
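Each entry in this part of the machine-check output pairs a syscall variant with the resource it consumes and, in square brackets, the calls that can create that resource; a call appears to land in this list when none of the creators of its resource is itself enabled. The listing continues below. As a purely illustrative aid (the parseDependency helper and dependency struct are hypothetical, not part of syzkaller), lines of this shape could be pulled apart like this:

```go
// Hypothetical helper, not part of syzkaller: parses "call : resource [creator ...]"
// entries like the ones in the machine-check listing, so the dependency behind
// each listed syscall can be inspected programmatically.
package main

import (
	"fmt"
	"strings"
)

type dependency struct {
	Call     string   // syscall variant, e.g. "ioctl$KVM_SMI"
	Resource string   // resource it consumes, e.g. "fd_kvmcpu"
	Creators []string // calls that can produce that resource
}

func parseDependency(line string) (dependency, bool) {
	var d dependency
	// Split "call : rest" on the first " : ".
	parts := strings.SplitN(line, " : ", 2)
	if len(parts) != 2 {
		return d, false
	}
	d.Call = strings.TrimSpace(parts[0])
	rest := strings.TrimSpace(parts[1])
	// The creators are listed in square brackets after the resource name.
	lb := strings.Index(rest, "[")
	rb := strings.LastIndex(rest, "]")
	if lb < 0 || rb < lb {
		return d, false
	}
	d.Resource = strings.TrimSpace(rest[:lb])
	d.Creators = strings.Fields(rest[lb+1 : rb])
	return d, true
}

func main() {
	sample := []string{
		"ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]",
		"write$trusty_storage : fd_trusty_storage [openat$trusty_storage]",
	}
	for _, line := range sample {
		if d, ok := parseDependency(line); ok {
			fmt.Printf("%s needs %s, created by %v\n", d.Call, d.Resource, d.Creators)
		}
	}
}
```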
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty 
[openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056 2025/11/04 11:24:40 new: machine check complete 2025/11/04 11:24:41 new: adding 85463 seeds 2025/11/04 11:25:16 base crash: lost connection to test machine 2025/11/04 11:26:12 runner 0 connected 2025/11/04 11:27:24 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 11:27:24 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 11:27:24 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 11:27:37 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 11:27:37 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 11:27:37 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 11:27:48 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 11:27:48 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 11:27:48 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 11:27:52 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 11:27:52 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 11:27:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 11:28:03 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 11:28:03 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 11:28:03 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 11:28:21 runner 8 connected 2025/11/04 11:28:27 STAT { "buffer too small": 0, "candidate triage jobs": 43, "candidates": 81496, "comps overflows": 0, "corpus": 3888, "corpus [files]": 5910, "corpus [symbols]": 833, "cover overflows": 2690, "coverage": 154706, "distributor delayed": 4215, "distributor undelayed": 4200, "distributor violated": 10, "exec candidate": 3967, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 6402, "exec total [new]": 17302, "exec triage": 
12322, "executor restarts [base]": 62, "executor restarts [new]": 113, "fault jobs": 0, "fuzzer jobs": 43, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 156205, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3967, "no exec duration": 32970000000, "no exec requests": 322, "pending": 0, "prog exec time": 367, "reproducing": 0, "rpc recv": 1046732824, "rpc sent": 97798744, "signal": 152601, "smash jobs": 0, "triage jobs": 0, "vm output": 2821314, "vm restarts [base]": 4, "vm restarts [new]": 10 } 2025/11/04 11:28:34 runner 3 connected 2025/11/04 11:28:45 runner 1 connected 2025/11/04 11:28:49 runner 4 connected 2025/11/04 11:29:01 runner 0 connected 2025/11/04 11:30:06 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 11:30:28 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 11:31:03 runner 3 connected 2025/11/04 11:31:26 runner 4 connected 2025/11/04 11:31:47 crash "WARNING in xfrm_state_fini" is already known 2025/11/04 11:31:47 base crash "WARNING in xfrm_state_fini" is to be ignored 2025/11/04 11:31:47 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 11:31:59 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/11/04 11:31:59 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored 2025/11/04 11:31:59 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 11:32:12 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/11/04 11:32:12 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored 2025/11/04 11:32:12 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 11:32:14 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/04 11:32:14 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/04 11:32:14 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/04 11:32:23 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/11/04 11:32:23 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored 2025/11/04 11:32:23 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 11:32:36 runner 1 connected 2025/11/04 11:32:39 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 11:32:49 runner 7 connected 2025/11/04 11:33:01 runner 2 connected 2025/11/04 11:33:10 runner 4 connected 2025/11/04 11:33:19 runner 8 connected 2025/11/04 11:33:27 STAT { "buffer too small": 0, "candidate triage jobs": 44, "candidates": 76402, "comps overflows": 0, "corpus": 8947, "corpus [files]": 11579, "corpus [symbols]": 1617, "cover overflows": 6590, "coverage": 198666, "distributor delayed": 10340, "distributor undelayed": 10339, "distributor violated": 211, "exec candidate": 9061, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 17340, "exec total [new]": 40415, "exec triage": 28186, "executor restarts [base]": 74, "executor restarts [new]": 167, "fault jobs": 0, "fuzzer jobs": 44, "fuzzing VMs [base]": 
3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 200076, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 9061, "no exec duration": 35993000000, "no exec requests": 328, "pending": 0, "prog exec time": 317, "reproducing": 0, "rpc recv": 2211134668, "rpc sent": 240093000, "signal": 195634, "smash jobs": 0, "triage jobs": 0, "vm output": 5139242, "vm restarts [base]": 4, "vm restarts [new]": 21 } 2025/11/04 11:33:29 runner 3 connected 2025/11/04 11:33:38 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/11/04 11:33:38 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/11/04 11:33:38 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 11:33:39 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/11/04 11:33:39 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/11/04 11:33:39 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 11:33:40 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/11/04 11:33:40 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/11/04 11:33:40 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 11:34:14 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 11:34:35 runner 2 connected 2025/11/04 11:34:36 runner 0 connected 2025/11/04 11:34:36 runner 7 connected 2025/11/04 11:35:11 runner 0 connected 2025/11/04 11:35:31 crash "unregister_netdevice: waiting for DEV to become free" is already known 2025/11/04 11:35:31 base crash "unregister_netdevice: waiting for DEV to become free" is to be ignored 2025/11/04 11:35:31 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/11/04 11:36:30 runner 5 connected 2025/11/04 11:37:13 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 11:37:24 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 11:37:35 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 11:38:09 runner 3 connected 2025/11/04 11:38:21 runner 5 connected 2025/11/04 11:38:27 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 70619, "comps overflows": 0, "corpus": 14645, "corpus [files]": 17358, "corpus [symbols]": 2374, "cover overflows": 11595, "coverage": 224798, "distributor delayed": 16904, "distributor undelayed": 16904, "distributor violated": 213, "exec candidate": 14844, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 29374, "exec total [new]": 68892, "exec triage": 46362, "executor restarts [base]": 83, "executor restarts [new]": 207, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 
226603, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14844, "no exec duration": 36572000000, "no exec requests": 332, "pending": 0, "prog exec time": 217, "reproducing": 0, "rpc recv": 3233724400, "rpc sent": 378818184, "signal": 221254, "smash jobs": 0, "triage jobs": 0, "vm output": 7355798, "vm restarts [base]": 5, "vm restarts [new]": 28 } 2025/11/04 11:38:31 runner 1 connected 2025/11/04 11:40:06 crash "possible deadlock in ntfs_fiemap" is already known 2025/11/04 11:40:06 base crash "possible deadlock in ntfs_fiemap" is to be ignored 2025/11/04 11:40:06 patched crashed: possible deadlock in ntfs_fiemap [need repro = false] 2025/11/04 11:40:18 crash "possible deadlock in ntfs_fiemap" is already known 2025/11/04 11:40:18 base crash "possible deadlock in ntfs_fiemap" is to be ignored 2025/11/04 11:40:18 patched crashed: possible deadlock in ntfs_fiemap [need repro = false] 2025/11/04 11:41:03 runner 7 connected 2025/11/04 11:41:04 crash "possible deadlock in ocfs2_xattr_set" is already known 2025/11/04 11:41:04 base crash "possible deadlock in ocfs2_xattr_set" is to be ignored 2025/11/04 11:41:04 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/11/04 11:41:15 runner 0 connected 2025/11/04 11:42:01 runner 5 connected 2025/11/04 11:42:09 crash "WARNING in xfrm_state_fini" is already known 2025/11/04 11:42:09 base crash "WARNING in xfrm_state_fini" is to be ignored 2025/11/04 11:42:09 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 11:42:20 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/04 11:42:20 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/04 11:42:20 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 11:43:05 runner 1 connected 2025/11/04 11:43:16 runner 7 connected 2025/11/04 11:43:27 STAT { "buffer too small": 0, "candidate triage jobs": 47, "candidates": 65286, "comps overflows": 0, "corpus": 19908, "corpus [files]": 22406, "corpus [symbols]": 3040, "cover overflows": 16526, "coverage": 243295, "distributor delayed": 22576, "distributor undelayed": 22575, "distributor violated": 213, "exec candidate": 20177, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 43297, "exec total [new]": 96956, "exec triage": 63074, "executor restarts [base]": 86, "executor restarts [new]": 258, "fault jobs": 0, "fuzzer jobs": 47, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 245548, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 20177, "no exec duration": 36987000000, "no exec requests": 337, "pending": 0, "prog exec time": 186, "reproducing": 0, "rpc recv": 4234226128, "rpc sent": 545268712, "signal": 239068, "smash jobs": 0, "triage jobs": 0, "vm output": 9767816, "vm restarts [base]": 5, "vm restarts [new]": 34 } 2025/11/04 11:44:34 base crash: lost connection to test machine 2025/11/04 11:45:08 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need 
repro = false] 2025/11/04 11:45:31 runner 0 connected 2025/11/04 11:46:01 crash "WARNING in rate_control_rate_init" is already known 2025/11/04 11:46:01 base crash "WARNING in rate_control_rate_init" is to be ignored 2025/11/04 11:46:01 patched crashed: WARNING in rate_control_rate_init [need repro = false] 2025/11/04 11:46:05 runner 7 connected 2025/11/04 11:46:58 runner 0 connected 2025/11/04 11:47:22 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 11:48:18 runner 8 connected 2025/11/04 11:48:27 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 59615, "comps overflows": 0, "corpus": 25458, "corpus [files]": 27556, "corpus [symbols]": 3672, "cover overflows": 21577, "coverage": 259769, "distributor delayed": 28314, "distributor undelayed": 28313, "distributor violated": 213, "exec candidate": 25848, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 56027, "exec total [new]": 129334, "exec triage": 81045, "executor restarts [base]": 90, "executor restarts [new]": 290, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 262247, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 25848, "no exec duration": 37014000000, "no exec requests": 339, "pending": 0, "prog exec time": 162, "reproducing": 0, "rpc recv": 5101327644, "rpc sent": 707365440, "signal": 254904, "smash jobs": 0, "triage jobs": 0, "vm output": 11920200, "vm restarts [base]": 6, "vm restarts [new]": 37 } 2025/11/04 11:50:40 crash "INFO: task hung in reg_check_chans_work" is already known 2025/11/04 11:50:40 base crash "INFO: task hung in reg_check_chans_work" is to be ignored 2025/11/04 11:50:40 patched crashed: INFO: task hung in reg_check_chans_work [need repro = false] 2025/11/04 11:51:37 runner 3 connected 2025/11/04 11:51:57 crash "INFO: task hung in reg_check_chans_work" is already known 2025/11/04 11:51:57 base crash "INFO: task hung in reg_check_chans_work" is to be ignored 2025/11/04 11:51:57 patched crashed: INFO: task hung in reg_check_chans_work [need repro = false] 2025/11/04 11:52:05 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/04 11:52:05 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/04 11:52:05 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 11:52:22 crash "unregister_netdevice: waiting for DEV to become free" is already known 2025/11/04 11:52:22 base crash "unregister_netdevice: waiting for DEV to become free" is to be ignored 2025/11/04 11:52:22 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/11/04 11:52:53 runner 0 connected 2025/11/04 11:53:01 runner 1 connected 2025/11/04 11:53:19 runner 5 connected 2025/11/04 11:53:21 base crash: unregister_netdevice: waiting for DEV to become free 2025/11/04 11:53:22 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 11:53:25 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/04 11:53:25 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/04 11:53:25 patched crashed: general protection fault in 
pcl818_ai_cancel [need repro = false] 2025/11/04 11:53:27 STAT { "buffer too small": 0, "candidate triage jobs": 28, "candidates": 55409, "comps overflows": 0, "corpus": 29584, "corpus [files]": 31202, "corpus [symbols]": 4134, "cover overflows": 25339, "coverage": 270173, "distributor delayed": 33371, "distributor undelayed": 33370, "distributor violated": 214, "exec candidate": 30054, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 70937, "exec total [new]": 154204, "exec triage": 94224, "executor restarts [base]": 94, "executor restarts [new]": 335, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 272780, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 30054, "no exec duration": 37054000000, "no exec requests": 343, "pending": 0, "prog exec time": 311, "reproducing": 0, "rpc recv": 5895642568, "rpc sent": 848845424, "signal": 265121, "smash jobs": 0, "triage jobs": 0, "vm output": 14043488, "vm restarts [base]": 6, "vm restarts [new]": 41 } 2025/11/04 11:54:13 crash "possible deadlock in ocfs2_xattr_set" is already known 2025/11/04 11:54:13 base crash "possible deadlock in ocfs2_xattr_set" is to be ignored 2025/11/04 11:54:13 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/11/04 11:54:19 runner 4 connected 2025/11/04 11:54:20 runner 0 connected 2025/11/04 11:54:22 runner 7 connected 2025/11/04 11:54:31 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/04 11:54:31 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/04 11:54:31 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 11:54:42 crash "WARNING in xfrm_state_fini" is already known 2025/11/04 11:54:42 base crash "WARNING in xfrm_state_fini" is to be ignored 2025/11/04 11:54:42 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 11:55:09 runner 3 connected 2025/11/04 11:55:22 base crash: WARNING in xfrm_state_fini 2025/11/04 11:55:28 runner 8 connected 2025/11/04 11:55:40 runner 0 connected 2025/11/04 11:55:43 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/11/04 11:55:43 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/11/04 11:55:43 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/04 11:55:56 crash "possible deadlock in ext4_writepages" is already known 2025/11/04 11:55:56 base crash "possible deadlock in ext4_writepages" is to be ignored 2025/11/04 11:55:56 patched crashed: possible deadlock in ext4_writepages [need repro = false] 2025/11/04 11:56:18 runner 0 connected 2025/11/04 11:56:39 runner 5 connected 2025/11/04 11:56:45 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 11:56:45 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 11:56:45 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 11:56:53 runner 6 connected 2025/11/04 11:56:57 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/04 11:56:57 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/04 11:56:57 
patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/04 11:57:07 base crash: WARNING in xfrm_state_fini 2025/11/04 11:57:10 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/04 11:57:10 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/04 11:57:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/04 11:57:42 runner 0 connected 2025/11/04 11:57:54 runner 8 connected 2025/11/04 11:58:04 runner 1 connected 2025/11/04 11:58:07 runner 3 connected 2025/11/04 11:58:27 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 51376, "comps overflows": 0, "corpus": 33559, "corpus [files]": 34697, "corpus [symbols]": 4591, "cover overflows": 28678, "coverage": 280021, "distributor delayed": 37973, "distributor undelayed": 37973, "distributor violated": 214, "exec candidate": 34087, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 82167, "exec total [new]": 178039, "exec triage": 106554, "executor restarts [base]": 105, "executor restarts [new]": 395, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 282603, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 34087, "no exec duration": 37128000000, "no exec requests": 346, "pending": 0, "prog exec time": 208, "reproducing": 0, "rpc recv": 6886637304, "rpc sent": 1005701896, "signal": 274701, "smash jobs": 0, "triage jobs": 0, "vm output": 16744979, "vm restarts [base]": 9, "vm restarts [new]": 51 } 2025/11/04 11:59:01 crash "WARNING in __rate_control_send_low" is already known 2025/11/04 11:59:01 base crash "WARNING in __rate_control_send_low" is to be ignored 2025/11/04 11:59:01 patched crashed: WARNING in __rate_control_send_low [need repro = false] 2025/11/04 11:59:58 runner 1 connected 2025/11/04 12:02:12 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/11/04 12:02:12 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored 2025/11/04 12:02:12 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 12:02:30 crash "possible deadlock in padata_do_serial" is already known 2025/11/04 12:02:30 base crash "possible deadlock in padata_do_serial" is to be ignored 2025/11/04 12:02:30 patched crashed: possible deadlock in padata_do_serial [need repro = false] 2025/11/04 12:02:40 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:02:40 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:02:40 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:02:41 crash "possible deadlock in padata_do_serial" is already known 2025/11/04 12:02:41 base crash "possible deadlock in padata_do_serial" is to be ignored 2025/11/04 12:02:41 patched crashed: possible deadlock in padata_do_serial [need repro = false] 2025/11/04 12:03:08 runner 1 connected 2025/11/04 12:03:17 crash "possible deadlock in padata_do_serial" is already known 2025/11/04 12:03:17 base crash "possible deadlock in padata_do_serial" is to be ignored 2025/11/04 12:03:17 patched crashed: 
possible deadlock in padata_do_serial [need repro = false] 2025/11/04 12:03:19 runner 7 connected 2025/11/04 12:03:27 STAT { "buffer too small": 0, "candidate triage jobs": 24, "candidates": 47065, "comps overflows": 0, "corpus": 37712, "corpus [files]": 38339, "corpus [symbols]": 5038, "cover overflows": 33537, "coverage": 289148, "distributor delayed": 42172, "distributor undelayed": 42163, "distributor violated": 228, "exec candidate": 38398, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 95487, "exec total [new]": 209493, "exec triage": 120397, "executor restarts [base]": 118, "executor restarts [new]": 433, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 292489, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38398, "no exec duration": 37216000000, "no exec requests": 350, "pending": 0, "prog exec time": 189, "reproducing": 0, "rpc recv": 7628230976, "rpc sent": 1193112288, "signal": 283564, "smash jobs": 0, "triage jobs": 0, "vm output": 19263171, "vm restarts [base]": 9, "vm restarts [new]": 54 } 2025/11/04 12:03:30 runner 3 connected 2025/11/04 12:03:31 runner 0 connected 2025/11/04 12:03:57 crash "kernel BUG in jfs_evict_inode" is already known 2025/11/04 12:03:57 base crash "kernel BUG in jfs_evict_inode" is to be ignored 2025/11/04 12:03:57 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 12:04:13 runner 4 connected 2025/11/04 12:04:53 runner 1 connected 2025/11/04 12:05:46 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/11/04 12:06:08 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:06:08 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:06:08 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:06:11 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/04 12:06:11 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/04 12:06:11 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/04 12:06:43 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:06:43 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:06:43 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:06:44 runner 2 connected 2025/11/04 12:06:54 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:06:54 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:06:54 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:07:05 runner 6 connected 2025/11/04 12:07:08 runner 1 connected 2025/11/04 12:07:31 crash "INFO: task hung in corrupted" is already known 2025/11/04 12:07:31 base crash "INFO: task hung in corrupted" is to be ignored 2025/11/04 12:07:31 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/11/04 12:07:39 runner 0 connected 2025/11/04 12:07:43 crash "general protection fault in pcl818_ai_cancel" 
is already known 2025/11/04 12:07:43 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/04 12:07:43 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 12:07:51 runner 3 connected 2025/11/04 12:07:54 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/04 12:07:54 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/04 12:07:54 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 12:08:01 base crash: general protection fault in pcl818_ai_cancel 2025/11/04 12:08:27 STAT { "buffer too small": 0, "candidate triage jobs": 26, "candidates": 45145, "comps overflows": 0, "corpus": 39579, "corpus [files]": 39893, "corpus [symbols]": 5279, "cover overflows": 36010, "coverage": 293544, "distributor delayed": 44707, "distributor undelayed": 44702, "distributor violated": 261, "exec candidate": 40318, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 104313, "exec total [new]": 226517, "exec triage": 126353, "executor restarts [base]": 133, "executor restarts [new]": 498, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 297000, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40318, "no exec duration": 37216000000, "no exec requests": 350, "pending": 0, "prog exec time": 267, "reproducing": 0, "rpc recv": 8314402764, "rpc sent": 1362358272, "signal": 288010, "smash jobs": 0, "triage jobs": 0, "vm output": 21990853, "vm restarts [base]": 9, "vm restarts [new]": 63 } 2025/11/04 12:08:28 runner 4 connected 2025/11/04 12:08:32 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:08:32 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:08:32 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:08:40 runner 5 connected 2025/11/04 12:08:51 runner 0 connected 2025/11/04 12:08:51 runner 6 connected 2025/11/04 12:08:59 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 12:09:12 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 12:09:18 base crash: WARNING in xfrm_state_fini 2025/11/04 12:09:29 runner 3 connected 2025/11/04 12:09:40 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 12:09:55 runner 2 connected 2025/11/04 12:10:08 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 12:10:09 runner 8 connected 2025/11/04 12:10:15 runner 1 connected 2025/11/04 12:10:38 runner 4 connected 2025/11/04 12:11:03 crash "INFO: task hung in corrupted" is already known 2025/11/04 12:11:03 base crash "INFO: task hung in corrupted" is to be ignored 2025/11/04 12:11:03 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/11/04 12:11:04 runner 0 connected 2025/11/04 12:11:20 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:11:51 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/04 12:11:51 base 
crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/04 12:11:51 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/04 12:11:54 crash "possible deadlock in ntfs_fiemap" is already known 2025/11/04 12:11:54 base crash "possible deadlock in ntfs_fiemap" is to be ignored 2025/11/04 12:11:54 patched crashed: possible deadlock in ntfs_fiemap [need repro = false] 2025/11/04 12:11:59 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 12:11:59 runner 7 connected 2025/11/04 12:12:10 runner 6 connected 2025/11/04 12:12:48 runner 4 connected 2025/11/04 12:12:51 runner 3 connected 2025/11/04 12:12:56 runner 1 connected 2025/11/04 12:13:17 crash "WARNING in io_ring_exit_work" is already known 2025/11/04 12:13:17 base crash "WARNING in io_ring_exit_work" is to be ignored 2025/11/04 12:13:17 patched crashed: WARNING in io_ring_exit_work [need repro = false] 2025/11/04 12:13:27 STAT { "buffer too small": 0, "candidate triage jobs": 21, "candidates": 43152, "comps overflows": 0, "corpus": 41538, "corpus [files]": 41484, "corpus [symbols]": 5481, "cover overflows": 38632, "coverage": 297790, "distributor delayed": 47167, "distributor undelayed": 47167, "distributor violated": 266, "exec candidate": 42311, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 25, "exec seeds": 0, "exec smash": 0, "exec total [base]": 111248, "exec total [new]": 243459, "exec triage": 132440, "executor restarts [base]": 151, "executor restarts [new]": 570, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 301203, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42311, "no exec duration": 37226000000, "no exec requests": 351, "pending": 0, "prog exec time": 314, "reproducing": 0, "rpc recv": 9185435516, "rpc sent": 1521035360, "signal": 292272, "smash jobs": 0, "triage jobs": 0, "vm output": 25358780, "vm restarts [base]": 13, "vm restarts [new]": 74 } 2025/11/04 12:13:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 12:13:37 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/04 12:13:37 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/04 12:13:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/04 12:13:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 12:14:14 runner 1 connected 2025/11/04 12:14:26 runner 7 connected 2025/11/04 12:14:33 runner 3 connected 2025/11/04 12:14:36 runner 8 connected 2025/11/04 12:15:42 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:15:42 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:15:42 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:15:47 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/11/04 12:15:47 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/11/04 12:15:47 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/04 12:16:13 base crash: possible deadlock in ocfs2_xattr_set 2025/11/04 12:16:35 crash "possible deadlock in 
ext4_writepages" is already known 2025/11/04 12:16:35 base crash "possible deadlock in ext4_writepages" is to be ignored 2025/11/04 12:16:35 patched crashed: possible deadlock in ext4_writepages [need repro = false] 2025/11/04 12:16:39 runner 0 connected 2025/11/04 12:16:44 runner 3 connected 2025/11/04 12:16:46 crash "possible deadlock in ext4_writepages" is already known 2025/11/04 12:16:46 base crash "possible deadlock in ext4_writepages" is to be ignored 2025/11/04 12:16:46 patched crashed: possible deadlock in ext4_writepages [need repro = false] 2025/11/04 12:17:04 runner 1 connected 2025/11/04 12:17:16 base crash: possible deadlock in ext4_writepages 2025/11/04 12:17:32 runner 2 connected 2025/11/04 12:17:45 runner 4 connected 2025/11/04 12:18:12 crash "KASAN: slab-use-after-free Write in txEnd" is already known 2025/11/04 12:18:12 base crash "KASAN: slab-use-after-free Write in txEnd" is to be ignored 2025/11/04 12:18:12 patched crashed: KASAN: slab-use-after-free Write in txEnd [need repro = false] 2025/11/04 12:18:13 runner 2 connected 2025/11/04 12:18:27 STAT { "buffer too small": 0, "candidate triage jobs": 16, "candidates": 41520, "comps overflows": 0, "corpus": 43122, "corpus [files]": 42781, "corpus [symbols]": 5645, "cover overflows": 41539, "coverage": 301196, "distributor delayed": 48901, "distributor undelayed": 48901, "distributor violated": 267, "exec candidate": 43943, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 26, "exec seeds": 0, "exec smash": 0, "exec total [base]": 118257, "exec total [new]": 261918, "exec triage": 137472, "executor restarts [base]": 170, "executor restarts [new]": 635, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 304686, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43942, "no exec duration": 37226000000, "no exec requests": 351, "pending": 0, "prog exec time": 280, "reproducing": 0, "rpc recv": 9889643200, "rpc sent": 1657693824, "signal": 295700, "smash jobs": 0, "triage jobs": 0, "vm output": 28760059, "vm restarts [base]": 15, "vm restarts [new]": 82 } 2025/11/04 12:18:36 crash "possible deadlock in ocfs2_del_inode_from_orphan" is already known 2025/11/04 12:18:36 base crash "possible deadlock in ocfs2_del_inode_from_orphan" is to be ignored 2025/11/04 12:18:36 patched crashed: possible deadlock in ocfs2_del_inode_from_orphan [need repro = false] 2025/11/04 12:19:11 runner 1 connected 2025/11/04 12:19:27 crash "INFO: task hung in __iterate_supers" is already known 2025/11/04 12:19:27 base crash "INFO: task hung in __iterate_supers" is to be ignored 2025/11/04 12:19:27 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/11/04 12:19:33 runner 7 connected 2025/11/04 12:20:09 base crash: INFO: task hung in __iterate_supers 2025/11/04 12:20:15 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:20:24 runner 5 connected 2025/11/04 12:21:06 runner 0 connected 2025/11/04 12:21:11 runner 2 connected 2025/11/04 12:22:13 base crash: general protection fault in pcl818_ai_cancel 2025/11/04 12:23:10 runner 2 connected 2025/11/04 12:23:27 STAT { "buffer too small": 0, "candidate triage jobs": 15, "candidates": 39968, "comps overflows": 0, "corpus": 44583, 
"corpus [files]": 43942, "corpus [symbols]": 5815, "cover overflows": 45788, "coverage": 304169, "distributor delayed": 50509, "distributor undelayed": 50509, "distributor violated": 267, "exec candidate": 45495, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 28, "exec seeds": 0, "exec smash": 0, "exec total [base]": 125594, "exec total [new]": 286416, "exec triage": 142423, "executor restarts [base]": 189, "executor restarts [new]": 681, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 307848, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45474, "no exec duration": 37856000000, "no exec requests": 358, "pending": 0, "prog exec time": 534, "reproducing": 0, "rpc recv": 10419967032, "rpc sent": 1813173264, "signal": 298519, "smash jobs": 0, "triage jobs": 0, "vm output": 31334651, "vm restarts [base]": 17, "vm restarts [new]": 86 } 2025/11/04 12:23:28 base crash: lost connection to test machine 2025/11/04 12:24:03 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:24:03 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:24:03 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:24:16 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/11/04 12:24:16 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/11/04 12:24:16 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:24:20 crash "INFO: task hung in corrupted" is already known 2025/11/04 12:24:20 base crash "INFO: task hung in corrupted" is to be ignored 2025/11/04 12:24:20 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/11/04 12:24:21 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 12:24:25 runner 1 connected 2025/11/04 12:25:00 runner 1 connected 2025/11/04 12:25:13 runner 3 connected 2025/11/04 12:25:16 runner 0 connected 2025/11/04 12:25:17 runner 2 connected 2025/11/04 12:25:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 12:25:34 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 12:25:42 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:26:25 runner 6 connected 2025/11/04 12:26:30 runner 0 connected 2025/11/04 12:26:39 runner 1 connected 2025/11/04 12:27:19 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:27:41 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:28:16 runner 5 connected 2025/11/04 12:28:27 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 39477, "comps overflows": 0, "corpus": 44978, "corpus [files]": 44254, "corpus [symbols]": 5872, "cover overflows": 50423, "coverage": 304937, "distributor delayed": 51065, "distributor undelayed": 51065, "distributor violated": 272, "exec candidate": 45986, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 29, "exec seeds": 0, "exec smash": 0, 
"exec total [base]": 133466, "exec total [new]": 309352, "exec triage": 143998, "executor restarts [base]": 209, "executor restarts [new]": 727, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 308742, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45923, "no exec duration": 38306000000, "no exec requests": 364, "pending": 0, "prog exec time": 215, "reproducing": 0, "rpc recv": 10936341832, "rpc sent": 1958261360, "signal": 299252, "smash jobs": 0, "triage jobs": 0, "vm output": 33293416, "vm restarts [base]": 19, "vm restarts [new]": 93 } 2025/11/04 12:28:38 runner 7 connected 2025/11/04 12:30:00 crash "kernel BUG in txUnlock" is already known 2025/11/04 12:30:00 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/04 12:30:00 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/04 12:30:01 crash "kernel BUG in txUnlock" is already known 2025/11/04 12:30:01 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/04 12:30:01 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/04 12:30:03 crash "kernel BUG in txUnlock" is already known 2025/11/04 12:30:03 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/04 12:30:03 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/04 12:30:09 base crash: kernel BUG in txUnlock 2025/11/04 12:30:16 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/04 12:30:24 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/11/04 12:30:27 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/04 12:30:56 crash "kernel BUG in jfs_evict_inode" is already known 2025/11/04 12:30:56 base crash "kernel BUG in jfs_evict_inode" is to be ignored 2025/11/04 12:30:56 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 12:30:57 runner 1 connected 2025/11/04 12:30:57 runner 3 connected 2025/11/04 12:31:00 runner 0 connected 2025/11/04 12:31:05 runner 1 connected 2025/11/04 12:31:06 runner 2 connected 2025/11/04 12:31:14 runner 0 connected 2025/11/04 12:31:16 runner 8 connected 2025/11/04 12:31:53 base crash: kernel BUG in jfs_evict_inode 2025/11/04 12:31:53 runner 5 connected 2025/11/04 12:32:41 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:32:50 runner 0 connected 2025/11/04 12:33:27 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 28947, "comps overflows": 0, "corpus": 45927, "corpus [files]": 45002, "corpus [symbols]": 5985, "cover overflows": 54413, "coverage": 306670, "distributor delayed": 52086, "distributor undelayed": 52086, "distributor violated": 275, "exec candidate": 56516, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 31, "exec seeds": 0, "exec smash": 0, "exec total [base]": 141021, "exec total [new]": 331448, "exec triage": 147072, "executor restarts [base]": 231, "executor restarts [new]": 798, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 310546, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules 
[new]": 1, "new inputs": 46915, "no exec duration": 38722000000, "no exec requests": 366, "pending": 0, "prog exec time": 289, "reproducing": 0, "rpc recv": 11527593416, "rpc sent": 2106877952, "signal": 300956, "smash jobs": 0, "triage jobs": 0, "vm output": 36392062, "vm restarts [base]": 22, "vm restarts [new]": 100 } 2025/11/04 12:33:38 runner 2 connected 2025/11/04 12:34:05 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 12:34:11 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 12:34:15 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 12:35:02 runner 0 connected 2025/11/04 12:35:08 runner 1 connected 2025/11/04 12:35:12 runner 7 connected 2025/11/04 12:36:32 crash "INFO: task hung in sync_bdevs" is already known 2025/11/04 12:36:32 base crash "INFO: task hung in sync_bdevs" is to be ignored 2025/11/04 12:36:32 patched crashed: INFO: task hung in sync_bdevs [need repro = false] 2025/11/04 12:37:31 runner 5 connected 2025/11/04 12:37:55 base crash: possible deadlock in ocfs2_init_acl 2025/11/04 12:37:57 triaged 92.9% of the corpus 2025/11/04 12:37:57 starting bug reproductions 2025/11/04 12:37:57 starting bug reproductions (max 6 VMs, 4 repros) 2025/11/04 12:38:27 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 3151, "comps overflows": 0, "corpus": 46215, "corpus [files]": 45238, "corpus [symbols]": 6019, "cover overflows": 59807, "coverage": 307166, "distributor delayed": 52408, "distributor undelayed": 52408, "distributor violated": 275, "exec candidate": 82312, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 32, "exec seeds": 0, "exec smash": 0, "exec total [base]": 150461, "exec total [new]": 358394, "exec triage": 148207, "executor restarts [base]": 245, "executor restarts [new]": 859, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 311137, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47247, "no exec duration": 38722000000, "no exec requests": 366, "pending": 0, "prog exec time": 253, "reproducing": 0, "rpc recv": 11924477300, "rpc sent": 2252704960, "signal": 301450, "smash jobs": 0, "triage jobs": 0, "vm output": 39149856, "vm restarts [base]": 23, "vm restarts [new]": 104 } 2025/11/04 12:38:51 runner 2 connected 2025/11/04 12:39:04 crash "WARNING in io_ring_exit_work" is already known 2025/11/04 12:39:04 base crash "WARNING in io_ring_exit_work" is to be ignored 2025/11/04 12:39:04 patched crashed: WARNING in io_ring_exit_work [need repro = false] 2025/11/04 12:39:07 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:39:16 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 12:40:01 runner 1 connected 2025/11/04 12:40:04 runner 7 connected 2025/11/04 12:40:14 runner 3 connected 2025/11/04 12:40:22 crash "WARNING in udf_truncate_extents" is already known 2025/11/04 12:40:22 base crash "WARNING in udf_truncate_extents" is to be ignored 2025/11/04 12:40:22 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/11/04 12:40:42 patched crashed: lost connection to test machine 
[need repro = false] 2025/11/04 12:40:49 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:41:19 runner 1 connected 2025/11/04 12:41:29 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 12:41:39 runner 7 connected 2025/11/04 12:41:45 runner 3 connected 2025/11/04 12:42:02 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:42:23 base crash: WARNING in xfrm6_tunnel_net_exit 2025/11/04 12:42:24 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:42:26 runner 4 connected 2025/11/04 12:42:58 runner 1 connected 2025/11/04 12:43:19 runner 1 connected 2025/11/04 12:43:21 runner 8 connected 2025/11/04 12:43:23 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 12:43:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 26, "corpus": 46330, "corpus [files]": 45331, "corpus [symbols]": 6031, "cover overflows": 63212, "coverage": 307401, "distributor delayed": 52674, "distributor undelayed": 52674, "distributor violated": 275, "exec candidate": 85463, "exec collide": 1975, "exec fuzz": 3615, "exec gen": 219, "exec hints": 885, "exec inject": 0, "exec minimize": 1218, "exec retries": 33, "exec seeds": 145, "exec smash": 1086, "exec total [base]": 157331, "exec total [new]": 371325, "exec triage": 148838, "executor restarts [base]": 262, "executor restarts [new]": 936, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 8, "hints jobs": 8, "max signal": 311589, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 736, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47451, "no exec duration": 46232000000, "no exec requests": 378, "pending": 0, "prog exec time": 521, "reproducing": 0, "rpc recv": 12481012632, "rpc sent": 2508188696, "signal": 301672, "smash jobs": 8, "triage jobs": 11, "vm output": 42081539, "vm restarts [base]": 25, "vm restarts [new]": 113 } 2025/11/04 12:44:18 crash "possible deadlock in hfsplus_get_block" is already known 2025/11/04 12:44:18 base crash "possible deadlock in hfsplus_get_block" is to be ignored 2025/11/04 12:44:18 patched crashed: possible deadlock in hfsplus_get_block [need repro = false] 2025/11/04 12:44:20 runner 0 connected 2025/11/04 12:45:15 runner 0 connected 2025/11/04 12:46:56 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 12:47:53 runner 7 connected 2025/11/04 12:48:06 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:48:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 94, "corpus": 46387, "corpus [files]": 45387, "corpus [symbols]": 6044, "cover overflows": 66888, "coverage": 307484, "distributor delayed": 52829, "distributor undelayed": 52829, "distributor violated": 275, "exec candidate": 85463, "exec collide": 4580, "exec fuzz": 8557, "exec gen": 453, "exec hints": 2269, "exec inject": 0, "exec minimize": 2748, "exec retries": 38, "exec seeds": 317, "exec smash": 2541, "exec total [base]": 161894, "exec total [new]": 384111, "exec triage": 149299, "executor restarts [base]": 289, "executor restarts [new]": 1041, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 15, 
"max signal": 312018, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1621, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47613, "no exec duration": 68572000000, "no exec requests": 411, "pending": 0, "prog exec time": 702, "reproducing": 0, "rpc recv": 12935663304, "rpc sent": 2796651136, "signal": 301752, "smash jobs": 10, "triage jobs": 15, "vm output": 46259190, "vm restarts [base]": 26, "vm restarts [new]": 115 } 2025/11/04 12:48:51 base crash: lost connection to test machine 2025/11/04 12:49:03 runner 6 connected 2025/11/04 12:49:17 base crash: unregister_netdevice: waiting for DEV to become free 2025/11/04 12:49:48 runner 0 connected 2025/11/04 12:50:09 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:50:14 runner 2 connected 2025/11/04 12:50:33 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:51:06 runner 6 connected 2025/11/04 12:51:19 crash "INFO: task hung in corrupted" is already known 2025/11/04 12:51:19 base crash "INFO: task hung in corrupted" is to be ignored 2025/11/04 12:51:19 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/11/04 12:51:28 crash "KASAN: slab-out-of-bounds Read in ext4_xattr_set_entry" is already known 2025/11/04 12:51:28 base crash "KASAN: slab-out-of-bounds Read in ext4_xattr_set_entry" is to be ignored 2025/11/04 12:51:28 patched crashed: KASAN: slab-out-of-bounds Read in ext4_xattr_set_entry [need repro = false] 2025/11/04 12:51:30 runner 2 connected 2025/11/04 12:52:00 crash "WARNING in raw_ioctl" is already known 2025/11/04 12:52:00 base crash "WARNING in raw_ioctl" is to be ignored 2025/11/04 12:52:00 patched crashed: WARNING in raw_ioctl [need repro = false] 2025/11/04 12:52:08 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:52:16 runner 0 connected 2025/11/04 12:52:25 runner 1 connected 2025/11/04 12:52:32 base crash: WARNING in raw_ioctl 2025/11/04 12:52:57 runner 5 connected 2025/11/04 12:53:05 runner 8 connected 2025/11/04 12:53:23 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 12:53:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 157, "corpus": 46437, "corpus [files]": 45424, "corpus [symbols]": 6054, "cover overflows": 69081, "coverage": 307595, "distributor delayed": 52975, "distributor undelayed": 52975, "distributor violated": 275, "exec candidate": 85463, "exec collide": 5980, "exec fuzz": 11129, "exec gen": 575, "exec hints": 3426, "exec inject": 0, "exec minimize": 3937, "exec retries": 40, "exec seeds": 474, "exec smash": 3847, "exec total [base]": 164489, "exec total [new]": 392361, "exec triage": 149634, "executor restarts [base]": 325, "executor restarts [new]": 1140, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 9, "max signal": 312529, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2339, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47734, "no exec duration": 79032000000, "no exec requests": 424, "pending": 0, "prog exec time": 601, "reproducing": 0, "rpc recv": 13435453820, "rpc sent": 3024903712, "signal": 301857, "smash jobs": 6, "triage jobs": 12, "vm output": 51157545, "vm restarts 
[base]": 28, "vm restarts [new]": 122 } 2025/11/04 12:53:31 runner 1 connected 2025/11/04 12:54:00 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:54:20 runner 4 connected 2025/11/04 12:54:57 runner 8 connected 2025/11/04 12:55:00 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 12:55:03 crash "possible deadlock in padata_do_serial" is already known 2025/11/04 12:55:03 base crash "possible deadlock in padata_do_serial" is to be ignored 2025/11/04 12:55:03 patched crashed: possible deadlock in padata_do_serial [need repro = false] 2025/11/04 12:55:14 base crash: INFO: task hung in reg_check_chans_work 2025/11/04 12:55:58 runner 6 connected 2025/11/04 12:55:59 runner 7 connected 2025/11/04 12:56:11 runner 2 connected 2025/11/04 12:56:36 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known 2025/11/04 12:56:36 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored 2025/11/04 12:56:36 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false] 2025/11/04 12:56:52 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known 2025/11/04 12:56:52 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored 2025/11/04 12:56:52 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false] 2025/11/04 12:56:57 patched crashed: BUG: Bad page state in skb_pp_cow_data [need repro = true] 2025/11/04 12:56:57 scheduled a reproduction of 'BUG: Bad page state in skb_pp_cow_data' 2025/11/04 12:56:57 start reproducing 'BUG: Bad page state in skb_pp_cow_data' 2025/11/04 12:57:03 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known 2025/11/04 12:57:03 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored 2025/11/04 12:57:03 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false] 2025/11/04 12:57:15 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known 2025/11/04 12:57:15 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored 2025/11/04 12:57:15 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false] 2025/11/04 12:57:33 runner 8 connected 2025/11/04 12:57:49 runner 7 connected 2025/11/04 12:57:51 base crash: lost connection to test machine 2025/11/04 12:58:00 runner 3 connected 2025/11/04 12:58:11 runner 6 connected 2025/11/04 12:58:24 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 12:58:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 235, "corpus": 46491, "corpus [files]": 45470, "corpus [symbols]": 6060, "cover overflows": 72008, "coverage": 307777, "distributor delayed": 53099, "distributor undelayed": 53099, "distributor violated": 275, "exec candidate": 85463, "exec collide": 7411, "exec fuzz": 13627, "exec gen": 728, "exec hints": 4687, "exec inject": 0, "exec minimize": 5334, "exec retries": 41, "exec seeds": 630, "exec smash": 5050, "exec total [base]": 167237, "exec total [new]": 400763, "exec triage": 149932, "executor restarts [base]": 353, "executor restarts [new]": 1215, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 8, "max signal": 312851, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3141, "minimize: 
filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47842, "no exec duration": 79032000000, "no exec requests": 424, "pending": 0, "prog exec time": 530, "reproducing": 1, "rpc recv": 13984206256, "rpc sent": 3271248416, "signal": 302002, "smash jobs": 17, "triage jobs": 7, "vm output": 55693232, "vm restarts [base]": 30, "vm restarts [new]": 130 } 2025/11/04 12:58:48 runner 0 connected 2025/11/04 12:59:27 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 12:59:30 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/04 12:59:44 base crash: UBSAN: array-index-out-of-bounds in dtInsertEntry 2025/11/04 12:59:49 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 12:59:49 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:00:24 runner 3 connected 2025/11/04 13:00:27 runner 7 connected 2025/11/04 13:00:32 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:00:41 runner 0 connected 2025/11/04 13:00:46 runner 5 connected 2025/11/04 13:01:02 patched crashed: KASAN: slab-use-after-free Read in jfs_lazycommit [need repro = true] 2025/11/04 13:01:02 scheduled a reproduction of 'KASAN: slab-use-after-free Read in jfs_lazycommit' 2025/11/04 13:01:02 start reproducing 'KASAN: slab-use-after-free Read in jfs_lazycommit' 2025/11/04 13:01:18 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:01:57 crash "kernel BUG in may_open" is already known 2025/11/04 13:01:57 base crash "kernel BUG in may_open" is to be ignored 2025/11/04 13:01:57 patched crashed: kernel BUG in may_open [need repro = false] 2025/11/04 13:01:59 runner 6 connected 2025/11/04 13:02:09 crash "kernel BUG in may_open" is already known 2025/11/04 13:02:09 base crash "kernel BUG in may_open" is to be ignored 2025/11/04 13:02:09 patched crashed: kernel BUG in may_open [need repro = false] 2025/11/04 13:02:21 crash "kernel BUG in may_open" is already known 2025/11/04 13:02:21 base crash "kernel BUG in may_open" is to be ignored 2025/11/04 13:02:21 patched crashed: kernel BUG in may_open [need repro = false] 2025/11/04 13:02:33 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:02:54 runner 7 connected 2025/11/04 13:03:06 runner 3 connected 2025/11/04 13:03:07 base crash: kernel BUG in may_open 2025/11/04 13:03:19 runner 5 connected 2025/11/04 13:03:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 264, "corpus": 46548, "corpus [files]": 45504, "corpus [symbols]": 6064, "cover overflows": 
74032, "coverage": 307933, "distributor delayed": 53284, "distributor undelayed": 53284, "distributor violated": 277, "exec candidate": 85463, "exec collide": 8798, "exec fuzz": 16200, "exec gen": 850, "exec hints": 5784, "exec inject": 0, "exec minimize": 6435, "exec retries": 45, "exec seeds": 793, "exec smash": 6523, "exec total [base]": 171867, "exec total [new]": 409017, "exec triage": 150265, "executor restarts [base]": 374, "executor restarts [new]": 1268, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 4, "max signal": 313142, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3795, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47959, "no exec duration": 85914000000, "no exec requests": 435, "pending": 0, "prog exec time": 582, "reproducing": 2, "rpc recv": 14548874616, "rpc sent": 3491259080, "signal": 302137, "smash jobs": 3, "triage jobs": 14, "vm output": 58848490, "vm restarts [base]": 32, "vm restarts [new]": 137 } 2025/11/04 13:03:31 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:04:05 runner 0 connected 2025/11/04 13:04:13 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:05:28 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:06:27 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:06:31 base crash: lost connection to test machine 2025/11/04 13:07:01 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:07:28 runner 0 connected 2025/11/04 13:08:20 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:08:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 309, "corpus": 46590, "corpus [files]": 45533, "corpus [symbols]": 6065, "cover overflows": 76395, "coverage": 308008, "distributor delayed": 53383, "distributor undelayed": 53382, "distributor violated": 277, "exec candidate": 85463, "exec collide": 10533, "exec fuzz": 19449, "exec gen": 1020, "exec hints": 6677, "exec inject": 0, "exec minimize": 7677, "exec retries": 49, "exec seeds": 907, "exec smash": 7533, "exec total [base]": 
176147, "exec total [new]": 417670, "exec triage": 150499, "executor restarts [base]": 408, "executor restarts [new]": 1331, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 4, "max signal": 313338, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4436, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48035, "no exec duration": 91914000000, "no exec requests": 441, "pending": 0, "prog exec time": 680, "reproducing": 2, "rpc recv": 14900924716, "rpc sent": 3768079968, "signal": 302210, "smash jobs": 4, "triage jobs": 3, "vm output": 62577458, "vm restarts [base]": 34, "vm restarts [new]": 137 } 2025/11/04 13:08:30 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:08:32 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:08:34 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:09:23 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:09:26 runner 8 connected 2025/11/04 13:09:26 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57724: connect: connection refused 2025/11/04 13:09:26 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57724: connect: connection refused 2025/11/04 13:09:29 runner 5 connected 2025/11/04 13:09:31 runner 7 connected 2025/11/04 13:09:36 base crash: lost connection to test machine 2025/11/04 13:09:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:09:58 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:10:34 runner 0 connected 2025/11/04 13:10:54 runner 4 connected 2025/11/04 13:11:15 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:11:47 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:11:59 patched crashed: SYZFAIL: tun: ioctl(TUNSETIFF) failed [need repro = false] 2025/11/04 13:12:02 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 13:12:15 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 13:12:26 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/11/04 13:12:41 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 13:12:46 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:12:55 runner 8 connected 2025/11/04 13:12:57 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/04 13:12:58 runner 4 connected 2025/11/04 13:13:04 runner 7 connected 2025/11/04 13:13:14 runner 6 connected 2025/11/04 13:13:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 327, "corpus": 46611, "corpus [files]": 45548, "corpus [symbols]": 6069, "cover overflows": 77954, "coverage": 308040, "distributor delayed": 53468, "distributor undelayed": 53468, "distributor violated": 278, "exec candidate": 85463, "exec collide": 11717, "exec fuzz": 21758, "exec gen": 1141, "exec hints": 7390, "exec inject": 0, "exec minimize": 8294, "exec retries": 70, "exec seeds": 967, "exec smash": 7947, "exec total [base]": 180550, "exec total [new]": 423268, "exec triage": 150655, "executor restarts [base]": 443, "executor restarts [new]": 1383, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 8, "max signal": 313458, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4858, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48094, "no exec duration": 93268000000, "no exec requests": 447, "pending": 0, "prog exec time": 485, "reproducing": 2, "rpc recv": 15422797476, "rpc sent": 4017233440, "signal": 302241, "smash jobs": 8, "triage jobs": 6, "vm output": 65142226, "vm restarts [base]": 35, "vm restarts [new]": 145 } 2025/11/04 13:13:29 runner 3 connected 2025/11/04 13:13:46 runner 5 connected 2025/11/04 13:14:08 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:14:23 base crash: kernel BUG in jfs_evict_inode 2025/11/04 13:15:20 runner 2 connected 2025/11/04 13:16:09 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/11/04 13:16:35 base crash: INFO: task hung in addrconf_dad_work 2025/11/04 13:16:36 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:17:06 runner 1 connected 2025/11/04 13:17:24 runner 0 connected 2025/11/04 13:17:25 runner 3 connected 2025/11/04 13:17:32 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 13:17:34 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:17:56 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:17:58 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:18:25 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:18:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 349, "corpus": 46640, "corpus [files]": 45568, "corpus [symbols]": 6075, "cover overflows": 81306, "coverage": 308098, "distributor delayed": 
53577, "distributor undelayed": 53576, "distributor violated": 278, "exec candidate": 85463, "exec collide": 13621, "exec fuzz": 25396, "exec gen": 1328, "exec hints": 8683, "exec inject": 0, "exec minimize": 8973, "exec retries": 70, "exec seeds": 1036, "exec smash": 8688, "exec total [base]": 183172, "exec total [new]": 431988, "exec triage": 150861, "executor restarts [base]": 468, "executor restarts [new]": 1415, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 5, "max signal": 313654, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5196, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48165, "no exec duration": 112683000000, "no exec requests": 481, "pending": 0, "prog exec time": 503, "reproducing": 2, "rpc recv": 15767239320, "rpc sent": 4250273552, "signal": 302293, "smash jobs": 5, "triage jobs": 2, "vm output": 67911888, "vm restarts [base]": 38, "vm restarts [new]": 148 } 2025/11/04 13:18:29 runner 5 connected 2025/11/04 13:18:52 runner 1 connected 2025/11/04 13:18:54 runner 7 connected 2025/11/04 13:19:03 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:19:19 base crash: general protection fault in pcl818_ai_cancel 2025/11/04 13:19:54 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:20:08 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 13:20:16 runner 2 connected 2025/11/04 13:20:29 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:21:04 base crash: kernel BUG in jfs_evict_inode 2025/11/04 13:21:05 runner 6 connected 2025/11/04 13:21:08 base crash: WARNING in xfrm6_tunnel_net_exit 2025/11/04 13:21:20 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:21:40 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:22:02 runner 1 connected 2025/11/04 13:22:05 runner 0 connected 2025/11/04 13:22:13 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/04 13:22:38 runner 3 connected 2025/11/04 13:22:39 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:22:50 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:23:09 runner 8 connected 2025/11/04 13:23:12 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': 
failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:23:14 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 13:23:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 369, "corpus": 46656, "corpus [files]": 45583, "corpus [symbols]": 6078, "cover overflows": 83597, "coverage": 308126, "distributor delayed": 53666, "distributor undelayed": 53666, "distributor violated": 278, "exec candidate": 85463, "exec collide": 15712, "exec fuzz": 29288, "exec gen": 1546, "exec hints": 9145, "exec inject": 0, "exec minimize": 9385, "exec retries": 74, "exec seeds": 1087, "exec smash": 9098, "exec total [base]": 186133, "exec total [new]": 439686, "exec triage": 151019, "executor restarts [base]": 504, "executor restarts [new]": 1486, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 1, "max signal": 313771, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5458, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48218, "no exec duration": 125295000000, "no exec requests": 506, "pending": 0, "prog exec time": 657, "reproducing": 2, "rpc recv": 16212746136, "rpc sent": 4468273384, "signal": 302319, "smash jobs": 2, "triage jobs": 0, "vm output": 75356179, "vm restarts [base]": 42, "vm restarts [new]": 153 } 2025/11/04 13:23:48 runner 2 connected 2025/11/04 13:24:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:24:07 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:24:11 runner 6 connected 2025/11/04 13:24:23 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:24:34 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/11/04 13:24:36 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:24:59 runner 3 connected 2025/11/04 13:25:10 crash "INFO: task hung in bdev_open" is already known 2025/11/04 13:25:10 base crash "INFO: task hung in bdev_open" is to be ignored 2025/11/04 13:25:10 patched crashed: INFO: task hung in bdev_open [need repro = false] 2025/11/04 13:25:19 runner 8 connected 2025/11/04 13:25:20 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:25:24 runner 7 connected 2025/11/04 13:25:30 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:26:00 runner 5 connected 2025/11/04 13:26:04 reproducing crash 'BUG: Bad page state in skb_pp_cow_data': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f net/core/page_pool.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:26:04 found repro for "BUG: Bad page state in skb_pp_cow_data" (orig title: "-SAME-", reliability: 1), took 29.02 minutes 2025/11/04 13:26:04 "BUG: Bad page state in skb_pp_cow_data": saved crash log into 1762262764.crash.log 2025/11/04 13:26:04 repro finished 'BUG: Bad page state in skb_pp_cow_data', repro=true crepro=false desc='BUG: Bad page state in skb_pp_cow_data' hub=false from_dashboard=false 2025/11/04 13:26:04 "BUG: Bad page state in skb_pp_cow_data": saved repro log into 1762262764.repro.log 2025/11/04 13:26:17 runner 6 connected 2025/11/04 13:26:49 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 13:26:56 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:27:01 runner 0 connected 2025/11/04 13:27:32 attempt #0 to run "BUG: Bad page state in skb_pp_cow_data" on base: crashed with BUG: Bad page state in skb_pp_cow_data 2025/11/04 13:27:32 crashes both: BUG: Bad page state in skb_pp_cow_data / BUG: Bad page state in skb_pp_cow_data 2025/11/04 13:27:46 runner 1 connected 2025/11/04 13:27:52 runner 7 connected 2025/11/04 13:28:15 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:10175: connect: connection refused 2025/11/04 13:28:15 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:10175: connect: connection refused 2025/11/04 13:28:25 base crash: lost connection to test machine 2025/11/04 13:28:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 379, "corpus": 46669, "corpus [files]": 45595, "corpus [symbols]": 6080, "cover overflows": 85545, "coverage": 308151, "distributor delayed": 53772, "distributor undelayed": 53772, "distributor violated": 278, "exec candidate": 85463, "exec collide": 17508, "exec fuzz": 32694, "exec gen": 1722, "exec hints": 9217, "exec inject": 0, "exec minimize": 9779, "exec retries": 79, "exec seeds": 1127, "exec smash": 9428, "exec total [base]": 189297, "exec total [new]": 446077, "exec triage": 151183, "executor restarts [base]": 546, "executor restarts [new]": 1576, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 1, "max signal": 313859, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5708, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48278, "no exec duration": 125295000000, "no exec requests": 506, "pending": 0, "prog exec time": 539, "reproducing": 1, "rpc recv": 16696296792, "rpc sent": 4677850496, "signal": 302340, "smash jobs": 3, "triage jobs": 8, "vm output": 79828181, "vm restarts [base]": 44, "vm restarts [new]": 161 } 2025/11/04 13:28:28 runner 0 connected 2025/11/04 13:29:07 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:29:22 runner 1 connected 2025/11/04 13:29:23 base crash: lost connection to test machine 2025/11/04 13:30:04 runner 7 connected 2025/11/04 13:30:20 runner 2 connected 2025/11/04 13:30:20 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:30:28 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:30:30 base crash: lost connection to test machine 2025/11/04 13:31:17 runner 3 connected 
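The burst of "failed to symbolize report" entries above comes from the repro loop shelling out to the kernel tree's scripts/get_maintainer.pl with the exact arguments shown in the log, and Go surfaces a missing executable as a "fork/exec ...: no such file or directory" error. The snippet below is a minimal sketch of that failure mode, not syzkaller's implementation: it only assumes the Go standard library and reuses the relative path and flags copied from the log.

```go
// Minimal sketch (illustrative, not syzkaller code): run the maintainer script
// the way the log shows. The path is relative, so it only resolves when the
// process's working directory is a full kernel checkout.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("scripts/get_maintainer.pl",
		"--git-min-percent=15", "-f", "net/core/page_pool.c")
	out, err := cmd.Output()
	if err != nil {
		// Outside a complete kernel tree this prints something like:
		// "fork/exec scripts/get_maintainer.pl: no such file or directory",
		// matching the repeated symbolization failures in this log.
		fmt.Println("symbolization helper failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
```

The failure is cosmetic for the run itself: the reproduction above still completes ("found repro for \"BUG: Bad page state in skb_pp_cow_data\""), the resulting reports are just left without maintainer annotation.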
2025/11/04 13:31:20 runner 0 connected 2025/11/04 13:31:25 runner 1 connected 2025/11/04 13:31:51 crash "general protection fault in txEnd" is already known 2025/11/04 13:31:51 base crash "general protection fault in txEnd" is to be ignored 2025/11/04 13:31:51 patched crashed: general protection fault in txEnd [need repro = false] 2025/11/04 13:32:07 base crash: general protection fault in txEnd 2025/11/04 13:32:27 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/11/04 13:32:48 runner 6 connected 2025/11/04 13:33:04 runner 0 connected 2025/11/04 13:33:15 base crash: possible deadlock in ocfs2_xattr_set 2025/11/04 13:33:20 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 13:33:24 runner 8 connected 2025/11/04 13:33:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 414, "corpus": 46698, "corpus [files]": 45622, "corpus [symbols]": 6087, "cover overflows": 88496, "coverage": 308196, "distributor delayed": 53932, "distributor undelayed": 53932, "distributor violated": 278, "exec candidate": 85463, "exec collide": 19970, "exec fuzz": 37346, "exec gen": 1988, "exec hints": 9657, "exec inject": 0, "exec minimize": 10462, "exec retries": 81, "exec seeds": 1222, "exec smash": 10176, "exec total [base]": 191699, "exec total [new]": 455745, "exec triage": 151509, "executor restarts [base]": 586, "executor restarts [new]": 1636, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 1, "max signal": 314032, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6116, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48384, "no exec duration": 138418000000, "no exec requests": 528, "pending": 0, "prog exec time": 503, "reproducing": 1, "rpc recv": 17147246768, "rpc sent": 4918514592, "signal": 302384, "smash jobs": 1, "triage jobs": 6, "vm output": 82931573, "vm restarts [base]": 50, "vm restarts [new]": 165 } 2025/11/04 13:33:36 base crash: unregister_netdevice: waiting for DEV to become free 2025/11/04 13:33:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:34:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:34:07 runner 1 connected 2025/11/04 13:34:11 patched crashed: KASAN: slab-use-after-free Read in jfs_syncpt [need repro = true] 2025/11/04 13:34:11 scheduled a reproduction of 'KASAN: slab-use-after-free Read in jfs_syncpt' 2025/11/04 13:34:11 start reproducing 'KASAN: slab-use-after-free Read in jfs_syncpt' 2025/11/04 13:34:18 runner 5 connected 2025/11/04 13:34:25 runner 2 connected 2025/11/04 13:34:28 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 13:34:41 base crash: lost connection to test machine 2025/11/04 13:34:53 runner 3 connected 2025/11/04 13:35:00 runner 6 connected 2025/11/04 13:35:09 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:35:18 runner 4 connected 2025/11/04 13:35:37 runner 1 connected 2025/11/04 13:35:58 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:36:06 runner 0 
connected 2025/11/04 13:36:34 crash "WARNING in udf_truncate_extents" is already known 2025/11/04 13:36:34 base crash "WARNING in udf_truncate_extents" is to be ignored 2025/11/04 13:36:34 patched crashed: WARNING in udf_truncate_extents [need repro = false] 2025/11/04 13:36:35 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:36:47 base crash: lost connection to test machine 2025/11/04 13:36:55 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:37:31 runner 6 connected 2025/11/04 13:37:34 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:37:36 runner 2 connected 2025/11/04 13:37:44 runner 3 connected 2025/11/04 13:37:50 crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is already known 2025/11/04 13:37:50 base crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is to be ignored 2025/11/04 13:37:50 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = false] 2025/11/04 13:38:16 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:38:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 423, "corpus": 46718, "corpus [files]": 45635, "corpus [symbols]": 6089, "cover overflows": 90290, "coverage": 308234, "distributor delayed": 54021, "distributor undelayed": 54021, "distributor violated": 278, "exec candidate": 85463, "exec collide": 21743, "exec fuzz": 40868, "exec gen": 2159, "exec hints": 9888, "exec inject": 0, "exec minimize": 11045, "exec retries": 81, "exec seeds": 1270, "exec smash": 10567, "exec total [base]": 194334, "exec total [new]": 462622, "exec triage": 151664, "executor restarts [base]": 622, "executor restarts [new]": 1701, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 1, "max signal": 314145, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6468, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48438, "no exec duration": 145833000000, "no exec requests": 541, "pending": 0, "prog exec time": 551, "reproducing": 2, "rpc recv": 17673186884, "rpc sent": 5119002032, "signal": 302418, "smash jobs": 6, "triage jobs": 8, "vm output": 85859763, "vm restarts [base]": 55, "vm restarts [new]": 171 } 2025/11/04 13:38:30 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:38:39 runner 4 connected 2025/11/04 13:38:59 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:39:26 runner 2 connected 2025/11/04 13:39:27 patched crashed: 
general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/04 13:39:39 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:39:46 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:39:48 base crash: unregister_netdevice: waiting for DEV to become free 2025/11/04 13:40:24 runner 3 connected 2025/11/04 13:40:29 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 13:40:35 runner 4 connected 2025/11/04 13:40:36 runner 1 connected 2025/11/04 13:40:48 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:41:21 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:41:28 runner 6 connected 2025/11/04 13:41:47 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 13:42:20 base crash: possible deadlock in ocfs2_evict_inode 2025/11/04 13:42:21 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:42:45 runner 4 connected 2025/11/04 13:43:11 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/04 13:43:15 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:43:16 runner 2 connected 2025/11/04 13:43:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 446, "corpus": 46749, "corpus [files]": 45663, "corpus [symbols]": 6096, "cover overflows": 92340, "coverage": 308293, "distributor delayed": 54136, "distributor undelayed": 54136, "distributor violated": 278, "exec candidate": 85463, "exec collide": 23449, "exec fuzz": 44088, "exec gen": 2336, "exec hints": 10241, "exec inject": 0, "exec minimize": 11882, "exec retries": 83, "exec seeds": 1360, "exec smash": 11384, "exec total [base]": 197541, "exec total [new]": 470021, "exec triage": 151859, "executor restarts [base]": 654, "executor restarts [new]": 1754, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 314465, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6958, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48511, "no exec duration": 156614000000, "no exec requests": 559, "pending": 0, "prog exec time": 526, "reproducing": 2, "rpc recv": 18124009744, "rpc sent": 5329536024, "signal": 302477, "smash jobs": 5, "triage 
jobs": 8, "vm output": 88696644, "vm restarts [base]": 58, "vm restarts [new]": 176 } 2025/11/04 13:44:02 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:44:08 runner 1 connected 2025/11/04 13:45:02 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/04 13:45:13 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/11/04 13:45:13 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:45:40 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:45:58 runner 5 connected 2025/11/04 13:46:03 runner 8 connected 2025/11/04 13:46:10 runner 0 connected 2025/11/04 13:46:31 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:48:14 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/04 13:48:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 456, "corpus": 46772, "corpus [files]": 45682, "corpus [symbols]": 6099, "cover overflows": 94295, "coverage": 308354, "distributor delayed": 54229, "distributor undelayed": 54229, "distributor violated": 278, "exec candidate": 85463, "exec collide": 25168, "exec fuzz": 47493, "exec gen": 2519, "exec hints": 11612, "exec inject": 0, "exec minimize": 12758, "exec retries": 83, "exec seeds": 1423, "exec smash": 11957, "exec total [base]": 201490, "exec total [new]": 478374, "exec triage": 152025, "executor restarts [base]": 692, "executor restarts [new]": 1823, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 6, "max signal": 314581, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7466, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48567, "no exec duration": 165625000000, "no exec requests": 574, "pending": 0, "prog exec time": 463, "reproducing": 2, "rpc recv": 18468272676, "rpc sent": 5555269752, "signal": 302536, "smash jobs": 4, "triage jobs": 3, "vm output": 92149114, "vm restarts [base]": 60, "vm restarts [new]": 178 } 2025/11/04 13:48:34 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:49:10 runner 3 connected 2025/11/04 13:50:10 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:50:50 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = 
false] 2025/11/04 13:51:13 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:51:33 base crash: WARNING in udf_truncate_extents 2025/11/04 13:51:47 runner 3 connected 2025/11/04 13:52:24 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 13:52:30 runner 0 connected 2025/11/04 13:53:08 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:53:20 runner 8 connected 2025/11/04 13:53:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 466, "corpus": 46782, "corpus [files]": 45689, "corpus [symbols]": 6100, "cover overflows": 96602, "coverage": 308377, "distributor delayed": 54330, "distributor undelayed": 54330, "distributor violated": 278, "exec candidate": 85463, "exec collide": 27379, "exec fuzz": 51746, "exec gen": 2733, "exec hints": 12187, "exec inject": 0, "exec minimize": 13095, "exec retries": 88, "exec seeds": 1442, "exec smash": 12245, "exec total [base]": 206253, "exec total [new]": 486445, "exec triage": 152192, "executor restarts [base]": 727, "executor restarts [new]": 1890, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 2, "max signal": 314716, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7642, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48621, "no exec duration": 179824000000, "no exec requests": 595, "pending": 0, "prog exec time": 490, "reproducing": 2, "rpc recv": 18796662704, "rpc sent": 5813250768, "signal": 302557, "smash jobs": 1, "triage jobs": 3, "vm output": 95665364, "vm restarts [base]": 61, "vm restarts [new]": 181 } 2025/11/04 13:53:50 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:55:03 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/04 13:55:04 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:56:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:56:01 runner 0 connected 2025/11/04 13:56:05 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:56:10 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/04 13:56:20 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:29364: connect: connection 
refused 2025/11/04 13:56:20 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:29364: connect: connection refused 2025/11/04 13:56:30 base crash: lost connection to test machine 2025/11/04 13:56:42 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:56:58 runner 7 connected 2025/11/04 13:57:07 runner 3 connected 2025/11/04 13:57:27 runner 1 connected 2025/11/04 13:57:39 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:58:01 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 13:58:10 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:58:27 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 493, "corpus": 46801, "corpus [files]": 45705, "corpus [symbols]": 6105, "cover overflows": 99251, "coverage": 308417, "distributor delayed": 54442, "distributor undelayed": 54441, "distributor violated": 280, "exec candidate": 85463, "exec collide": 29643, "exec fuzz": 56182, "exec gen": 2972, "exec hints": 12678, "exec inject": 0, "exec minimize": 13607, "exec retries": 91, "exec seeds": 1495, "exec smash": 12740, "exec total [base]": 210490, "exec total [new]": 495147, "exec triage": 152402, "executor restarts [base]": 753, "executor restarts [new]": 1935, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 4, "max signal": 314886, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7891, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48692, "no exec duration": 192254000000, "no exec requests": 621, "pending": 0, "prog exec time": 558, "reproducing": 2, "rpc recv": 19181263784, "rpc sent": 6077789560, "signal": 302597, "smash jobs": 4, "triage jobs": 4, "vm output": 98850039, "vm restarts [base]": 63, "vm restarts [new]": 183 } 2025/11/04 13:58:58 runner 4 connected 2025/11/04 13:59:10 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 13:59:54 base crash: WARNING in xfrm_state_fini 2025/11/04 14:00:11 crash "WARNING in io_ring_exit_work" is already known 2025/11/04 14:00:11 base crash "WARNING in io_ring_exit_work" is to be ignored 2025/11/04 14:00:11 patched crashed: WARNING in io_ring_exit_work [need repro = false] 2025/11/04 14:00:47 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or 
directory 2025/11/04 14:00:50 runner 2 connected 2025/11/04 14:01:08 runner 6 connected 2025/11/04 14:01:21 crash "INFO: task hung in corrupted" is already known 2025/11/04 14:01:21 base crash "INFO: task hung in corrupted" is to be ignored 2025/11/04 14:01:21 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/11/04 14:01:26 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 14:01:53 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 14:02:18 runner 7 connected 2025/11/04 14:02:50 runner 3 connected 2025/11/04 14:03:06 base crash: lost connection to test machine 2025/11/04 14:03:27 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 499, "corpus": 46816, "corpus [files]": 45713, "corpus [symbols]": 6107, "cover overflows": 101193, "coverage": 308435, "distributor delayed": 54541, "distributor undelayed": 54541, "distributor violated": 284, "exec candidate": 85463, "exec collide": 31398, "exec fuzz": 59606, "exec gen": 3147, "exec hints": 12966, "exec inject": 0, "exec minimize": 14019, "exec retries": 93, "exec seeds": 1536, "exec smash": 13023, "exec total [base]": 214509, "exec total [new]": 501670, "exec triage": 152542, "executor restarts [base]": 774, "executor restarts [new]": 1990, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 3, "max signal": 314980, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8124, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48739, "no exec duration": 202092000000, "no exec requests": 641, "pending": 0, "prog exec time": 488, "reproducing": 2, "rpc recv": 19554199920, "rpc sent": 6297199040, "signal": 302615, "smash jobs": 4, "triage jobs": 4, "vm output": 102105884, "vm restarts [base]": 64, "vm restarts [new]": 187 } 2025/11/04 14:03:34 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 14:04:03 runner 2 connected 2025/11/04 14:04:10 base crash: kernel BUG in jfs_evict_inode 2025/11/04 14:04:18 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 14:04:21 repro finished 'KASAN: slab-use-after-free Read in jfs_lazycommit', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/04 14:04:21 failed repro for "KASAN: slab-use-after-free Read in jfs_lazycommit", err=%!s() 2025/11/04 14:04:21 "KASAN: slab-use-after-free Read in jfs_lazycommit": saved crash log into 1762265061.crash.log 2025/11/04 14:04:21 "KASAN: slab-use-after-free Read in jfs_lazycommit": saved repro log into 1762265061.repro.log 2025/11/04 14:04:51 patched crashed: INFO: task hung in reg_check_chans_work [need repro = false] 2025/11/04 14:05:02 reproducing crash 'KASAN: slab-use-after-free Read in 
jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/04 14:05:02 repro finished 'KASAN: slab-use-after-free Read in jfs_syncpt', repro=true crepro=false desc='general protection fault in lmLogSync' hub=false from_dashboard=false 2025/11/04 14:05:02 found repro for "general protection fault in lmLogSync" (orig title: "KASAN: slab-use-after-free Read in jfs_syncpt", reliability: 1), took 29.73 minutes 2025/11/04 14:05:02 "general protection fault in lmLogSync": saved crash log into 1762265102.crash.log 2025/11/04 14:05:02 "general protection fault in lmLogSync": saved repro log into 1762265102.repro.log 2025/11/04 14:05:07 runner 1 connected 2025/11/04 14:05:15 runner 2 connected 2025/11/04 14:05:17 runner 0 connected 2025/11/04 14:05:47 runner 4 connected 2025/11/04 14:05:58 runner 1 connected 2025/11/04 14:06:37 attempt #0 to run "general protection fault in lmLogSync" on base: crashed with KASAN: slab-use-after-free Read in jfs_syncpt 2025/11/04 14:06:38 crash "general protection fault in lmLogSync" is already known 2025/11/04 14:06:38 base crash "general protection fault in lmLogSync" is to be ignored 2025/11/04 14:06:38 crashes both: general protection fault in lmLogSync / KASAN: slab-use-after-free Read in jfs_syncpt 2025/11/04 14:07:35 runner 0 connected 2025/11/04 14:08:16 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 14:08:27 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 518, "corpus": 46840, "corpus [files]": 45728, "corpus [symbols]": 6108, "cover overflows": 104005, "coverage": 308473, "distributor delayed": 54628, "distributor undelayed": 54628, "distributor violated": 284, "exec candidate": 85463, "exec collide": 34116, "exec fuzz": 64792, "exec gen": 3405, "exec hints": 13207, "exec inject": 0, "exec minimize": 14731, "exec retries": 95, "exec seeds": 1604, "exec smash": 13453, "exec total [base]": 217080, "exec total [new]": 511515, "exec triage": 152764, "executor restarts [base]": 814, "executor restarts [new]": 2064, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 315183, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8517, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48815, "no exec duration": 215058000000, "no exec requests": 663, "pending": 0, "prog exec time": 565, "reproducing": 0, "rpc recv": 19938642476, "rpc sent": 6563713424, "signal": 302653, "smash jobs": 7, "triage jobs": 9, "vm output": 106377649, "vm restarts [base]": 67, "vm restarts [new]": 191 } 2025/11/04 14:08:33 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/04 14:08:34 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 14:09:00 patched crashed: lost connection to test machine [need repro = false] 2025/11/04 14:09:12 runner 1 connected 2025/11/04 14:09:22 runner 2 connected 2025/11/04 14:09:33 runner 7 connected 2025/11/04 14:09:57 runner 0 connected 2025/11/04 14:10:22 base crash: WARNING in xfrm_state_fini 2025/11/04 14:10:42 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/04 14:11:11 patched crashed: possible deadlock in 
2025/11/04 14:11:19 runner 0 connected
2025/11/04 14:11:35 crash "possible deadlock in ocfs2_setattr" is already known
2025/11/04 14:11:35 base crash "possible deadlock in ocfs2_setattr" is to be ignored
2025/11/04 14:11:35 patched crashed: possible deadlock in ocfs2_setattr [need repro = false]
2025/11/04 14:11:39 runner 5 connected
2025/11/04 14:11:42 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:11:50 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:12:01 runner 4 connected
2025/11/04 14:12:06 base crash: INFO: task hung in user_get_super
2025/11/04 14:12:10 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/04 14:12:16 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:12:27 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:12:32 runner 1 connected
2025/11/04 14:12:40 runner 7 connected
2025/11/04 14:12:47 runner 2 connected
2025/11/04 14:13:00 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/04 14:13:02 runner 2 connected
2025/11/04 14:13:06 runner 6 connected
2025/11/04 14:13:13 runner 3 connected
2025/11/04 14:13:24 runner 8 connected
2025/11/04 14:13:27 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 565, "corpus": 46863, "corpus [files]": 45744, "corpus [symbols]": 6113, "cover overflows": 107271, "coverage": 308520, "distributor delayed": 54704, "distributor undelayed": 54704, "distributor violated": 284, "exec candidate": 85463, "exec collide": 36508, "exec fuzz": 69194, "exec gen": 3658, "exec hints": 13807, "exec inject": 0, "exec minimize": 15493, "exec retries": 96, "exec seeds": 1658, "exec smash": 14038, "exec total [base]": 220017, "exec total [new]": 520717, "exec triage": 152918, "executor restarts [base]": 840, "executor restarts [new]": 2151, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 3, "max signal": 315271, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8933, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48866, "no exec duration": 237681000000, "no exec requests": 696, "pending": 0, "prog exec time": 547, "reproducing": 0, "rpc recv": 20516480876, "rpc sent": 6845129936, "signal": 302694, "smash jobs": 3, "triage jobs": 2, "vm output": 109346190, "vm restarts [base]": 69, "vm restarts [new]": 203 }
2025/11/04 14:13:33 base crash: lost connection to test machine
2025/11/04 14:13:38 crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is already known
2025/11/04 14:13:38 base crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is to be ignored
2025/11/04 14:13:38 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = false]
2025/11/04 14:13:52 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/04 14:13:57 runner 4 connected
2025/11/04 14:14:30 runner 1 connected
2025/11/04 14:14:34 runner 7 connected
2025/11/04 14:14:49 runner 5 connected
2025/11/04 14:14:54 base crash: KASAN: out-of-bounds Read in ext4_xattr_set_entry
2025/11/04 14:15:20 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/04 14:15:23 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
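
The periodic STAT entries carry a single-line JSON object of integer counters ("corpus", "coverage", "exec total [base]"/"[new]", "fuzzing VMs", and so on), which makes them easy to post-process. A small stand-alone Go helper (not part of syzkaller) that extracts and decodes one such line, assuming every value is an integer as in the blobs above:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseStat pulls the JSON object out of a "STAT { ... }" log line and decodes the counters.
func parseStat(line string) (map[string]int64, error) {
	i := strings.Index(line, "STAT ")
	if i < 0 {
		return nil, fmt.Errorf("not a STAT line")
	}
	stats := make(map[string]int64) // all values in these blobs are integers
	if err := json.Unmarshal([]byte(line[i+len("STAT "):]), &stats); err != nil {
		return nil, err
	}
	return stats, nil
}

func main() {
	// Shortened example; the real lines carry ~60 counters.
	line := `2025/11/04 14:13:27 STAT {"corpus": 46863, "coverage": 308520, "exec total [new]": 520717}`
	stats, err := parseStat(line)
	if err != nil {
		panic(err)
	}
	fmt.Println("corpus:", stats["corpus"], "coverage:", stats["coverage"])
}

Feeding successive STAT lines through this gives the growth over the run, e.g. coverage moving from 308435 at 14:03:27 to 308548 at 14:18:27.
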
2025/11/04 14:15:44 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:15:51 runner 0 connected
2025/11/04 14:16:17 runner 0 connected
2025/11/04 14:16:22 runner 1 connected
2025/11/04 14:16:41 runner 7 connected
2025/11/04 14:16:59 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5537: connect: connection refused
2025/11/04 14:16:59 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5537: connect: connection refused
2025/11/04 14:17:08 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:18432: connect: connection refused
2025/11/04 14:17:08 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:18432: connect: connection refused
2025/11/04 14:17:09 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:17:18 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:17:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:17:43 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13567: connect: connection refused
2025/11/04 14:17:43 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13567: connect: connection refused
2025/11/04 14:17:43 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9243: connect: connection refused
2025/11/04 14:17:43 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9243: connect: connection refused
2025/11/04 14:17:53 base crash: lost connection to test machine
2025/11/04 14:17:53 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:17:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:18:07 runner 8 connected
2025/11/04 14:18:09 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:18:11 base crash: INFO: task hung in addrconf_dad_work
2025/11/04 14:18:15 runner 7 connected
2025/11/04 14:18:27 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 587, "corpus": 46880, "corpus [files]": 45752, "corpus [symbols]": 6115, "cover overflows": 109358, "coverage": 308548, "distributor delayed": 54771, "distributor undelayed": 54771, "distributor violated": 284, "exec candidate": 85463, "exec collide": 38528, "exec fuzz": 72833, "exec gen": 3855, "exec hints": 14062, "exec inject": 0, "exec minimize": 16105, "exec retries": 100, "exec seeds": 1692, "exec smash": 14361, "exec total [base]": 221819, "exec total [new]": 527942, "exec triage": 153062, "executor restarts [base]": 867, "executor restarts [new]": 2253, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 2, "max signal": 315392, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9358, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48915, "no exec duration": 244325000000, "no exec requests": 708, "pending": 0, "prog exec time": 563, "reproducing": 0, "rpc recv": 20973312412, "rpc sent": 7045392784, "signal": 302718, "smash jobs": 1, "triage jobs": 5, "vm output": 112780905, "vm restarts [base]": 71, "vm restarts [new]": 211 }
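
The "VM-N failed reading regs" entries mean the manager asked QEMU's human monitor (HMP) for 'info registers' over a local TCP port and the connection was refused, which is what happens once the QEMU process behind a "lost connection to test machine" crash has already exited. A hedged Go sketch of that reachability check (the port number is copied from the VM-8 entry above and is specific to that one instance):

package main

import (
	"fmt"
	"net"
	"time"
)

// monitorAlive reports whether a QEMU monitor TCP endpoint still accepts connections.
func monitorAlive(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the failures in the log:
		// the QEMU process, and with it the monitor socket, is already gone.
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(monitorAlive("127.0.0.1:5537")) // port taken from the VM-8 entry above
}
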
2025/11/04 14:18:28 crash "INFO: task hung in rtnl_newlink" is already known
2025/11/04 14:18:28 base crash "INFO: task hung in rtnl_newlink" is to be ignored
2025/11/04 14:18:28 patched crashed: INFO: task hung in rtnl_newlink [need repro = false]
2025/11/04 14:18:28 runner 1 connected
2025/11/04 14:18:32 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64702: connect: connection refused
2025/11/04 14:18:32 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64702: connect: connection refused
2025/11/04 14:18:42 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:18:43 runner 3 connected
2025/11/04 14:18:49 runner 0 connected
2025/11/04 14:18:52 runner 6 connected
2025/11/04 14:18:56 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25818: connect: connection refused
2025/11/04 14:18:56 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25818: connect: connection refused
2025/11/04 14:18:59 runner 0 connected
2025/11/04 14:19:01 runner 1 connected
2025/11/04 14:19:06 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:19:18 runner 4 connected
2025/11/04 14:19:39 runner 8 connected
2025/11/04 14:20:02 runner 1 connected
2025/11/04 14:20:12 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:20:20 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/11/04 14:20:27 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/04 14:20:33 crash "INFO: task hung in corrupted" is already known
2025/11/04 14:20:33 base crash "INFO: task hung in corrupted" is to be ignored
2025/11/04 14:20:33 patched crashed: INFO: task hung in corrupted [need repro = false]
2025/11/04 14:20:50 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:35392: connect: connection refused
2025/11/04 14:20:50 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:35392: connect: connection refused
2025/11/04 14:20:57 crash "kernel BUG in ocfs2_write_cluster_by_desc" is already known
2025/11/04 14:20:57 base crash "kernel BUG in ocfs2_write_cluster_by_desc" is to be ignored
2025/11/04 14:20:57 patched crashed: kernel BUG in ocfs2_write_cluster_by_desc [need repro = false]
2025/11/04 14:21:00 base crash: lost connection to test machine
2025/11/04 14:21:09 runner 0 connected
2025/11/04 14:21:17 runner 6 connected
2025/11/04 14:21:24 runner 4 connected
2025/11/04 14:21:30 runner 2 connected
2025/11/04 14:21:49 runner 0 connected
2025/11/04 14:21:53 runner 1 connected
2025/11/04 14:21:57 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/11/04 14:22:47 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16454: connect: connection refused
2025/11/04 14:22:47 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16454: connect: connection refused
2025/11/04 14:22:49 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/04 14:22:54 runner 1 connected
2025/11/04 14:22:57 patched crashed: lost connection to test machine [need repro = false]
2025/11/04 14:23:14 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:43695: connect: connection refused
2025/11/04 14:23:14 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:43695: connect: connection refused
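
The paired "crash X is already known" / "base crash X is to be ignored" entries reflect crash-title deduplication: titles already attributed to the base kernel are not treated as findings against the patch, which is one plausible reason the matching patched crashes are logged with [need repro = false]. A rough Go illustration of that bookkeeping (not syzkaller's code, and only one of several reasons a reproducer may be skipped):

package main

import "fmt"

type triager struct {
	knownOnBase map[string]bool // crash titles already observed on the base kernel
}

// needRepro decides whether a patched-kernel crash title still warrants a reproducer.
func (t *triager) needRepro(title string) bool {
	if t.knownOnBase[title] {
		// Already known to crash the base kernel: not attributable to the patch,
		// so no reproducer is needed for the diff verdict.
		return false
	}
	return true
}

func main() {
	t := &triager{knownOnBase: map[string]bool{
		"INFO: task hung in rtnl_newlink": true, // from the 14:18:28 entries above
	}}
	fmt.Println(t.needRepro("INFO: task hung in rtnl_newlink")) // false
}
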
2025/11/04 14:23:22 status reporting terminated
2025/11/04 14:23:22 bug reporting terminated
2025/11/04 14:23:22 repro loop terminated
2025/11/04 14:23:22 base: rpc server terminated
2025/11/04 14:23:22 new: rpc server terminated
2025/11/04 14:23:22 base: pool terminated
2025/11/04 14:23:22 base: kernel context loop terminated
2025/11/04 14:23:29 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5163: connect: connection refused
2025/11/04 14:23:29 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5163: connect: connection refused
2025/11/04 14:23:46 new: pool terminated
2025/11/04 14:23:46 new: kernel context loop terminated
2025/11/04 14:23:46 diff fuzzing terminated
2025/11/04 14:23:46 fuzzing is finished
2025/11/04 14:23:46 status at the end:
Title On-Base On-Patched
BUG: Bad page state in skb_pp_cow_data 1 crashes 1 crashes[reproduced]
BUG: sleeping function called from invalid context in hook_sb_delete 7 crashes 14 crashes
INFO: task hung in __iterate_supers 1 crashes 4 crashes
INFO: task hung in addrconf_dad_work 2 crashes
INFO: task hung in bdev_open 1 crashes
INFO: task hung in corrupted 6 crashes
INFO: task hung in reg_check_chans_work 1 crashes 3 crashes
INFO: task hung in rtnl_newlink 1 crashes
INFO: task hung in sync_bdevs 1 crashes
INFO: task hung in user_get_super 1 crashes
KASAN: out-of-bounds Read in ext4_xattr_set_entry 1 crashes 2 crashes
KASAN: slab-out-of-bounds Read in ext4_xattr_set_entry 1 crashes
KASAN: slab-use-after-free Read in jfs_lazycommit 1 crashes
KASAN: slab-use-after-free Read in jfs_syncpt 1 crashes 1 crashes
KASAN: slab-use-after-free Write in txEnd 1 crashes
KASAN: use-after-free Read in hpfs_get_ea 4 crashes
SYZFAIL: tun: ioctl(TUNSETIFF) failed 1 crashes
UBSAN: array-index-out-of-bounds in dtInsertEntry 1 crashes
WARNING in __rate_control_send_low 1 crashes
WARNING in io_ring_exit_work 3 crashes
WARNING in rate_control_rate_init 1 crashes
WARNING in raw_ioctl 1 crashes 1 crashes
WARNING in udf_truncate_extents 1 crashes 2 crashes
WARNING in xfrm6_tunnel_net_exit 2 crashes 5 crashes
WARNING in xfrm_state_fini 5 crashes 14 crashes
general protection fault in lmLogSync [reproduced]
general protection fault in pcl818_ai_cancel 3 crashes 14 crashes
general protection fault in txEnd 1 crashes 1 crashes
kernel BUG in jfs_evict_inode 4 crashes 9 crashes
kernel BUG in may_open 1 crashes 3 crashes
kernel BUG in ocfs2_write_cluster_by_desc 1 crashes
kernel BUG in txUnlock 1 crashes 5 crashes
lost connection to test machine 17 crashes 46 crashes
possible deadlock in ext4_writepages 1 crashes 3 crashes
possible deadlock in hfsplus_get_block 1 crashes
possible deadlock in ntfs_fiemap 3 crashes
possible deadlock in ocfs2_del_inode_from_orphan 1 crashes
possible deadlock in ocfs2_evict_inode 1 crashes
possible deadlock in ocfs2_init_acl 1 crashes 6 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 2 crashes 15 crashes
possible deadlock in ocfs2_setattr 1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 9 crashes 26 crashes
possible deadlock in ocfs2_xattr_set 2 crashes 3 crashes
possible deadlock in padata_do_serial 4 crashes
unregister_netdevice: waiting for DEV to become free 4 crashes 3 crashes
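
One way to read the final table is to look for titles that crashed only the patched kernel; given that the patched pool ran more VMs than the base pool throughout the run (see the "fuzzing VMs [base]"/"[new]" counters in the STAT lines), such titles are leads for manual review rather than confirmed regressions. A small Go sketch over a few rows transcribed from the table above (rows whose single count cannot be assigned to a column from this log alone are marked as assumptions):

package main

import "fmt"

type row struct {
	title             string
	onBase, onPatched int
}

func main() {
	rows := []row{
		{"possible deadlock in ocfs2_try_remove_refcount_tree", 9, 26},
		{"general protection fault in pcl818_ai_cancel", 3, 14},
		{"possible deadlock in padata_do_serial", 0, 4}, // assumption: treating the single count as patched-only
		{"possible deadlock in ocfs2_setattr", 0, 1},    // inferred from the patched crash at 14:11:35
	}
	for _, r := range rows {
		if r.onBase == 0 && r.onPatched > 0 {
			fmt.Printf("patched-only: %s (%d crashes)\n", r.title, r.onPatched)
		}
	}
}
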