2025/10/14 19:52:55 extracted 329778 text symbol hashes for base and 329782 for patched
2025/10/14 19:52:55 binaries are different, continuing fuzzing
2025/10/14 19:52:55 adding modified_functions to focus areas: ["__pfx_ksm_pmd_entry" "__pfx_ksm_walk_test" "__stable_node_chain" "break_cow" "ksm_do_scan" "ksm_get_folio" "ksm_memory_callback" "ksm_pmd_entry" "ksm_scan_thread" "ksm_walk_test" "max_page_sharing_store" "merge_across_nodes_store" "remove_rmap_item_from_tree" "remove_stable_node" "replace_page" "rmap_walk_ksm" "run_store" "try_to_merge_one_page" "unmerge_ksm_pages"]
2025/10/14 19:52:55 adding directly modified files to focus areas: ["mm/ksm.c"]
2025/10/14 19:52:55 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2025/10/14 19:53:54 runner 2 connected
2025/10/14 19:53:54 runner 5 connected
2025/10/14 19:53:54 runner 3 connected
2025/10/14 19:53:54 runner 8 connected
2025/10/14 19:53:54 runner 0 connected
2025/10/14 19:53:54 runner 6 connected
2025/10/14 19:53:54 runner 1 connected
2025/10/14 19:53:54 runner 2 connected
2025/10/14 19:53:54 runner 1 connected
2025/10/14 19:53:54 runner 4 connected
2025/10/14 19:53:55 runner 7 connected
2025/10/14 19:53:55 runner 0 connected
2025/10/14 19:54:00 initializing coverage information...
2025/10/14 19:54:01 executor cover filter: 0 PCs
2025/10/14 19:54:04 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open
/dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory 
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or 
directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] 
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty 
[openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056
2025/10/14 19:54:04 base: machine check complete
2025/10/14 19:54:05 discovered 7757 source files, 340726 symbols
2025/10/14 19:54:05 coverage filter: __pfx_ksm_pmd_entry: []
2025/10/14 19:54:05 coverage filter: __pfx_ksm_walk_test: []
2025/10/14 19:54:05 coverage filter: __stable_node_chain: [__stable_node_chain]
2025/10/14 19:54:05 coverage filter: break_cow: [break_cow]
2025/10/14 19:54:05 coverage filter: ksm_do_scan: [ksm_do_scan]
2025/10/14 19:54:05 coverage filter: ksm_get_folio: [ksm_get_folio]
2025/10/14 19:54:05 coverage filter: ksm_memory_callback: [ksm_memory_callback]
2025/10/14 19:54:05 coverage filter: ksm_pmd_entry: [ksm_pmd_entry]
2025/10/14 19:54:05 coverage filter: ksm_scan_thread: [ksm_scan_thread]
2025/10/14 19:54:05 coverage filter: ksm_walk_test: [ksm_walk_test]
2025/10/14 19:54:05 coverage filter: max_page_sharing_store: [max_page_sharing_store]
2025/10/14 19:54:05 coverage filter: merge_across_nodes_store: [merge_across_nodes_store]
2025/10/14 19:54:05 coverage filter: remove_rmap_item_from_tree: [remove_rmap_item_from_tree]
2025/10/14 19:54:05 coverage filter: remove_stable_node: [remove_stable_node]
2025/10/14 19:54:05 coverage filter: replace_page: [__bpf_trace_svc_replace_page_err __probestub_svc_replace_page_err __traceiter_svc_replace_page_err perf_trace_svc_replace_page_err replace_page replace_page_cache_folio svc_rqst_replace_page trace_event_raw_event_svc_replace_page_err trace_raw_output_svc_replace_page_err trace_svc_replace_page_err]
2025/10/14 19:54:05 coverage filter: rmap_walk_ksm: [rmap_walk_ksm]
2025/10/14 19:54:05 coverage filter: run_store: [run_store]
2025/10/14 19:54:05 coverage filter: try_to_merge_one_page: [try_to_merge_one_page]
2025/10/14 19:54:05 coverage filter: unmerge_ksm_pages: [unmerge_ksm_pages]
2025/10/14 19:54:05 coverage filter: mm/ksm.c: [mm/ksm.c]
2025/10/14 19:54:05 area "symbols": 1617 PCs in the cover filter
2025/10/14 19:54:05 area "files": 2307 PCs in the cover filter
2025/10/14 19:54:05 area "": 0 PCs in the cover filter
2025/10/14 19:54:05 executor cover filter: 0 PCs
2025/10/14 19:54:08 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no 
such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to 
open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 
[socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy 
[syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma 
[openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056 2025/10/14 19:54:08 new: machine check complete 2025/10/14 19:54:09 new: adding 81571 seeds 2025/10/14 19:55:45 crash "WARNING in xfrm_state_fini" is already known 2025/10/14 19:55:45 base crash "WARNING in xfrm_state_fini" is to be ignored 2025/10/14 19:55:45 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 19:56:29 crash "WARNING in xfrm_state_fini" is already known 2025/10/14 19:56:29 base crash "WARNING in xfrm_state_fini" is to be ignored 2025/10/14 19:56:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 19:56:35 runner 3 connected 2025/10/14 19:57:25 runner 1 connected 2025/10/14 19:57:58 STAT { "buffer too small": 0, "candidate triage jobs": 62, "candidates": 76545, "comps overflows": 0, "corpus": 4934, "corpus [files]": 237, "corpus [symbols]": 1, "cover overflows": 3690, "coverage": 164522, "distributor delayed": 4879, "distributor undelayed": 4879, "distributor violated": 0, "exec candidate": 5026, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 8808, "exec total [new]": 22429, "exec triage": 15685, "executor restarts [base]": 52, "executor restarts [new]": 107, "fault jobs": 0, "fuzzer jobs": 62, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 166093, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 5026, "no exec duration": 44108000000, "no exec requests": 354, "pending": 0, "prog exec time": 200, "reproducing": 0, "rpc recv": 1222899684, "rpc sent": 116103552, "signal": 161988, "smash jobs": 0, "triage jobs": 0, "vm output": 2569375, "vm restarts [base]": 3, "vm restarts [new]": 11 } 2025/10/14 19:58:10 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 19:58:59 runner 5 connected 2025/10/14 19:59:27 base crash: WARNING in xfrm_state_fini 2025/10/14 20:00:15 runner 2 connected 2025/10/14 20:00:52 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 20:01:02 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:01:41 runner 4 connected 2025/10/14 20:01:51 runner 3 connected 2025/10/14 20:02:12 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19954: connect: connection refused 2025/10/14 20:02:12 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19954: connect: connection refused 2025/10/14 20:02:22 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:02:58 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 69067, "comps overflows": 0, "corpus": 12239, "corpus [files]": 452, "corpus [symbols]": 2, "cover overflows": 9780, "coverage": 207884, "distributor delayed": 12890, "distributor undelayed": 12890, "distributor violated": 0, "exec candidate": 12504, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 21337, "exec total [new]": 60005, "exec triage": 39631, "executor restarts 
[base]": 59, "executor restarts [new]": 141, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 210984, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 12504, "no exec duration": 45703000000, "no exec requests": 361, "pending": 0, "prog exec time": 235, "reproducing": 0, "rpc recv": 2278574428, "rpc sent": 277895376, "signal": 203470, "smash jobs": 0, "triage jobs": 0, "vm output": 4582594, "vm restarts [base]": 4, "vm restarts [new]": 14 } 2025/10/14 20:03:11 runner 7 connected 2025/10/14 20:04:59 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/10/14 20:04:59 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/10/14 20:04:59 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 20:05:49 runner 7 connected 2025/10/14 20:07:30 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:07:39 base crash: unregister_netdevice: waiting for DEV to become free 2025/10/14 20:07:58 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 61715, "comps overflows": 0, "corpus": 19352, "corpus [files]": 640, "corpus [symbols]": 7, "cover overflows": 15892, "coverage": 235520, "distributor delayed": 20397, "distributor undelayed": 20397, "distributor violated": 1, "exec candidate": 19856, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 34742, "exec total [new]": 100786, "exec triage": 63304, "executor restarts [base]": 62, "executor restarts [new]": 160, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 239577, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 19856, "no exec duration": 45711000000, "no exec requests": 362, "pending": 0, "prog exec time": 247, "reproducing": 0, "rpc recv": 3229294940, "rpc sent": 445064160, "signal": 229980, "smash jobs": 0, "triage jobs": 0, "vm output": 6674280, "vm restarts [base]": 4, "vm restarts [new]": 16 } 2025/10/14 20:08:28 runner 7 connected 2025/10/14 20:08:29 runner 1 connected 2025/10/14 20:09:41 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/10/14 20:09:41 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/10/14 20:09:41 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 20:10:29 runner 3 connected 2025/10/14 20:10:57 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/10/14 20:11:21 crash "kernel BUG in txUnlock" is already known 2025/10/14 20:11:21 base crash "kernel BUG in txUnlock" is to be ignored 2025/10/14 20:11:21 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 20:11:22 crash "kernel BUG in txUnlock" is already known 2025/10/14 20:11:22 base crash "kernel BUG in txUnlock" is to be ignored 2025/10/14 20:11:22 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 20:11:46 runner 4 connected 2025/10/14 20:12:11 runner 7 connected 2025/10/14 20:12:12 runner 0 connected 2025/10/14 
20:12:17 base crash: kernel BUG in txUnlock 2025/10/14 20:12:58 STAT { "buffer too small": 0, "candidate triage jobs": 44, "candidates": 56423, "comps overflows": 0, "corpus": 24448, "corpus [files]": 779, "corpus [symbols]": 9, "cover overflows": 20619, "coverage": 251993, "distributor delayed": 26940, "distributor undelayed": 26940, "distributor violated": 14, "exec candidate": 25148, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 1, "exec seeds": 0, "exec smash": 0, "exec total [base]": 47236, "exec total [new]": 132631, "exec triage": 80335, "executor restarts [base]": 70, "executor restarts [new]": 192, "fault jobs": 0, "fuzzer jobs": 44, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 256810, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 25148, "no exec duration": 48115000000, "no exec requests": 369, "pending": 0, "prog exec time": 240, "reproducing": 0, "rpc recv": 4123483648, "rpc sent": 590415280, "signal": 245749, "smash jobs": 0, "triage jobs": 0, "vm output": 8680782, "vm restarts [base]": 5, "vm restarts [new]": 21 } 2025/10/14 20:13:07 runner 2 connected 2025/10/14 20:13:09 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 20:14:06 runner 4 connected 2025/10/14 20:14:13 patched crashed: no output from test machine [need repro = false] 2025/10/14 20:15:02 runner 2 connected 2025/10/14 20:16:27 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/10/14 20:17:24 runner 5 connected 2025/10/14 20:17:58 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 50336, "comps overflows": 0, "corpus": 30320, "corpus [files]": 956, "corpus [symbols]": 12, "cover overflows": 26177, "coverage": 266840, "distributor delayed": 33232, "distributor undelayed": 33232, "distributor violated": 14, "exec candidate": 31235, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 61493, "exec total [new]": 171190, "exec triage": 99857, "executor restarts [base]": 85, "executor restarts [new]": 237, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 271974, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 31235, "no exec duration": 48603000000, "no exec requests": 377, "pending": 0, "prog exec time": 117, "reproducing": 0, "rpc recv": 5055475332, "rpc sent": 773407640, "signal": 260087, "smash jobs": 0, "triage jobs": 0, "vm output": 11310871, "vm restarts [base]": 6, "vm restarts [new]": 24 } 2025/10/14 20:18:10 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/10/14 20:18:46 base crash: possible deadlock in ntfs_fiemap 2025/10/14 20:18:58 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 20:18:59 runner 4 connected 2025/10/14 20:19:01 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:19:35 runner 2 connected 2025/10/14 20:19:48 runner 0 
connected 2025/10/14 20:19:50 runner 5 connected 2025/10/14 20:20:26 base crash: WARNING in xfrm6_tunnel_net_exit 2025/10/14 20:21:15 runner 1 connected 2025/10/14 20:22:22 patched crashed: INFO: task hung in disable_device [need repro = true] 2025/10/14 20:22:22 scheduled a reproduction of 'INFO: task hung in disable_device' 2025/10/14 20:22:45 base crash: INFO: task hung in disable_device 2025/10/14 20:22:58 STAT { "buffer too small": 0, "candidate triage jobs": 17, "candidates": 46008, "comps overflows": 0, "corpus": 34239, "corpus [files]": 1062, "corpus [symbols]": 12, "cover overflows": 32012, "coverage": 276008, "distributor delayed": 38364, "distributor undelayed": 38363, "distributor violated": 15, "exec candidate": 35563, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 74276, "exec total [new]": 211651, "exec triage": 114641, "executor restarts [base]": 95, "executor restarts [new]": 266, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 282193, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 35563, "no exec duration": 48790000000, "no exec requests": 385, "pending": 1, "prog exec time": 265, "reproducing": 0, "rpc recv": 5828306812, "rpc sent": 1002391520, "signal": 268692, "smash jobs": 0, "triage jobs": 0, "vm output": 13520636, "vm restarts [base]": 8, "vm restarts [new]": 27 } 2025/10/14 20:23:10 runner 1 connected 2025/10/14 20:23:34 runner 0 connected 2025/10/14 20:24:11 base crash: KASAN: slab-use-after-free Write in lmLogSync 2025/10/14 20:25:00 runner 0 connected 2025/10/14 20:25:06 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/10/14 20:25:36 patched crashed: possible deadlock in btrfs_dirty_inode [need repro = true] 2025/10/14 20:25:36 scheduled a reproduction of 'possible deadlock in btrfs_dirty_inode' 2025/10/14 20:25:55 runner 3 connected 2025/10/14 20:26:14 base crash: lost connection to test machine 2025/10/14 20:26:26 runner 7 connected 2025/10/14 20:27:03 runner 1 connected 2025/10/14 20:27:58 STAT { "buffer too small": 0, "candidate triage jobs": 27, "candidates": 43478, "comps overflows": 0, "corpus": 36528, "corpus [files]": 1171, "corpus [symbols]": 12, "cover overflows": 37933, "coverage": 283049, "distributor delayed": 41158, "distributor undelayed": 41158, "distributor violated": 15, "exec candidate": 38093, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 83237, "exec total [new]": 244233, "exec triage": 123374, "executor restarts [base]": 107, "executor restarts [new]": 308, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 289447, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38093, "no exec duration": 48816000000, "no exec requests": 388, "pending": 2, "prog exec time": 210, "reproducing": 0, "rpc recv": 6480040248, "rpc sent": 
1200215520, "signal": 275336, "smash jobs": 0, "triage jobs": 0, "vm output": 16110573, "vm restarts [base]": 11, "vm restarts [new]": 30 } 2025/10/14 20:28:32 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 20:29:21 runner 2 connected 2025/10/14 20:30:34 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 20:31:25 runner 4 connected 2025/10/14 20:31:26 patched crashed: INFO: task hung in reg_check_chans_work [need repro = true] 2025/10/14 20:31:26 scheduled a reproduction of 'INFO: task hung in reg_check_chans_work' 2025/10/14 20:32:09 crash "INFO: task hung in corrupted" is already known 2025/10/14 20:32:09 base crash "INFO: task hung in corrupted" is to be ignored 2025/10/14 20:32:09 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/10/14 20:32:16 runner 7 connected 2025/10/14 20:32:34 base crash: INFO: task hung in corrupted 2025/10/14 20:32:49 base crash: WARNING in xfrm_state_fini 2025/10/14 20:32:57 runner 3 connected 2025/10/14 20:32:58 STAT { "buffer too small": 0, "candidate triage jobs": 23, "candidates": 41826, "comps overflows": 0, "corpus": 38031, "corpus [files]": 1257, "corpus [symbols]": 14, "cover overflows": 41905, "coverage": 286850, "distributor delayed": 43527, "distributor undelayed": 43527, "distributor violated": 36, "exec candidate": 39745, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 92481, "exec total [new]": 269316, "exec triage": 129220, "executor restarts [base]": 116, "executor restarts [new]": 347, "fault jobs": 0, "fuzzer jobs": 23, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 293301, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 39740, "no exec duration": 49881000000, "no exec requests": 401, "pending": 3, "prog exec time": 326, "reproducing": 0, "rpc recv": 6927944992, "rpc sent": 1379310016, "signal": 278911, "smash jobs": 0, "triage jobs": 0, "vm output": 18144172, "vm restarts [base]": 11, "vm restarts [new]": 34 } 2025/10/14 20:33:24 runner 1 connected 2025/10/14 20:33:38 runner 0 connected 2025/10/14 20:34:36 crash "kernel BUG in jfs_evict_inode" is already known 2025/10/14 20:34:36 base crash "kernel BUG in jfs_evict_inode" is to be ignored 2025/10/14 20:34:36 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/10/14 20:34:40 base crash: kernel BUG in jfs_evict_inode 2025/10/14 20:34:41 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/10/14 20:35:24 runner 4 connected 2025/10/14 20:35:28 runner 2 connected 2025/10/14 20:35:30 runner 0 connected 2025/10/14 20:35:48 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/10/14 20:35:48 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/10/14 20:35:48 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/10/14 20:36:44 runner 4 connected 2025/10/14 20:36:46 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 20:37:35 runner 6 connected 2025/10/14 20:37:45 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 20:37:58 
STAT { "buffer too small": 0, "candidate triage jobs": 16, "candidates": 40089, "comps overflows": 0, "corpus": 39554, "corpus [files]": 1340, "corpus [symbols]": 14, "cover overflows": 47742, "coverage": 290620, "distributor delayed": 45496, "distributor undelayed": 45496, "distributor violated": 36, "exec candidate": 41482, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 101210, "exec total [new]": 301949, "exec triage": 135262, "executor restarts [base]": 132, "executor restarts [new]": 382, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 297448, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41457, "no exec duration": 50459000000, "no exec requests": 409, "pending": 3, "prog exec time": 152, "reproducing": 0, "rpc recv": 7573880952, "rpc sent": 1575090560, "signal": 282487, "smash jobs": 0, "triage jobs": 0, "vm output": 20398733, "vm restarts [base]": 14, "vm restarts [new]": 38 } 2025/10/14 20:38:21 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 20:38:35 runner 8 connected 2025/10/14 20:39:10 runner 4 connected 2025/10/14 20:39:23 base crash: kernel BUG in txUnlock 2025/10/14 20:40:12 runner 2 connected 2025/10/14 20:40:26 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 20:40:55 crash "possible deadlock in ocfs2_init_acl" is already known 2025/10/14 20:40:55 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/10/14 20:40:55 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 20:41:16 runner 1 connected 2025/10/14 20:41:45 runner 8 connected 2025/10/14 20:42:35 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:42:58 STAT { "buffer too small": 0, "candidate triage jobs": 10, "candidates": 19349, "comps overflows": 0, "corpus": 41041, "corpus [files]": 1419, "corpus [symbols]": 15, "cover overflows": 53111, "coverage": 294335, "distributor delayed": 47216, "distributor undelayed": 47215, "distributor violated": 36, "exec candidate": 62222, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 110621, "exec total [new]": 333825, "exec triage": 140882, "executor restarts [base]": 150, "executor restarts [new]": 415, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 301392, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43082, "no exec duration": 50987000000, "no exec requests": 418, "pending": 3, "prog exec time": 246, "reproducing": 0, "rpc recv": 8111300100, "rpc sent": 1768562336, "signal": 286192, "smash jobs": 0, "triage jobs": 0, "vm output": 22907492, "vm restarts [base]": 16, "vm restarts [new]": 41 } 2025/10/14 20:43:23 runner 1 connected 2025/10/14 20:44:58 triaged 90.3% of the corpus 2025/10/14 20:44:58 starting bug reproductions 2025/10/14 20:44:58 starting bug 
reproductions (max 6 VMs, 4 repros) 2025/10/14 20:44:58 reproduction of "INFO: task hung in disable_device" aborted: it's no longer needed 2025/10/14 20:44:58 start reproducing 'possible deadlock in btrfs_dirty_inode' 2025/10/14 20:44:58 start reproducing 'INFO: task hung in reg_check_chans_work' 2025/10/14 20:45:02 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 20:45:52 runner 8 connected 2025/10/14 20:46:15 crash "general protection fault in pcl818_ai_cancel" is already known 2025/10/14 20:46:15 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/10/14 20:46:15 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/10/14 20:47:05 runner 4 connected 2025/10/14 20:47:48 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:47:48 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:47:48 start reproducing 'BUG: Bad page map' 2025/10/14 20:47:58 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 1, "corpus": 41773, "corpus [files]": 1468, "corpus [symbols]": 15, "cover overflows": 58037, "coverage": 295871, "distributor delayed": 48272, "distributor undelayed": 48272, "distributor violated": 55, "exec candidate": 81571, "exec collide": 592, "exec fuzz": 1167, "exec gen": 70, "exec hints": 193, "exec inject": 0, "exec minimize": 305, "exec retries": 15, "exec seeds": 45, "exec smash": 281, "exec total [base]": 121594, "exec total [new]": 358771, "exec triage": 143820, "executor restarts [base]": 164, "executor restarts [new]": 439, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 303086, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 183, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43932, "no exec duration": 55938000000, "no exec requests": 436, "pending": 0, "prog exec time": 313, "reproducing": 3, "rpc recv": 8562084028, "rpc sent": 1953637680, "signal": 287634, "smash jobs": 10, "triage jobs": 5, "vm output": 25097305, "vm restarts [base]": 16, "vm restarts [new]": 44 } 2025/10/14 20:48:37 runner 8 connected 2025/10/14 20:48:47 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:48:47 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:49:35 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:49:35 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:49:35 runner 4 connected 2025/10/14 20:50:24 runner 8 connected 2025/10/14 20:50:47 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:51:05 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:51:05 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:51:35 runner 7 connected 2025/10/14 20:51:49 crash "general protection fault in pcl818_ai_cancel" is already known 2025/10/14 20:51:49 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/10/14 20:51:49 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/10/14 20:51:55 runner 4 connected 2025/10/14 20:52:39 runner 6 connected 2025/10/14 20:52:58 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 19, "corpus": 41816, "corpus [files]": 1470, "corpus [symbols]": 15, "cover overflows": 60409, "coverage": 295942, "distributor 
delayed": 48469, "distributor undelayed": 48469, "distributor violated": 69, "exec candidate": 81571, "exec collide": 2556, "exec fuzz": 5022, "exec gen": 246, "exec hints": 2285, "exec inject": 0, "exec minimize": 1405, "exec retries": 16, "exec seeds": 177, "exec smash": 1402, "exec total [base]": 129596, "exec total [new]": 369554, "exec triage": 144161, "executor restarts [base]": 176, "executor restarts [new]": 473, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 4, "max signal": 303382, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 815, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44046, "no exec duration": 71038000000, "no exec requests": 470, "pending": 3, "prog exec time": 403, "reproducing": 3, "rpc recv": 9077629344, "rpc sent": 2183125664, "signal": 287702, "smash jobs": 3, "triage jobs": 12, "vm output": 27760795, "vm restarts [base]": 16, "vm restarts [new]": 50 } 2025/10/14 20:53:45 base crash: INFO: task hung in corrupted 2025/10/14 20:54:08 crash "possible deadlock in ocfs2_init_acl" is already known 2025/10/14 20:54:08 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/10/14 20:54:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 20:54:15 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:54:15 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:54:16 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:54:16 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:54:34 runner 2 connected 2025/10/14 20:54:56 runner 6 connected 2025/10/14 20:55:04 runner 4 connected 2025/10/14 20:55:05 runner 8 connected 2025/10/14 20:55:26 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:55:26 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:55:34 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:55:34 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:56:00 base crash: general protection fault in txEnd 2025/10/14 20:56:13 runner 5 connected 2025/10/14 20:56:23 runner 6 connected 2025/10/14 20:56:49 runner 0 connected 2025/10/14 20:56:51 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:56:51 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:57:12 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:57:12 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:57:13 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 20:57:41 runner 6 connected 2025/10/14 20:57:56 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:57:56 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:57:58 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 51, "corpus": 41864, "corpus [files]": 1470, "corpus [symbols]": 15, "cover overflows": 62149, "coverage": 296029, "distributor delayed": 48679, "distributor undelayed": 48671, "distributor violated": 84, "exec candidate": 81571, "exec collide": 3702, "exec fuzz": 7179, "exec gen": 383, "exec hints": 3931, "exec inject": 0, "exec minimize": 2890, "exec retries": 16, "exec seeds": 331, "exec smash": 2708, "exec total [base]": 135870, "exec total [new]": 377918, "exec triage": 144494, "executor restarts [base]": 192, "executor restarts [new]": 514, "fault jobs": 
0, "fuzzer jobs": 28, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 3, "max signal": 303616, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1505, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44154, "no exec duration": 77396000000, "no exec requests": 483, "pending": 10, "prog exec time": 897, "reproducing": 3, "rpc recv": 9668714484, "rpc sent": 2429585032, "signal": 287782, "smash jobs": 2, "triage jobs": 23, "vm output": 31065542, "vm restarts [base]": 18, "vm restarts [new]": 56 } 2025/10/14 20:58:02 runner 4 connected 2025/10/14 20:58:02 runner 1 connected 2025/10/14 20:58:03 repro finished 'possible deadlock in btrfs_dirty_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 20:58:03 failed repro for "possible deadlock in btrfs_dirty_inode", err=%!s() 2025/10/14 20:58:03 "possible deadlock in btrfs_dirty_inode": saved crash log into 1760475483.crash.log 2025/10/14 20:58:03 "possible deadlock in btrfs_dirty_inode": saved repro log into 1760475483.repro.log 2025/10/14 20:58:35 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 20:58:45 runner 5 connected 2025/10/14 20:59:02 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 20:59:02 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 20:59:22 base crash: kernel BUG in jfs_evict_inode 2025/10/14 20:59:24 runner 6 connected 2025/10/14 20:59:50 runner 4 connected 2025/10/14 21:00:12 runner 1 connected 2025/10/14 21:00:54 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 21:01:43 runner 8 connected 2025/10/14 21:02:18 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:02:18 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:02:58 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 78, "corpus": 41925, "corpus [files]": 1475, "corpus [symbols]": 15, "cover overflows": 63761, "coverage": 296166, "distributor delayed": 48858, "distributor undelayed": 48858, "distributor violated": 89, "exec candidate": 81571, "exec collide": 4909, "exec fuzz": 9392, "exec gen": 493, "exec hints": 5370, "exec inject": 0, "exec minimize": 4288, "exec retries": 16, "exec seeds": 480, "exec smash": 3956, "exec total [base]": 141903, "exec total [new]": 385991, "exec triage": 144794, "executor restarts [base]": 209, "executor restarts [new]": 544, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 15, "max signal": 303823, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2263, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44252, "no exec duration": 80467000000, "no exec requests": 489, "pending": 12, "prog exec time": 339, "reproducing": 2, "rpc recv": 10222414008, "rpc sent": 2683711480, "signal": 287888, "smash jobs": 16, "triage jobs": 7, "vm output": 34666433, "vm restarts [base]": 20, "vm restarts [new]": 61 } 2025/10/14 21:03:07 runner 6 connected 2025/10/14 21:03:31 runner 0 connected 2025/10/14 21:03:47 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 21:04:36 runner 6 connected 2025/10/14 21:05:07 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:05:07 
scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:05:21 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:05:21 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:05:32 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:05:32 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:05:55 runner 0 connected 2025/10/14 21:06:08 base crash: unregister_netdevice: waiting for DEV to become free 2025/10/14 21:06:09 runner 4 connected 2025/10/14 21:06:14 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/10/14 21:06:14 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/10/14 21:06:14 start reproducing 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/10/14 21:06:20 runner 5 connected 2025/10/14 21:06:56 runner 0 connected 2025/10/14 21:07:02 runner 8 connected 2025/10/14 21:07:58 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 156, "corpus": 42000, "corpus [files]": 1478, "corpus [symbols]": 16, "cover overflows": 66206, "coverage": 296361, "distributor delayed": 49081, "distributor undelayed": 49081, "distributor violated": 90, "exec candidate": 81571, "exec collide": 6209, "exec fuzz": 11818, "exec gen": 639, "exec hints": 7141, "exec inject": 0, "exec minimize": 6160, "exec retries": 16, "exec seeds": 703, "exec smash": 5832, "exec total [base]": 149137, "exec total [new]": 395995, "exec triage": 145177, "executor restarts [base]": 218, "executor restarts [new]": 579, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 17, "max signal": 304069, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3209, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44379, "no exec duration": 84168000000, "no exec requests": 508, "pending": 15, "prog exec time": 403, "reproducing": 3, "rpc recv": 10887058876, "rpc sent": 3025435056, "signal": 288025, "smash jobs": 19, "triage jobs": 9, "vm output": 38423337, "vm restarts [base]": 21, "vm restarts [new]": 68 } 2025/10/14 21:08:30 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:08:30 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:08:55 base crash: lost connection to test machine 2025/10/14 21:09:18 runner 6 connected 2025/10/14 21:09:27 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:09:27 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:09:44 runner 2 connected 2025/10/14 21:10:17 runner 4 connected 2025/10/14 21:11:55 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:11:55 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:11:58 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:11:58 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:12:45 runner 4 connected 2025/10/14 21:12:48 runner 8 connected 2025/10/14 21:12:58 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 221, "corpus": 42066, "corpus [files]": 1490, "corpus [symbols]": 16, "cover overflows": 68220, "coverage": 296467, "distributor delayed": 49282, "distributor undelayed": 49278, "distributor violated": 99, "exec candidate": 81571, "exec collide": 7388, "exec fuzz": 14147, "exec gen": 763, "exec hints": 8816, "exec inject": 0, "exec minimize": 7473, "exec 
retries": 16, "exec seeds": 892, "exec smash": 7602, "exec total [base]": 155692, "exec total [new]": 404909, "exec triage": 145516, "executor restarts [base]": 234, "executor restarts [new]": 599, "fault jobs": 0, "fuzzer jobs": 35, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 15, "max signal": 304299, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3866, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44488, "no exec duration": 97432000000, "no exec requests": 536, "pending": 19, "prog exec time": 352, "reproducing": 3, "rpc recv": 11408727660, "rpc sent": 3330310936, "signal": 288117, "smash jobs": 12, "triage jobs": 8, "vm output": 43224694, "vm restarts [base]": 22, "vm restarts [new]": 72 } 2025/10/14 21:13:07 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:13:07 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:13:20 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:13:20 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:13:57 runner 5 connected 2025/10/14 21:14:08 runner 6 connected 2025/10/14 21:14:12 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:14:12 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:14:47 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:14:47 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:15:00 runner 8 connected 2025/10/14 21:15:35 runner 5 connected 2025/10/14 21:15:53 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 21:16:06 crash "possible deadlock in ocfs2_init_acl" is already known 2025/10/14 21:16:06 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/10/14 21:16:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 21:16:42 runner 7 connected 2025/10/14 21:16:54 runner 6 connected 2025/10/14 21:17:08 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:17:08 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:17:19 repro finished 'possible deadlock in ocfs2_reserve_suballoc_bits', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 21:17:19 failed repro for "possible deadlock in ocfs2_reserve_suballoc_bits", err=%!s() 2025/10/14 21:17:19 "possible deadlock in ocfs2_reserve_suballoc_bits": saved crash log into 1760476639.crash.log 2025/10/14 21:17:19 "possible deadlock in ocfs2_reserve_suballoc_bits": saved repro log into 1760476639.repro.log 2025/10/14 21:17:57 runner 8 connected 2025/10/14 21:17:58 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 257, "corpus": 42105, "corpus [files]": 1491, "corpus [symbols]": 16, "cover overflows": 70024, "coverage": 296592, "distributor delayed": 49452, "distributor undelayed": 49452, "distributor violated": 108, "exec candidate": 81571, "exec collide": 8934, "exec fuzz": 17075, "exec gen": 933, "exec hints": 11860, "exec inject": 0, "exec minimize": 8499, "exec retries": 16, "exec seeds": 1016, "exec smash": 8584, "exec total [base]": 164781, "exec total [new]": 415014, "exec triage": 145800, "executor restarts [base]": 247, "executor restarts [new]": 630, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 14, "max signal": 304554, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4345, 
"minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44582, "no exec duration": 99969000000, "no exec requests": 558, "pending": 24, "prog exec time": 377, "reproducing": 2, "rpc recv": 11996380484, "rpc sent": 3622366952, "signal": 288204, "smash jobs": 12, "triage jobs": 6, "vm output": 45305905, "vm restarts [base]": 22, "vm restarts [new]": 79 } 2025/10/14 21:18:23 base crash: lost connection to test machine 2025/10/14 21:18:48 runner 0 connected 2025/10/14 21:18:49 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 21:19:14 runner 1 connected 2025/10/14 21:19:37 runner 5 connected 2025/10/14 21:20:19 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:20:19 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:20:25 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:20:55 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:20:55 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:21:01 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:21:07 runner 6 connected 2025/10/14 21:21:45 runner 8 connected 2025/10/14 21:22:03 base crash: WARNING in xfrm_state_fini 2025/10/14 21:22:26 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:22:26 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:22:52 runner 2 connected 2025/10/14 21:22:58 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 268, "corpus": 42166, "corpus [files]": 1491, "corpus [symbols]": 16, "cover overflows": 71619, "coverage": 296703, "distributor delayed": 49627, "distributor undelayed": 49627, "distributor violated": 108, "exec candidate": 81571, "exec collide": 9908, "exec fuzz": 18909, "exec gen": 1029, "exec hints": 12989, "exec inject": 0, "exec minimize": 9753, "exec retries": 16, "exec seeds": 1182, "exec smash": 10193, "exec total [base]": 170150, "exec total [new]": 422387, "exec triage": 146105, "executor restarts [base]": 253, "executor restarts [new]": 660, "fault jobs": 0, "fuzzer jobs": 43, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 13, "max signal": 304798, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5072, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44699, "no exec duration": 100052000000, "no exec requests": 559, "pending": 27, "prog exec time": 646, "reproducing": 2, "rpc recv": 12466243128, "rpc sent": 3806740456, "signal": 288306, "smash jobs": 18, "triage jobs": 12, "vm output": 49176698, "vm restarts [base]": 24, "vm restarts [new]": 83 } 2025/10/14 21:23:03 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:23:03 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:23:15 runner 4 connected 2025/10/14 21:23:53 runner 0 connected 2025/10/14 21:24:40 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:24:40 scheduled a 
reproduction of 'BUG: Bad page map' 2025/10/14 21:25:28 runner 6 connected 2025/10/14 21:25:59 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:25:59 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:26:33 patched crashed: general protection fault in txEnd [need repro = false] 2025/10/14 21:26:46 runner 8 connected 2025/10/14 21:27:10 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:27:10 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:27:22 runner 4 connected 2025/10/14 21:27:58 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 278, "corpus": 42230, "corpus [files]": 1491, "corpus [symbols]": 16, "cover overflows": 73267, "coverage": 296862, "distributor delayed": 49803, "distributor undelayed": 49803, "distributor violated": 108, "exec candidate": 81571, "exec collide": 11323, "exec fuzz": 21497, "exec gen": 1167, "exec hints": 14377, "exec inject": 0, "exec minimize": 11064, "exec retries": 16, "exec seeds": 1354, "exec smash": 11936, "exec total [base]": 175388, "exec total [new]": 431467, "exec triage": 146427, "executor restarts [base]": 262, "executor restarts [new]": 714, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 6, "max signal": 305079, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5772, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44814, "no exec duration": 104234000000, "no exec requests": 574, "pending": 31, "prog exec time": 395, "reproducing": 2, "rpc recv": 12950737604, "rpc sent": 3992654872, "signal": 288457, "smash jobs": 7, "triage jobs": 11, "vm output": 52036856, "vm restarts [base]": 24, "vm restarts [new]": 88 } 2025/10/14 21:27:59 runner 8 connected 2025/10/14 21:28:22 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 21:28:30 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 21:28:36 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:28:36 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:28:54 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 21:29:11 runner 6 connected 2025/10/14 21:29:19 runner 4 connected 2025/10/14 21:29:25 runner 0 connected 2025/10/14 21:29:42 runner 5 connected 2025/10/14 21:29:52 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:29:52 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:29:58 base crash: WARNING in xfrm6_tunnel_net_exit 2025/10/14 21:30:03 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/10/14 21:30:40 runner 0 connected 2025/10/14 21:30:47 runner 1 connected 2025/10/14 21:30:53 runner 7 connected 2025/10/14 21:32:08 base crash: lost connection to test machine 2025/10/14 21:32:15 patched crashed: KASAN: slab-use-after-free Write in lmLogSync [need repro = false] 2025/10/14 21:32:17 patched crashed: KASAN: slab-use-after-free Read in dtSplitPage [need repro = true] 2025/10/14 21:32:17 scheduled a reproduction of 'KASAN: slab-use-after-free Read in dtSplitPage' 2025/10/14 21:32:17 start reproducing 'KASAN: slab-use-after-free Read in dtSplitPage' 2025/10/14 21:32:58 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 315, "corpus": 42286, "corpus [files]": 1491, "corpus 
[symbols]": 16, "cover overflows": 75150, "coverage": 296961, "distributor delayed": 49941, "distributor undelayed": 49939, "distributor violated": 111, "exec candidate": 81571, "exec collide": 12874, "exec fuzz": 24432, "exec gen": 1301, "exec hints": 15638, "exec inject": 0, "exec minimize": 12218, "exec retries": 16, "exec seeds": 1520, "exec smash": 13262, "exec total [base]": 181738, "exec total [new]": 440261, "exec triage": 146690, "executor restarts [base]": 271, "executor restarts [new]": 758, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 305248, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6471, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44915, "no exec duration": 104367000000, "no exec requests": 577, "pending": 33, "prog exec time": 481, "reproducing": 3, "rpc recv": 13531778580, "rpc sent": 4240799544, "signal": 288552, "smash jobs": 9, "triage jobs": 12, "vm output": 55655480, "vm restarts [base]": 25, "vm restarts [new]": 95 } 2025/10/14 21:32:59 runner 1 connected 2025/10/14 21:33:04 runner 6 connected 2025/10/14 21:33:16 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:34:00 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:34:00 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:34:20 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:34:20 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:34:26 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:34:51 runner 4 connected 2025/10/14 21:34:52 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:34:52 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:35:17 runner 6 connected 2025/10/14 21:35:34 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 21:35:49 runner 5 connected 2025/10/14 21:35:51 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:36:02 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:36:02 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:36:04 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/10/14 21:36:04 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/10/14 21:36:04 start reproducing 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/10/14 21:36:24 runner 8 connected 2025/10/14 21:36:41 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 
21:36:53 runner 7 connected 2025/10/14 21:37:15 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:37:23 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:37:50 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:37:58 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 354, "corpus": 42323, "corpus [files]": 1492, "corpus [symbols]": 16, "cover overflows": 75977, "coverage": 297043, "distributor delayed": 50034, "distributor undelayed": 50027, "distributor violated": 111, "exec candidate": 81571, "exec collide": 13402, "exec fuzz": 25490, "exec gen": 1361, "exec hints": 16158, "exec inject": 0, "exec minimize": 12872, "exec retries": 17, "exec seeds": 1635, "exec smash": 14072, "exec total [base]": 188678, "exec total [new]": 444132, "exec triage": 146813, "executor restarts [base]": 286, "executor restarts [new]": 814, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 7, "max signal": 305395, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6910, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44968, "no exec duration": 105293000000, "no exec requests": 580, "pending": 37, "prog exec time": 885, "reproducing": 4, "rpc recv": 14091689592, "rpc sent": 4449965152, "signal": 288630, "smash jobs": 10, "triage jobs": 14, "vm output": 59241163, "vm restarts [base]": 26, "vm restarts [new]": 101 } 2025/10/14 21:38:02 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:38:09 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:38:21 repro finished 'INFO: task hung in reg_check_chans_work', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 21:38:21 failed repro for "INFO: task hung in reg_check_chans_work", err=%!s() 2025/10/14 21:38:21 "INFO: task hung in reg_check_chans_work": saved crash log into 1760477901.crash.log 2025/10/14 21:38:21 "INFO: task hung in reg_check_chans_work": saved repro log into 1760477901.repro.log 2025/10/14 21:38:30 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:38:32 patched crashed: possible 
deadlock in run_unpack_ex [need repro = true] 2025/10/14 21:38:32 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/10/14 21:38:32 start reproducing 'possible deadlock in run_unpack_ex' 2025/10/14 21:39:07 base crash: possible deadlock in run_unpack_ex 2025/10/14 21:39:08 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:39:18 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:39:20 runner 8 connected 2025/10/14 21:39:38 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:39:38 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:39:39 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:39:55 runner 0 connected 2025/10/14 21:40:13 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 21:40:26 runner 7 connected 2025/10/14 21:40:26 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:40:30 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:41:01 runner 8 connected 2025/10/14 21:41:06 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:41:29 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:41:29 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:41:48 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:41:55 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:42:17 runner 7 connected 2025/10/14 21:42:24 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:42:25 reproducing crash 'KASAN: 
slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:42:58 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 372, "corpus": 42353, "corpus [files]": 1493, "corpus [symbols]": 16, "cover overflows": 76769, "coverage": 297134, "distributor delayed": 50118, "distributor undelayed": 50118, "distributor violated": 118, "exec candidate": 81571, "exec collide": 14004, "exec fuzz": 26482, "exec gen": 1414, "exec hints": 16825, "exec inject": 0, "exec minimize": 13544, "exec retries": 18, "exec seeds": 1713, "exec smash": 14886, "exec total [base]": 193677, "exec total [new]": 448142, "exec triage": 146941, "executor restarts [base]": 299, "executor restarts [new]": 843, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 5, "max signal": 305541, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7406, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45017, "no exec duration": 419578000000, "no exec requests": 1594, "pending": 39, "prog exec time": 408, "reproducing": 4, "rpc recv": 14498683784, "rpc sent": 4613178400, "signal": 288703, "smash jobs": 4, "triage jobs": 9, "vm output": 61746587, "vm restarts [base]": 27, "vm restarts [new]": 105 } 2025/10/14 21:43:06 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:43:13 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:43:37 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:43:37 base crash: WARNING in xfrm_state_fini 2025/10/14 21:43:42 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:43:42 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:43:42 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:44:23 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:44:26 runner 0 connected 2025/10/14 21:44:28 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or 
directory 2025/10/14 21:44:30 runner 7 connected 2025/10/14 21:44:34 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:44:54 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:44:58 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:45:38 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:45:48 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:46:14 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:46:18 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:46:34 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:46:34 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:46:56 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:47:03 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 21:47:10 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:47:10 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:47:21 runner 6 connected 2025/10/14 21:47:28 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:47:45 runner 8 connected 2025/10/14 21:47:52 runner 7 connected 2025/10/14 21:47:55 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:47:58 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 
389, "corpus": 42392, "corpus [files]": 1493, "corpus [symbols]": 16, "cover overflows": 77460, "coverage": 297237, "distributor delayed": 50190, "distributor undelayed": 50189, "distributor violated": 118, "exec candidate": 81571, "exec collide": 14542, "exec fuzz": 27423, "exec gen": 1463, "exec hints": 17463, "exec inject": 0, "exec minimize": 14449, "exec retries": 18, "exec seeds": 1819, "exec smash": 15627, "exec total [base]": 197714, "exec total [new]": 452170, "exec triage": 147055, "executor restarts [base]": 312, "executor restarts [new]": 869, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 12, "max signal": 305632, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7970, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45067, "no exec duration": 708919000000, "no exec requests": 2600, "pending": 42, "prog exec time": 1555, "reproducing": 4, "rpc recv": 14890906656, "rpc sent": 4754725000, "signal": 288775, "smash jobs": 17, "triage jobs": 5, "vm output": 64154648, "vm restarts [base]": 28, "vm restarts [new]": 109 } 2025/10/14 21:48:14 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:48:39 base crash: INFO: task hung in __iterate_supers 2025/10/14 21:48:46 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:49:03 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:49:14 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:49:20 runner 2 connected 2025/10/14 21:49:27 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:49:31 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:49:31 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:50:08 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:50:13 runner 8 connected 2025/10/14 21:50:17 patched crashed: INFO: task hung in rfkill_global_led_trigger_worker [need repro = true] 2025/10/14 21:50:17 scheduled a reproduction of 'INFO: task hung in rfkill_global_led_trigger_worker' 2025/10/14 21:50:36 reproducing crash 'possible 
deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:51:05 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:51:05 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:51:07 runner 6 connected 2025/10/14 21:51:21 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:51:24 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:51:32 base crash: KASAN: slab-use-after-free Read in l2cap_unregister_user 2025/10/14 21:51:52 repro finished 'possible deadlock in run_unpack_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 21:51:52 failed repro for "possible deadlock in run_unpack_ex", err=%!s() 2025/10/14 21:51:52 start reproducing 'INFO: task hung in rfkill_global_led_trigger_worker' 2025/10/14 21:51:52 "possible deadlock in run_unpack_ex": saved crash log into 1760478712.crash.log 2025/10/14 21:51:52 "possible deadlock in run_unpack_ex": saved repro log into 1760478712.repro.log 2025/10/14 21:51:52 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:51:55 runner 7 connected 2025/10/14 21:52:22 runner 2 connected 2025/10/14 21:52:41 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:52:58 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 405, "corpus": 42411, "corpus [files]": 1493, "corpus [symbols]": 16, "cover overflows": 78004, "coverage": 297381, "distributor delayed": 50250, "distributor undelayed": 50250, "distributor violated": 120, "exec candidate": 81571, "exec collide": 14822, "exec fuzz": 27979, "exec gen": 1491, "exec hints": 17777, "exec inject": 0, "exec minimize": 14933, "exec retries": 18, "exec seeds": 1877, "exec smash": 16020, "exec total [base]": 199912, "exec total [new]": 454387, "exec triage": 147153, "executor restarts [base]": 342, "executor restarts [new]": 921, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 8, "max signal": 305854, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8344, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45105, "no exec duration": 921904000000, "no exec requests": 3136, "pending": 44, "prog exec time": 685, "reproducing": 4, "rpc recv": 15211750992, "rpc sent": 4854110504, "signal": 288932, "smash jobs": 17, "triage jobs": 6, "vm output": 66964135, "vm 
restarts [base]": 30, "vm restarts [new]": 112 } 2025/10/14 21:53:05 base crash: general protection fault in pcl818_ai_cancel 2025/10/14 21:53:09 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:53:27 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:53:47 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/10/14 21:53:55 runner 2 connected 2025/10/14 21:54:00 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:54:29 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:54:35 runner 6 connected 2025/10/14 21:54:37 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:54:37 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:54:38 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:55:15 reproducing crash 'KASAN: slab-use-after-free Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:55:15 repro finished 'KASAN: slab-use-after-free Read in dtSplitPage', repro=true crepro=false desc='KASAN: slab-out-of-bounds Read in dtSplitPage' hub=false from_dashboard=false 2025/10/14 21:55:15 found repro for "KASAN: slab-out-of-bounds Read in dtSplitPage" (orig title: "KASAN: slab-use-after-free Read in dtSplitPage", reliability: 1), took 22.50 minutes 2025/10/14 21:55:15 "KASAN: slab-out-of-bounds Read in dtSplitPage": saved crash log into 1760478915.crash.log 2025/10/14 21:55:15 "KASAN: slab-out-of-bounds Read in dtSplitPage": saved repro log into 1760478915.repro.log 2025/10/14 21:55:21 base crash: possible deadlock in ocfs2_evict_inode 2025/10/14 21:55:21 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 21:55:26 runner 7 connected 2025/10/14 21:55:31 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:55:36 runner 1 connected 2025/10/14 21:56:00 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:56:00 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:56:00 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:56:10 runner 8 connected 2025/10/14 21:56:10 runner 1 connected 2025/10/14 21:56:33 attempt #0 to run "KASAN: slab-out-of-bounds Read in dtSplitPage" on base: crashed with KASAN: slab-use-after-free Read in dtSplitPage 2025/10/14 21:56:33 crashes both: KASAN: slab-out-of-bounds Read in dtSplitPage / KASAN: slab-use-after-free Read in dtSplitPage 2025/10/14 21:56:46 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:56:50 runner 7 connected 2025/10/14 21:56:53 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:56:53 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:57:22 runner 0 connected 2025/10/14 21:57:30 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:57:30 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:57:41 runner 0 connected 2025/10/14 21:57:43 runner 8 connected 2025/10/14 21:57:58 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 441, "corpus": 42435, "corpus [files]": 1493, "corpus [symbols]": 16, "cover overflows": 79026, "coverage": 297443, "distributor delayed": 50300, "distributor undelayed": 50299, "distributor violated": 120, "exec candidate": 81571, "exec collide": 15602, "exec fuzz": 29460, "exec gen": 1561, "exec hints": 18287, "exec inject": 0, "exec minimize": 15492, "exec retries": 18, "exec seeds": 1950, "exec smash": 16758, "exec total [base]": 203913, "exec total [new]": 458682, "exec triage": 147233, "executor restarts [base]": 364, "executor restarts [new]": 965, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 3, "max signal": 305934, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8698, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45138, "no exec duration": 1041383000000, "no exec requests": 3533, "pending": 48, "prog exec time": 474, "reproducing": 3, "rpc recv": 15755666208, "rpc sent": 4995491376, "signal": 288997, "smash jobs": 2, "triage jobs": 7, "vm output": 69539008, "vm restarts [base]": 33, "vm restarts [new]": 119 } 2025/10/14 21:58:19 runner 6 connected 2025/10/14 21:58:24 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:58:39 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:58:39 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:58:54 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:59:02 base crash: possible deadlock in 
ocfs2_init_acl 2025/10/14 21:59:25 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:59:25 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:59:29 runner 1 connected 2025/10/14 21:59:38 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 21:59:42 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 21:59:42 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 21:59:50 runner 0 connected 2025/10/14 22:00:07 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:00:13 runner 8 connected 2025/10/14 22:00:31 runner 0 connected 2025/10/14 22:00:44 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 22:00:50 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 22:00:52 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:00:54 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:00:54 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:01:20 patched crashed: KASAN: slab-use-after-free Read in hdm_disconnect [need repro = true] 2025/10/14 22:01:20 scheduled a reproduction of 'KASAN: slab-use-after-free Read in hdm_disconnect' 2025/10/14 22:01:20 start reproducing 'KASAN: slab-use-after-free Read in hdm_disconnect' 2025/10/14 22:01:24 reproducing crash 'possible deadlock in ocfs2_reserve_suballoc_bits': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:01:24 repro finished 'possible deadlock in ocfs2_reserve_suballoc_bits', repro=true crepro=false desc='possible deadlock in ocfs2_reserve_suballoc_bits' hub=false from_dashboard=false 2025/10/14 22:01:24 found repro for "possible deadlock in ocfs2_reserve_suballoc_bits" (orig title: "-SAME-", reliability: 1), took 24.29 minutes 2025/10/14 22:01:24 "possible deadlock in ocfs2_reserve_suballoc_bits": saved crash log into 1760479284.crash.log 2025/10/14 22:01:24 "possible deadlock in ocfs2_reserve_suballoc_bits": saved repro log into 1760479284.repro.log 2025/10/14 22:01:30 base crash: possible deadlock in ocfs2_init_acl 2025/10/14 22:01:33 runner 7 connected 2025/10/14 22:01:43 runner 6 connected 2025/10/14 22:02:05 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:02:05 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:02:09 runner 8 connected 2025/10/14 22:02:09 runner 0 connected 2025/10/14 22:02:19 runner 2 connected 2025/10/14 22:02:28 runner 1 connected 2025/10/14 22:02:38 attempt #0 to run "possible deadlock in ocfs2_reserve_suballoc_bits" on base: crashed with possible deadlock in ocfs2_reserve_suballoc_bits 2025/10/14 22:02:38 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already 
known 2025/10/14 22:02:38 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored 2025/10/14 22:02:38 crashes both: possible deadlock in ocfs2_reserve_suballoc_bits / possible deadlock in ocfs2_reserve_suballoc_bits 2025/10/14 22:02:53 runner 7 connected 2025/10/14 22:02:58 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 471, "corpus": 42469, "corpus [files]": 1494, "corpus [symbols]": 16, "cover overflows": 80095, "coverage": 297526, "distributor delayed": 50383, "distributor undelayed": 50383, "distributor violated": 120, "exec candidate": 81571, "exec collide": 16536, "exec fuzz": 31262, "exec gen": 1639, "exec hints": 18983, "exec inject": 0, "exec minimize": 16241, "exec retries": 19, "exec seeds": 2035, "exec smash": 17444, "exec total [base]": 208456, "exec total [new]": 463844, "exec triage": 147360, "executor restarts [base]": 378, "executor restarts [new]": 1001, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 8, "max signal": 306027, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9097, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45188, "no exec duration": 1041413000000, "no exec requests": 3534, "pending": 53, "prog exec time": 434, "reproducing": 3, "rpc recv": 16348131888, "rpc sent": 5162598976, "signal": 289066, "smash jobs": 7, "triage jobs": 6, "vm output": 71598603, "vm restarts [base]": 35, "vm restarts [new]": 129 } 2025/10/14 22:02:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 22:03:23 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:03:23 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:03:26 runner 0 connected 2025/10/14 22:03:28 base crash: possible deadlock in ocfs2_init_acl 2025/10/14 22:03:29 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 22:03:31 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:03:48 runner 1 connected 2025/10/14 22:03:51 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:03:51 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:03:54 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:03:59 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:03:59 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:04:03 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:04:11 runner 8 connected 2025/10/14 22:04:18 runner 1 connected 2025/10/14 22:04:19 runner 2 connected 2025/10/14 22:04:39 runner 0 connected 2025/10/14 22:04:47 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl 
--git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:04:48 runner 7 connected 2025/10/14 22:05:09 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:05:23 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:06:04 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:06:15 base crash: lost connection to test machine 2025/10/14 22:06:29 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:06:39 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:07:03 runner 1 connected 2025/10/14 22:07:20 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:07:30 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:07:30 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:07:33 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:07:44 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:07:49 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:07:49 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:07:58 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 494, "corpus": 42502, "corpus [files]": 1495, "corpus [symbols]": 16, "cover overflows": 81583, "coverage": 297570, "distributor delayed": 50477, "distributor undelayed": 50473, "distributor violated": 120, "exec candidate": 81571, "exec collide": 17671, "exec fuzz": 33412, "exec gen": 1764, "exec hints": 20045, "exec inject": 0, "exec minimize": 17226, "exec retries": 19, "exec seeds": 2131, "exec smash": 18349, "exec total [base]": 212759, "exec total [new]": 470453, "exec triage": 147511, "executor restarts [base]": 405, "executor restarts [new]": 1043, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 8, "max signal": 306127, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9635, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules 
[base]": 1, "modules [new]": 1, "new inputs": 45246, "no exec duration": 1044849000000, "no exec requests": 3545, "pending": 58, "prog exec time": 7481, "reproducing": 3, "rpc recv": 16870886852, "rpc sent": 5329271160, "signal": 289105, "smash jobs": 5, "triage jobs": 8, "vm output": 73867517, "vm restarts [base]": 39, "vm restarts [new]": 133 } 2025/10/14 22:07:59 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:08:20 runner 0 connected 2025/10/14 22:08:29 runner 1 connected 2025/10/14 22:08:37 runner 6 connected 2025/10/14 22:08:42 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:09:07 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:09:38 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:09:56 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:09:56 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:10:01 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:10:44 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 22:10:47 runner 6 connected 2025/10/14 22:10:51 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/10/14 22:11:00 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:11:01 base crash: INFO: task hung in corrupted 2025/10/14 22:11:33 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:11:34 runner 2 connected 2025/10/14 22:11:41 patched crashed: no output from test machine [need repro = false] 2025/10/14 22:11:42 runner 7 connected 2025/10/14 22:11:50 runner 1 connected 2025/10/14 22:12:16 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:12:30 runner 8 connected 2025/10/14 22:12:32 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:14671: connect: 
connection refused 2025/10/14 22:12:32 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:14671: connect: connection refused 2025/10/14 22:12:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 22:12:42 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:12:46 base crash: possible deadlock in ocfs2_init_acl 2025/10/14 22:12:51 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:12:58 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 499, "corpus": 42540, "corpus [files]": 1495, "corpus [symbols]": 16, "cover overflows": 82086, "coverage": 297630, "distributor delayed": 50572, "distributor undelayed": 50570, "distributor violated": 127, "exec candidate": 81571, "exec collide": 18095, "exec fuzz": 34239, "exec gen": 1806, "exec hints": 20308, "exec inject": 0, "exec minimize": 18119, "exec retries": 19, "exec seeds": 2187, "exec smash": 18968, "exec total [base]": 216393, "exec total [new]": 473719, "exec triage": 147648, "executor restarts [base]": 416, "executor restarts [new]": 1070, "fault jobs": 0, "fuzzer jobs": 42, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 11, "max signal": 306212, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10201, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45302, "no exec duration": 1045185000000, "no exec requests": 3548, "pending": 59, "prog exec time": 679, "reproducing": 3, "rpc recv": 17317447644, "rpc sent": 5435299968, "signal": 289163, "smash jobs": 23, "triage jobs": 8, "vm output": 78100527, "vm restarts [base]": 41, "vm restarts [new]": 139 } 2025/10/14 22:13:27 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:22698: connect: connection refused 2025/10/14 22:13:27 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:22698: connect: connection refused 2025/10/14 22:13:29 runner 0 connected 2025/10/14 22:13:37 base crash: lost connection to test machine 2025/10/14 22:13:37 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:13:39 runner 6 connected 2025/10/14 22:13:42 runner 2 connected 2025/10/14 22:14:08 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:14:27 runner 1 connected 2025/10/14 22:14:33 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:14:33 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:15:09 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:15:24 runner 8 connected 2025/10/14 22:15:37 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:16:01 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:16:01 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:16:21 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:16:49 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:16:49 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:16:52 runner 0 connected 2025/10/14 22:16:55 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:17:47 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:17:47 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:17:47 runner 6 connected 2025/10/14 22:17:58 STAT { "buffer too small": 9, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 516, "corpus": 42584, "corpus [files]": 1496, "corpus [symbols]": 16, "cover overflows": 83570, "coverage": 297700, "distributor delayed": 50691, "distributor undelayed": 50690, "distributor violated": 131, "exec candidate": 81571, "exec collide": 19100, "exec fuzz": 36063, "exec gen": 1900, "exec hints": 21588, "exec inject": 0, "exec minimize": 18947, "exec retries": 19, "exec seeds": 2292, "exec smash": 20287, "exec total [base]": 219785, "exec total [new]": 480374, "exec triage": 147846, "executor restarts [base]": 431, "executor restarts [new]": 1089, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 9, "max signal": 306398, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10656, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45373, "no exec duration": 1051659000000, "no exec requests": 3558, "pending": 63, "prog exec time": 487, "reproducing": 3, "rpc recv": 17746403484, "rpc sent": 5589297168, "signal": 289226, "smash jobs": 8, "triage jobs": 5, "vm output": 83021070, "vm restarts [base]": 43, "vm restarts [new]": 144 } 2025/10/14 22:18:35 runner 1 connected 2025/10/14 22:18:55 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:19:08 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:19:08 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:19:21 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:19:22 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:19:22 scheduled a reproduction of 'BUG: 
Bad page map' 2025/10/14 22:19:22 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:19:24 base crash: INFO: task hung in reg_check_chans_work 2025/10/14 22:19:55 runner 8 connected 2025/10/14 22:20:04 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:20:04 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:20:05 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:20:05 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:20:09 runner 7 connected 2025/10/14 22:20:10 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:20:11 runner 6 connected 2025/10/14 22:20:13 runner 0 connected 2025/10/14 22:20:45 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:20:49 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:20:49 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:20:52 runner 1 connected 2025/10/14 22:20:54 runner 0 connected 2025/10/14 22:21:30 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:21:37 runner 7 connected 2025/10/14 22:22:01 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:22:12 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:22:12 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:22:47 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:22:58 STAT { "buffer too small": 10, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 519, "corpus": 42622, "corpus [files]": 1496, "corpus [symbols]": 16, "cover overflows": 84809, "coverage": 297771, "distributor delayed": 50817, "distributor undelayed": 50814, "distributor violated": 131, "exec candidate": 81571, "exec collide": 20297, "exec fuzz": 38266, "exec gen": 2014, "exec hints": 22438, "exec inject": 0, "exec minimize": 19702, "exec retries": 47, "exec seeds": 2395, "exec smash": 21037, "exec total [base]": 224709, "exec total [new]": 486571, "exec triage": 148039, "executor restarts [base]": 451, "executor restarts [new]": 1157, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 7, "max signal": 306574, "minimize: 
array": 0, "minimize: buffer": 0, "minimize: call": 11173, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45451, "no exec duration": 1055555000000, "no exec requests": 3569, "pending": 69, "prog exec time": 451, "reproducing": 3, "rpc recv": 18247717848, "rpc sent": 5755404472, "signal": 289284, "smash jobs": 11, "triage jobs": 13, "vm output": 86095996, "vm restarts [base]": 44, "vm restarts [new]": 151 } 2025/10/14 22:23:01 runner 8 connected 2025/10/14 22:23:15 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:23:33 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:23:33 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:23:36 base crash: WARNING in xfrm_state_fini 2025/10/14 22:24:02 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:24:23 runner 8 connected 2025/10/14 22:24:24 runner 2 connected 2025/10/14 22:24:28 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:25:18 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:25:32 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:25:32 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:25:40 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 22:25:46 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:25:46 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:26:21 runner 6 connected 2025/10/14 22:26:28 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:26:29 runner 0 connected 2025/10/14 22:26:34 runner 7 connected 2025/10/14 22:26:35 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:26:55 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:27:45 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:27:46 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:27:50 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:27:58 STAT { "buffer too small": 10, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 569, "corpus": 42677, "corpus [files]": 1496, "corpus [symbols]": 16, "cover overflows": 86347, "coverage": 297947, "distributor delayed": 50971, "distributor undelayed": 50961, "distributor violated": 139, "exec candidate": 81571, "exec collide": 21246, "exec fuzz": 40083, "exec gen": 2106, "exec hints": 23557, "exec inject": 0, "exec minimize": 20799, "exec retries": 47, "exec seeds": 2568, "exec smash": 22423, "exec total [base]": 230353, "exec total [new]": 493449, "exec triage": 148286, "executor restarts [base]": 475, "executor restarts [new]": 1195, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 10, "max signal": 306789, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11846, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45542, "no exec duration": 1055899000000, "no exec requests": 3575, "pending": 72, "prog exec time": 444, "reproducing": 3, "rpc recv": 18722582992, "rpc sent": 5939425192, "signal": 289408, "smash jobs": 8, "triage jobs": 15, "vm output": 88681073, "vm restarts [base]": 45, "vm restarts [new]": 156 } 2025/10/14 22:28:11 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:28:17 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:28:17 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:28:22 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/10/14 22:28:34 runner 1 connected 2025/10/14 22:29:06 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:29:07 reproducing crash 'KASAN: slab-use-after-free Read in hdm_disconnect': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/most/most_usb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:29:07 repro finished 'KASAN: slab-use-after-free Read in hdm_disconnect', repro=true crepro=false desc='KASAN: slab-use-after-free Read in hdm_disconnect' hub=false from_dashboard=false 2025/10/14 22:29:07 found repro for "KASAN: slab-use-after-free Read in hdm_disconnect" (orig title: "-SAME-", reliability: 1), took 27.75 minutes 2025/10/14 22:29:07 "KASAN: slab-use-after-free Read in hdm_disconnect": saved crash log into 1760480947.crash.log 2025/10/14 22:29:07 "KASAN: slab-use-after-free Read in hdm_disconnect": saved repro log into 1760480947.repro.log 2025/10/14 22:29:07 runner 0 
connected 2025/10/14 22:29:11 runner 8 connected 2025/10/14 22:29:32 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:29:52 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:29:52 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:29:56 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:29:56 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:29:56 runner 2 connected 2025/10/14 22:30:22 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:30:25 attempt #0 to run "KASAN: slab-use-after-free Read in hdm_disconnect" on base: crashed with KASAN: slab-use-after-free Read in hdm_disconnect 2025/10/14 22:30:25 crashes both: KASAN: slab-use-after-free Read in hdm_disconnect / KASAN: slab-use-after-free Read in hdm_disconnect 2025/10/14 22:30:42 runner 1 connected 2025/10/14 22:30:43 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:30:43 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:30:44 runner 0 connected 2025/10/14 22:31:03 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:31:03 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:31:14 runner 0 connected 2025/10/14 22:31:31 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 22:31:31 runner 7 connected 2025/10/14 22:31:32 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:31:45 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 22:31:52 runner 2 connected 2025/10/14 22:32:01 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:32:18 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:32:18 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:32:21 runner 1 connected 2025/10/14 22:32:34 runner 2 connected 2025/10/14 22:32:48 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:32:58 STAT { "buffer too small": 10, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 594, "corpus": 42711, "corpus [files]": 1496, "corpus [symbols]": 16, "cover overflows": 88506, "coverage": 298010, "distributor delayed": 51053, "distributor undelayed": 51053, "distributor violated": 144, "exec candidate": 81571, "exec collide": 22785, "exec fuzz": 42944, "exec gen": 2279, "exec hints": 25863, "exec inject": 0, "exec minimize": 21422, "exec retries": 48, "exec seeds": 2670, "exec smash": 23250, "exec total [base]": 233788, "exec total [new]": 502042, "exec triage": 148440, "executor restarts [base]": 494, "executor restarts [new]": 1248, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 3, 
"fuzzing VMs [new]": 5, "hints jobs": 11, "max signal": 306884, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 12241, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45596, "no exec duration": 1068770000000, "no exec requests": 3595, "pending": 78, "prog exec time": 344, "reproducing": 2, "rpc recv": 19252069328, "rpc sent": 6138654056, "signal": 289468, "smash jobs": 10, "triage jobs": 6, "vm output": 91377109, "vm restarts [base]": 48, "vm restarts [new]": 164 } 2025/10/14 22:33:06 runner 0 connected 2025/10/14 22:33:13 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 22:33:16 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:33:29 crash "possible deadlock in ocfs2_xattr_set" is already known 2025/10/14 22:33:29 base crash "possible deadlock in ocfs2_xattr_set" is to be ignored 2025/10/14 22:33:29 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/10/14 22:33:56 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 22:34:02 runner 0 connected 2025/10/14 22:34:05 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:34:18 runner 7 connected 2025/10/14 22:34:35 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:34:43 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:34:43 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:34:52 runner 6 connected 2025/10/14 22:35:21 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:35:21 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:35:22 base crash: WARNING in xfrm_state_fini 2025/10/14 22:35:27 repro finished 'INFO: task hung in rfkill_global_led_trigger_worker', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 22:35:27 failed repro for "INFO: task hung in rfkill_global_led_trigger_worker", err=%!s() 2025/10/14 22:35:27 "INFO: task hung in rfkill_global_led_trigger_worker": saved crash log into 1760481327.crash.log 2025/10/14 22:35:27 "INFO: task hung in rfkill_global_led_trigger_worker": saved repro log into 1760481327.repro.log 2025/10/14 22:35:32 runner 2 connected 2025/10/14 22:35:33 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:36:02 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:36:09 runner 0 connected 2025/10/14 22:36:11 runner 0 connected 2025/10/14 22:36:16 runner 3 connected 2025/10/14 22:36:29 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 
127.0.0.1:2044: connect: connection refused 2025/10/14 22:36:29 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2044: connect: connection refused 2025/10/14 22:36:29 patched crashed: possible deadlock in ocfs2_calc_xattr_init [need repro = true] 2025/10/14 22:36:29 scheduled a reproduction of 'possible deadlock in ocfs2_calc_xattr_init' 2025/10/14 22:36:29 start reproducing 'possible deadlock in ocfs2_calc_xattr_init' 2025/10/14 22:36:39 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:36:39 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:36:39 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:36:52 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:37:18 runner 1 connected 2025/10/14 22:37:20 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:37:27 runner 6 connected 2025/10/14 22:37:29 runner 2 connected 2025/10/14 22:37:54 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:37:54 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:37:58 STAT { "buffer too small": 10, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 634, "corpus": 42752, "corpus [files]": 1497, "corpus [symbols]": 16, "cover overflows": 90471, "coverage": 298085, "distributor delayed": 51165, "distributor undelayed": 51165, "distributor violated": 144, "exec candidate": 81571, "exec collide": 24215, "exec fuzz": 45666, "exec gen": 2433, "exec hints": 27180, "exec inject": 0, "exec minimize": 22508, "exec retries": 49, "exec seeds": 2793, "exec smash": 24311, "exec total [base]": 239167, "exec total [new]": 510153, "exec triage": 148655, "executor restarts [base]": 510, "executor restarts [new]": 1298, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 7, "max signal": 307027, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 12875, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45673, "no exec duration": 1077586000000, "no exec requests": 3607, "pending": 82, "prog exec time": 457, "reproducing": 2, "rpc recv": 19899341692, "rpc sent": 6362782272, "signal": 289545, "smash jobs": 6, "triage jobs": 6, "vm output": 94924791, "vm restarts [base]": 50, "vm restarts [new]": 173 } 2025/10/14 22:38:05 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:38:05 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:38:09 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:38:17 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:38:47 runner 6 connected 2025/10/14 22:39:02 runner 7 connected 2025/10/14 22:39:13 runner 2 connected 2025/10/14 22:39:17 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:39:24 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:39:24 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:39:36 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:39:46 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:39:53 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:39:53 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:40:06 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 22:40:14 base crash: WARNING in xfrm_state_fini 2025/10/14 22:40:14 runner 1 connected 2025/10/14 22:40:25 runner 6 connected 2025/10/14 22:40:34 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:40:42 runner 8 connected 2025/10/14 22:40:55 runner 2 connected 2025/10/14 22:41:02 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:41:02 runner 0 connected 2025/10/14 22:41:32 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/10/14 22:41:49 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:42:19 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:42:21 runner 1 connected 2025/10/14 22:42:57 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:42:57 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:42:57 base crash: lost connection to test machine 2025/10/14 22:42:58 STAT { "buffer too small": 10, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 663, "corpus": 42783, "corpus [files]": 1499, "corpus [symbols]": 16, "cover overflows": 92502, "coverage": 298134, "distributor delayed": 51268, "distributor undelayed": 51267, "distributor violated": 146, "exec candidate": 81571, "exec collide": 25970, "exec fuzz": 49112, "exec gen": 2597, "exec hints": 28708, "exec inject": 0, "exec minimize": 23198, "exec retries": 49, "exec seeds": 2886, "exec smash": 25061, "exec total [base]": 244605, "exec total [new]": 518762, "exec triage": 148833, "executor restarts [base]": 526, "executor restarts [new]": 1341, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 5, "max signal": 307131, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13297, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45736, "no exec duration": 1082614000000, 
"no exec requests": 3616, "pending": 86, "prog exec time": 455, "reproducing": 2, "rpc recv": 20464812936, "rpc sent": 6595119936, "signal": 289593, "smash jobs": 8, "triage jobs": 7, "vm output": 98816441, "vm restarts [base]": 52, "vm restarts [new]": 180 } 2025/10/14 22:43:04 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:43:36 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:43:53 runner 3 connected 2025/10/14 22:43:53 runner 1 connected 2025/10/14 22:44:33 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:44:33 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:44:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/10/14 22:44:42 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:45:10 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:45:22 runner 3 connected 2025/10/14 22:45:22 runner 6 connected 2025/10/14 22:45:46 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:45:46 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:46:00 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:46:14 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:46:14 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:46:33 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:46:34 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 22:46:35 runner 1 connected 2025/10/14 22:47:03 runner 7 connected 2025/10/14 22:47:16 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:47:24 runner 2 connected 2025/10/14 22:47:41 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:47:41 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:47:51 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:47:57 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:47:57 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:47:58 STAT { "buffer too small": 12, "candidate triage jobs": 0, 
"candidates": 0, "comps overflows": 693, "corpus": 42825, "corpus [files]": 1499, "corpus [symbols]": 16, "cover overflows": 94417, "coverage": 298193, "distributor delayed": 51371, "distributor undelayed": 51371, "distributor violated": 146, "exec candidate": 81571, "exec collide": 27703, "exec fuzz": 52404, "exec gen": 2755, "exec hints": 29674, "exec inject": 0, "exec minimize": 23831, "exec retries": 50, "exec seeds": 3015, "exec smash": 26042, "exec total [base]": 250746, "exec total [new]": 526837, "exec triage": 149018, "executor restarts [base]": 540, "executor restarts [new]": 1389, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 4, "max signal": 307240, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13733, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45807, "no exec duration": 1088422000000, "no exec requests": 3628, "pending": 91, "prog exec time": 476, "reproducing": 2, "rpc recv": 20993171612, "rpc sent": 6851343352, "signal": 289644, "smash jobs": 9, "triage jobs": 6, "vm output": 102242206, "vm restarts [base]": 53, "vm restarts [new]": 186 } 2025/10/14 22:48:00 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:48:00 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:48:37 runner 7 connected 2025/10/14 22:48:40 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:48:46 runner 8 connected 2025/10/14 22:48:57 runner 3 connected 2025/10/14 22:48:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 22:49:07 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:49:13 repro finished 'possible deadlock in ocfs2_calc_xattr_init', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 22:49:13 failed repro for "possible deadlock in ocfs2_calc_xattr_init", err=%!s() 2025/10/14 22:49:13 "possible deadlock in ocfs2_calc_xattr_init": saved crash log into 1760482153.crash.log 2025/10/14 22:49:13 "possible deadlock in ocfs2_calc_xattr_init": saved repro log into 1760482153.repro.log 2025/10/14 22:49:15 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/10/14 22:49:24 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:49:24 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:49:46 runner 7 connected 2025/10/14 22:49:58 patched crashed: BUG: Bad page map [need repro = true] 2025/10/14 22:49:58 scheduled a reproduction of 'BUG: Bad page map' 2025/10/14 22:49:59 reproducing crash 'BUG: Bad page map': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:50:03 runner 0 connected 2025/10/14 22:50:04 runner 1 connected 2025/10/14 22:50:09 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 22:50:14 runner 6 connected 2025/10/14 22:50:26 reproducing crash 'BUG: Bad page map': 
failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/ksm.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 22:50:48 runner 2 connected 2025/10/14 22:50:58 base crash: WARNING in xfrm_state_fini 2025/10/14 22:50:59 runner 1 connected 2025/10/14 22:51:46 runner 0 connected 2025/10/14 22:51:56 base crash: lost connection to test machine 2025/10/14 22:52:09 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 22:52:19 crash "possible deadlock in ocfs2_xattr_set" is already known 2025/10/14 22:52:19 base crash "possible deadlock in ocfs2_xattr_set" is to be ignored 2025/10/14 22:52:19 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/10/14 22:52:44 runner 1 connected 2025/10/14 22:52:53 bug reporting terminated 2025/10/14 22:52:53 status reporting terminated 2025/10/14 22:52:53 new: rpc server terminaled 2025/10/14 22:52:53 base: rpc server terminaled 2025/10/14 22:52:53 base: pool terminated 2025/10/14 22:52:53 base: kernel context loop terminated 2025/10/14 22:58:22 reproducing crash 'BUG: Bad page map': concatenation step failed with context deadline exceeded 2025/10/14 22:58:22 repro finished 'BUG: Bad page map', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 22:58:22 repro loop terminated 2025/10/14 22:58:22 new: pool terminated 2025/10/14 22:58:22 new: kernel context loop terminated 2025/10/14 22:58:22 diff fuzzing terminated 2025/10/14 22:58:22 fuzzing is finished
2025/10/14 22:58:22 status at the end:
Title  On-Base  On-Patched
BUG: Bad page map  95 crashes
BUG: sleeping function called from invalid context in hook_sb_delete  1 crashes
INFO: rcu detected stall in corrupted  1 crashes
INFO: task hung in __iterate_supers  1 crashes
INFO: task hung in corrupted  3 crashes  3 crashes
INFO: task hung in disable_device  1 crashes  1 crashes
INFO: task hung in reg_check_chans_work  1 crashes  1 crashes
INFO: task hung in rfkill_global_led_trigger_worker  1 crashes
KASAN: slab-out-of-bounds Read in dtSplitPage  [reproduced]
KASAN: slab-use-after-free Read in dtSplitPage  1 crashes  1 crashes
KASAN: slab-use-after-free Read in hdm_disconnect  1 crashes  1 crashes[reproduced]
KASAN: slab-use-after-free Read in l2cap_unregister_user  1 crashes
KASAN: slab-use-after-free Write in lmLogSync  1 crashes  1 crashes
WARNING in xfrm6_tunnel_net_exit  2 crashes  7 crashes
WARNING in xfrm_state_fini  8 crashes  10 crashes
general protection fault in pcl818_ai_cancel  1 crashes  3 crashes
general protection fault in txEnd  1 crashes  1 crashes
kernel BUG in jfs_evict_inode  2 crashes  2 crashes
kernel BUG in txUnlock  2 crashes  3 crashes
lost connection to test machine  8 crashes  21 crashes
no output from test machine  2 crashes
possible deadlock in btrfs_dirty_inode  1 crashes
possible deadlock in ntfs_fiemap  1 crashes
possible deadlock in ocfs2_calc_xattr_init  1 crashes
possible deadlock in ocfs2_evict_inode  1 crashes
possible deadlock in ocfs2_init_acl  4 crashes  7 crashes
possible deadlock in ocfs2_reserve_suballoc_bits  2 crashes  2 crashes[reproduced]
possible deadlock in ocfs2_try_remove_refcount_tree  8 crashes  8 crashes
possible deadlock in ocfs2_xattr_set  2 crashes
possible deadlock in run_unpack_ex  1 crashes  1 crashes
unregister_netdevice: waiting for DEV to become free  2 crashes  4 crashes
2025/10/14 22:58:22 possibly patched-only: BUG: Bad page map
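For readers who want to sanity-check a "possibly patched-only" verdict against a raw log like the one above, the sketch below is one way to do it. It is not part of syzkaller; the only things it relies on are the "base crash: <title>" and "patched crashed: <title> [need repro = ...]" wordings visible in this log, and the script name, regexes, and output format are all illustrative assumptions. It simply tallies crash titles per kernel and flags titles that were reported on the patched kernel but never on base.

```python
#!/usr/bin/env python3
# Rough sketch, not part of syzkaller: tally crash titles from a diff-fuzzing
# log like the one above and flag titles seen only on the patched kernel.
# Assumptions: entries start with a 'YYYY/MM/DD HH:MM:SS ' timestamp and use
# the 'base crash: <title>' / 'patched crashed: <title> [need repro = ...]'
# wordings shown in this log; everything else is guesswork.
import re
import sys
from collections import Counter

# Entries begin with a timestamp; splitting on it also copes with extracted
# logs where several entries ended up on one physical line.
ENTRY_START = re.compile(r"(?=\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} )")
BASE = re.compile(r"base crash: (.+?)(?: \[need repro = \w+\])?\s*$")
PATCHED = re.compile(r"patched crashed: (.+?)(?: \[need repro = \w+\])?\s*$")


def tally(text):
    """Return (base_counts, patched_counts) keyed by crash title."""
    base, patched = Counter(), Counter()
    for entry in ENTRY_START.split(text.replace("\n", " ")):
        entry = entry.strip()
        m = BASE.search(entry)
        if m:
            base[m.group(1)] += 1
        m = PATCHED.search(entry)
        if m:
            patched[m.group(1)] += 1
    return base, patched


if __name__ == "__main__":
    base, patched = tally(sys.stdin.read())
    for title in sorted(set(base) | set(patched)):
        note = "  (possibly patched-only)" if patched[title] and not base[title] else ""
        print(f"{title}: base={base[title]}, patched={patched[title]}{note}")
```

Usage would be something like `python3 tally_crashes.py < fuzz.log` (file names assumed). The counts and the patched-only set produced this way will not exactly match the tool's own end-of-run table, since syzkaller also aggregates crashes hit during reproduction attempts and earlier phases of the run; the sketch only illustrates the base-versus-patched comparison that leads to a line like "possibly patched-only: BUG: Bad page map".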