2025/08/13 20:47:03 extracted 303683 symbol hashes for base and 303683 for patched
2025/08/13 20:47:03 adding modified_functions to focus areas: ["nvmet_execute_disc_identify"]
2025/08/13 20:47:03 adding directly modified files to focus areas: ["rust/kernel/mm/virt.rs"]
2025/08/13 20:47:04 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/13 20:48:02 runner 4 connected
2025/08/13 20:48:03 runner 5 connected
2025/08/13 20:48:03 runner 1 connected
2025/08/13 20:48:03 runner 0 connected
2025/08/13 20:48:03 runner 8 connected
2025/08/13 20:48:09 runner 3 connected
2025/08/13 20:48:09 runner 2 connected
2025/08/13 20:48:09 runner 7 connected
2025/08/13 20:48:10 runner 9 connected
2025/08/13 20:48:10 runner 0 connected
2025/08/13 20:48:10 initializing coverage information...
2025/08/13 20:48:10 executor cover filter: 0 PCs
2025/08/13 20:48:10 runner 1 connected
2025/08/13 20:48:11 runner 3 connected
2025/08/13 20:48:16 discovered 7697 source files, 338543 symbols
2025/08/13 20:48:16 coverage filter: nvmet_execute_disc_identify: [nvmet_execute_disc_identify]
2025/08/13 20:48:16 coverage filter: rust/kernel/mm/virt.rs: []
2025/08/13 20:48:16 area "symbols": 15 PCs in the cover filter
2025/08/13 20:48:16 area "files": 0 PCs in the cover filter
2025/08/13 20:48:16 area "": 0 PCs in the cover filter
2025/08/13 20:48:16 executor cover filter: 0 PCs
2025/08/13 20:48:20 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file
or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open 
/sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory 
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf 
[perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/13 20:48:20 base: machine check complete 2025/08/13 20:48:21 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open 
/dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no 
such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no 
such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] 
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3832/8048
2025/08/13 20:48:21 new: machine check complete 2025/08/13 20:48:22 new: adding 78567 seeds 2025/08/13 20:50:23 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 20:51:22 runner 0 connected 2025/08/13 20:51:49 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 20:52:06 STAT { "buffer too small": 0, "candidate triage jobs": 57, "candidates": 74836, "comps overflows": 0, "corpus": 3653, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 2095, "coverage": 153258, "distributor delayed": 3966, "distributor undelayed": 3964, "distributor violated": 1, "exec candidate": 3731, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 9676, "exec total [new]": 16174, "exec triage": 11545, "executor restarts": 92, "fault jobs": 0, "fuzzer jobs": 57, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 155031, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3731, "no exec duration": 33288000000, "no exec requests": 224, "pending": 0, "prog exec time": 207, "reproducing": 0, "rpc recv": 787947568, "rpc sent": 87327768, "signal": 151492, "smash jobs": 0, "triage jobs": 0, "vm output": 2267742, "vm restarts [base]": 4, "vm restarts [new]": 9 } 2025/08/13 20:52:11 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 20:52:38 runner 4 connected 2025/08/13 20:53:01 runner 8 connected 2025/08/13 20:53:10 patched crashed: possible deadlock in input_event [need repro = true] 2025/08/13 20:53:10 scheduled a reproduction of 'possible deadlock in input_event' 2025/08/13 20:53:21 patched crashed: possible deadlock in input_event [need repro = true] 2025/08/13 20:53:21 scheduled a reproduction of 'possible deadlock in input_event' 2025/08/13 20:53:41 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/13 20:53:41 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/13 20:53:50 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/13 20:53:50 scheduled a reproduction of
'WARNING in xfrm_state_fini' 2025/08/13 20:53:56 patched crashed: possible deadlock in input_event [need repro = true] 2025/08/13 20:53:56 scheduled a reproduction of 'possible deadlock in input_event' 2025/08/13 20:53:59 runner 7 connected 2025/08/13 20:54:11 runner 1 connected 2025/08/13 20:54:30 runner 3 connected 2025/08/13 20:54:35 base crash: WARNING in xfrm_state_fini 2025/08/13 20:54:38 runner 9 connected 2025/08/13 20:54:45 runner 0 connected 2025/08/13 20:55:23 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/13 20:55:23 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/13 20:55:25 runner 3 connected 2025/08/13 20:55:33 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/13 20:55:33 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/13 20:56:11 runner 4 connected 2025/08/13 20:56:20 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 20:56:22 runner 8 connected 2025/08/13 20:56:23 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/13 20:56:23 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/13 20:57:00 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 20:57:06 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 70350, "comps overflows": 0, "corpus": 8114, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 4433, "coverage": 195467, "distributor delayed": 10893, "distributor undelayed": 10893, "distributor violated": 163, "exec candidate": 8217, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 23284, "exec total [new]": 35235, "exec triage": 25333, "executor restarts": 166, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 197077, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 8217, "no exec duration": 33288000000, "no exec requests": 224, "pending": 8, "prog exec time": 218, "reproducing": 0, "rpc recv": 1496489564, "rpc sent": 210787072, "signal": 192753, "smash jobs": 0, "triage jobs": 0, "vm output": 5349420, "vm restarts [base]": 5, "vm restarts [new]": 18 } 2025/08/13 20:57:07 base crash: KASAN: slab-use-after-free Write in __xfrm_state_delete 2025/08/13 20:57:08 runner 9 connected 2025/08/13 20:57:12 new: boot error: can't ssh into the instance 2025/08/13 20:57:12 new: boot error: can't ssh into the instance 2025/08/13 20:57:12 runner 1 connected 2025/08/13 20:57:49 runner 0 connected 2025/08/13 20:57:56 runner 3 connected 2025/08/13 20:58:00 runner 2 connected 2025/08/13 20:58:01 runner 6 connected 2025/08/13 20:58:53 patched crashed: KASAN: slab-use-after-free Read in l2cap_unregister_user [need repro = true] 2025/08/13 20:58:53 scheduled a reproduction of 'KASAN: slab-use-after-free Read in l2cap_unregister_user' 2025/08/13 20:59:33 base crash: WARNING in xfrm_state_fini 2025/08/13 20:59:41 runner 6 connected 2025/08/13 20:59:58 patched crashed: INFO: trying to register non-static key in ocfs2_dlm_shutdown [need repro = true] 2025/08/13 20:59:58 scheduled a 
reproduction of 'INFO: trying to register non-static key in ocfs2_dlm_shutdown' 2025/08/13 21:00:06 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:00:09 base crash: WARNING in xfrm_state_fini 2025/08/13 21:00:23 runner 2 connected 2025/08/13 21:00:48 runner 5 connected 2025/08/13 21:00:49 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:00:56 runner 7 connected 2025/08/13 21:00:56 runner 3 connected 2025/08/13 21:01:03 base crash: lost connection to test machine 2025/08/13 21:01:21 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/13 21:01:21 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/13 21:01:37 runner 1 connected 2025/08/13 21:02:06 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 63753, "comps overflows": 0, "corpus": 14637, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 8609, "coverage": 230557, "distributor delayed": 17114, "distributor undelayed": 17114, "distributor violated": 163, "exec candidate": 14814, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 33103, "exec total [new]": 65510, "exec triage": 45720, "executor restarts": 238, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 232334, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14814, "no exec duration": 33511000000, "no exec requests": 227, "pending": 11, "prog exec time": 358, "reproducing": 0, "rpc recv": 2414435988, "rpc sent": 361551568, "signal": 227142, "smash jobs": 0, "triage jobs": 0, "vm output": 8906587, "vm restarts [base]": 9, "vm restarts [new]": 26 } 2025/08/13 21:02:10 runner 3 connected 2025/08/13 21:02:21 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/13 21:02:21 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/13 21:02:52 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:03:03 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:03:10 runner 2 connected 2025/08/13 21:03:36 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:03:40 runner 9 connected 2025/08/13 21:03:45 runner 1 connected 2025/08/13 21:04:24 runner 0 connected 2025/08/13 21:04:53 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/13 21:04:57 patched crashed: KASAN: slab-use-after-free Write in __xfrm_state_delete [need repro = false] 2025/08/13 21:05:24 patched crashed: INFO: rcu detected stall in sys_bpf [need repro = false] 2025/08/13 21:05:42 runner 2 connected 2025/08/13 21:05:43 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:05:46 runner 6 connected 2025/08/13 21:06:32 runner 8 connected 2025/08/13 21:06:36 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:06:58 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:07:06 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 57049, "comps overflows": 0, "corpus": 21249, "corpus 
[files]": 0, "corpus [symbols]": 0, "cover overflows": 13204, "coverage": 253436, "distributor delayed": 24165, "distributor undelayed": 24165, "distributor violated": 164, "exec candidate": 21518, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 48318, "exec total [new]": 100192, "exec triage": 66684, "executor restarts": 287, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 255602, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 21518, "no exec duration": 34998000000, "no exec requests": 239, "pending": 12, "prog exec time": 269, "reproducing": 0, "rpc recv": 3157072360, "rpc sent": 527380048, "signal": 249324, "smash jobs": 0, "triage jobs": 0, "vm output": 11550505, "vm restarts [base]": 10, "vm restarts [new]": 33 } 2025/08/13 21:07:26 runner 1 connected 2025/08/13 21:07:46 runner 4 connected 2025/08/13 21:10:23 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/13 21:10:23 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 21:10:34 patched crashed: possible deadlock in ocfs2_page_mkwrite [need repro = true] 2025/08/13 21:10:34 scheduled a reproduction of 'possible deadlock in ocfs2_page_mkwrite' 2025/08/13 21:10:45 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:10:48 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:11:08 base: boot error: can't ssh into the instance 2025/08/13 21:11:11 runner 0 connected 2025/08/13 21:11:14 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:11:17 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:11:18 base crash: lost connection to test machine 2025/08/13 21:11:27 runner 7 connected 2025/08/13 21:11:30 runner 2 connected 2025/08/13 21:12:06 STAT { "buffer too small": 0, "candidate triage jobs": 25, "candidates": 50929, "comps overflows": 0, "corpus": 27199, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 17721, "coverage": 268217, "distributor delayed": 31025, "distributor undelayed": 31024, "distributor violated": 249, "exec candidate": 27638, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 65732, "exec total [new]": 136438, "exec triage": 86090, "executor restarts": 340, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 271027, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 27638, "no exec duration": 36191000000, "no exec requests": 255, "pending": 14, "prog exec time": 145, "reproducing": 0, "rpc recv": 3737594908, "rpc sent": 698291136, "signal": 263507, "smash jobs": 0, "triage jobs": 0, "vm output": 13741801, "vm restarts [base]": 10, "vm restarts [new]": 38 } 2025/08/13 21:12:07 runner 6 connected 2025/08/13 21:12:07 runner 3 connected 
2025/08/13 21:13:03 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:13:12 base crash: WARNING in xfrm_state_fini 2025/08/13 21:13:44 runner 9 connected 2025/08/13 21:14:01 runner 3 connected 2025/08/13 21:14:24 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:14:32 base crash: WARNING in xfrm_state_fini 2025/08/13 21:14:39 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/13 21:14:39 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 21:15:06 runner 4 connected 2025/08/13 21:15:13 patched crashed: possible deadlock in run_unpack_ex [need repro = true] 2025/08/13 21:15:13 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/08/13 21:15:21 runner 2 connected 2025/08/13 21:15:23 patched crashed: possible deadlock in run_unpack_ex [need repro = true] 2025/08/13 21:15:23 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/08/13 21:15:28 runner 1 connected 2025/08/13 21:15:30 new: boot error: can't ssh into the instance 2025/08/13 21:15:54 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/08/13 21:15:54 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/08/13 21:16:02 runner 6 connected 2025/08/13 21:16:05 runner 0 connected 2025/08/13 21:16:18 runner 5 connected 2025/08/13 21:16:29 patched crashed: possible deadlock in ocfs2_setattr [need repro = true] 2025/08/13 21:16:29 scheduled a reproduction of 'possible deadlock in ocfs2_setattr' 2025/08/13 21:16:42 patched crashed: possible deadlock in ocfs2_setattr [need repro = true] 2025/08/13 21:16:42 scheduled a reproduction of 'possible deadlock in ocfs2_setattr' 2025/08/13 21:16:43 runner 7 connected 2025/08/13 21:16:49 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 21:17:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 21:17:06 STAT { "buffer too small": 0, "candidate triage jobs": 24, "candidates": 46886, "comps overflows": 0, "corpus": 31188, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 20337, "coverage": 278517, "distributor delayed": 36702, "distributor undelayed": 36698, "distributor violated": 257, "exec candidate": 31681, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 78553, "exec total [new]": 159244, "exec triage": 98506, "executor restarts": 405, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 281366, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 31681, "no exec duration": 36366000000, "no exec requests": 259, "pending": 20, "prog exec time": 257, "reproducing": 0, "rpc recv": 4393648240, "rpc sent": 831369272, "signal": 273615, "smash jobs": 0, "triage jobs": 0, "vm output": 16419354, "vm restarts [base]": 13, "vm restarts [new]": 46 } 2025/08/13 21:17:11 runner 2 connected 2025/08/13 21:17:31 runner 4 connected 2025/08/13 21:17:32 runner 3 connected 2025/08/13 21:17:50 runner 0 connected 2025/08/13 21:18:37 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:18:42 base 
crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 21:18:44 patched crashed: possible deadlock in ocfs2_reserve_local_alloc_bits [need repro = true] 2025/08/13 21:18:44 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/08/13 21:18:49 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:19:23 runner 3 connected 2025/08/13 21:19:25 runner 9 connected 2025/08/13 21:19:25 runner 6 connected 2025/08/13 21:19:30 runner 1 connected 2025/08/13 21:20:39 new: boot error: can't ssh into the instance 2025/08/13 21:21:14 base: boot error: can't ssh into the instance 2025/08/13 21:21:19 new: boot error: can't ssh into the instance 2025/08/13 21:21:22 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/13 21:21:22 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/13 21:21:23 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/13 21:21:23 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/13 21:21:28 runner 3 connected 2025/08/13 21:21:59 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false] 2025/08/13 21:22:03 runner 0 connected 2025/08/13 21:22:04 runner 6 connected 2025/08/13 21:22:06 STAT { "buffer too small": 0, "candidate triage jobs": 26, "candidates": 42139, "comps overflows": 0, "corpus": 35869, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 23216, "coverage": 289507, "distributor delayed": 41864, "distributor undelayed": 41864, "distributor violated": 263, "exec candidate": 36428, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 91850, "exec total [new]": 186328, "exec triage": 113033, "executor restarts": 464, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 292427, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36427, "no exec duration": 36417000000, "no exec requests": 261, "pending": 23, "prog exec time": 259, "reproducing": 0, "rpc recv": 5056721608, "rpc sent": 988310360, "signal": 284465, "smash jobs": 0, "triage jobs": 0, "vm output": 20146087, "vm restarts [base]": 16, "vm restarts [new]": 54 } 2025/08/13 21:22:08 runner 8 connected 2025/08/13 21:22:10 runner 2 connected 2025/08/13 21:22:48 runner 9 connected 2025/08/13 21:23:00 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:23:49 runner 2 connected 2025/08/13 21:23:52 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/13 21:23:56 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/13 21:23:56 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/13 21:24:09 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/13 21:24:09 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/13 21:24:09 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/13 21:24:09 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/13 21:24:24 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:24:36 base crash: WARNING in io_ring_exit_work 2025/08/13 21:24:41 
runner 1 connected 2025/08/13 21:24:46 runner 7 connected 2025/08/13 21:24:50 runner 0 connected 2025/08/13 21:24:58 runner 5 connected 2025/08/13 21:25:06 runner 9 connected 2025/08/13 21:25:25 runner 2 connected 2025/08/13 21:25:35 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 21:26:17 runner 1 connected 2025/08/13 21:27:06 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 38291, "comps overflows": 0, "corpus": 39656, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 26770, "coverage": 297984, "distributor delayed": 45263, "distributor undelayed": 45263, "distributor violated": 263, "exec candidate": 40276, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 106115, "exec total [new]": 215532, "exec triage": 124845, "executor restarts": 541, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 300851, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40275, "no exec duration": 36425000000, "no exec requests": 262, "pending": 26, "prog exec time": 244, "reproducing": 0, "rpc recv": 5805496208, "rpc sent": 1192561904, "signal": 292876, "smash jobs": 0, "triage jobs": 0, "vm output": 24621062, "vm restarts [base]": 19, "vm restarts [new]": 62 } 2025/08/13 21:27:22 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/13 21:27:22 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/13 21:27:50 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 21:28:38 runner 3 connected 2025/08/13 21:29:01 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:29:12 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:29:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/13 21:29:36 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/13 21:29:47 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/13 21:29:47 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/13 21:29:51 runner 1 connected 2025/08/13 21:29:54 runner 6 connected 2025/08/13 21:30:07 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/13 21:30:10 base crash: KASAN: slab-use-after-free Read in xfrm_state_find 2025/08/13 21:30:18 runner 9 connected 2025/08/13 21:30:26 base crash: kernel BUG in jfs_evict_inode 2025/08/13 21:30:35 runner 2 connected 2025/08/13 21:30:56 runner 2 connected 2025/08/13 21:30:58 runner 0 connected 2025/08/13 21:31:00 patched crashed: possible deadlock in run_unpack_ex [need repro = true] 2025/08/13 21:31:00 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/08/13 21:31:11 patched crashed: possible deadlock in run_unpack_ex [need repro = true] 2025/08/13 21:31:11 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/08/13 21:31:15 runner 3 connected 2025/08/13 21:31:41 runner 5 connected 2025/08/13 21:31:48 patched crashed: possible deadlock in mark_as_free_ex [need repro = true] 2025/08/13 21:31:48 scheduled a reproduction of 'possible deadlock 
in mark_as_free_ex' 2025/08/13 21:32:00 runner 7 connected 2025/08/13 21:32:06 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 35991, "comps overflows": 0, "corpus": 41883, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 30810, "coverage": 302541, "distributor delayed": 47516, "distributor undelayed": 47516, "distributor violated": 263, "exec candidate": 42576, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 118038, "exec total [new]": 243044, "exec triage": 132235, "executor restarts": 597, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 305577, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42575, "no exec duration": 36428000000, "no exec requests": 263, "pending": 32, "prog exec time": 252, "reproducing": 0, "rpc recv": 6300989312, "rpc sent": 1368576112, "signal": 297360, "smash jobs": 0, "triage jobs": 0, "vm output": 27890036, "vm restarts [base]": 23, "vm restarts [new]": 68 } 2025/08/13 21:32:37 runner 4 connected 2025/08/13 21:34:01 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 21:34:03 patched crashed: possible deadlock in mark_as_free_ex [need repro = true] 2025/08/13 21:34:03 scheduled a reproduction of 'possible deadlock in mark_as_free_ex' 2025/08/13 21:34:10 base crash: possible deadlock in mark_as_free_ex 2025/08/13 21:34:45 runner 2 connected 2025/08/13 21:34:50 runner 6 connected 2025/08/13 21:34:53 runner 3 connected 2025/08/13 21:35:57 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 21:36:33 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false] 2025/08/13 21:36:46 runner 0 connected 2025/08/13 21:36:51 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:36:56 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 21:37:06 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 34486, "comps overflows": 0, "corpus": 43273, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 35726, "coverage": 305592, "distributor delayed": 48940, "distributor undelayed": 48940, "distributor violated": 263, "exec candidate": 44081, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 132554, "exec total [new]": 274989, "exec triage": 137053, "executor restarts": 640, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 308869, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44058, "no exec duration": 36440000000, "no exec requests": 265, "pending": 33, "prog exec time": 305, "reproducing": 0, "rpc recv": 6640589776, "rpc sent": 1542543064, "signal": 300364, "smash jobs": 0, "triage jobs": 0, "vm output": 31103096, "vm restarts [base]": 25, "vm restarts [new]": 71 } 2025/08/13 21:37:18 base 
crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 21:37:22 runner 0 connected 2025/08/13 21:37:28 new: boot error: can't ssh into the instance 2025/08/13 21:37:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/13 21:37:40 runner 7 connected 2025/08/13 21:37:45 runner 8 connected 2025/08/13 21:37:50 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/13 21:38:07 runner 3 connected 2025/08/13 21:38:17 runner 3 connected 2025/08/13 21:38:18 runner 1 connected 2025/08/13 21:38:39 runner 0 connected 2025/08/13 21:38:44 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:38:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 21:39:41 runner 5 connected 2025/08/13 21:39:50 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 21:40:07 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 21:40:20 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/13 21:40:20 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/13 21:40:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/13 21:40:33 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/13 21:40:40 runner 3 connected 2025/08/13 21:40:51 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false] 2025/08/13 21:41:04 runner 8 connected 2025/08/13 21:41:16 runner 0 connected 2025/08/13 21:41:30 runner 7 connected 2025/08/13 21:41:51 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/13 21:41:57 runner 2 connected 2025/08/13 21:42:06 STAT { "buffer too small": 0, "candidate triage jobs": 12, "candidates": 33641, "comps overflows": 0, "corpus": 44044, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 38427, "coverage": 307018, "distributor delayed": 49850, "distributor undelayed": 49850, "distributor violated": 263, "exec candidate": 44926, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 143710, "exec total [new]": 293315, "exec triage": 139601, "executor restarts": 697, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 310412, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44876, "no exec duration": 36448000000, "no exec requests": 266, "pending": 35, "prog exec time": 426, "reproducing": 0, "rpc recv": 7107794652, "rpc sent": 1662875720, "signal": 301922, "smash jobs": 0, "triage jobs": 0, "vm output": 33789507, "vm restarts [base]": 28, "vm restarts [new]": 81 } 2025/08/13 21:42:55 runner 2 connected 2025/08/13 21:43:00 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 21:43:22 patched crashed: INFO: task hung in corrupted [need repro = true] 2025/08/13 21:43:22 scheduled a reproduction of 'INFO: task hung in corrupted' 2025/08/13 21:43:41 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 21:43:57 runner 0 connected 2025/08/13 21:44:10 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 21:44:27 
runner 5 connected 2025/08/13 21:44:38 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 21:44:45 runner 1 connected 2025/08/13 21:45:14 runner 7 connected 2025/08/13 21:45:34 runner 3 connected 2025/08/13 21:46:00 patched crashed: INFO: task hung in tun_chr_close [need repro = true] 2025/08/13 21:46:00 scheduled a reproduction of 'INFO: task hung in tun_chr_close' 2025/08/13 21:47:06 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 17657, "comps overflows": 0, "corpus": 44520, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 41428, "coverage": 308042, "distributor delayed": 50399, "distributor undelayed": 50399, "distributor violated": 266, "exec candidate": 60910, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 151457, "exec total [new]": 312234, "exec triage": 141252, "executor restarts": 737, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 311577, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45383, "no exec duration": 36453000000, "no exec requests": 267, "pending": 37, "prog exec time": 281, "reproducing": 0, "rpc recv": 7383233088, "rpc sent": 1767654304, "signal": 302924, "smash jobs": 0, "triage jobs": 0, "vm output": 35820834, "vm restarts [base]": 30, "vm restarts [new]": 85 } 2025/08/13 21:47:07 runner 3 connected 2025/08/13 21:47:59 base crash: no output from test machine 2025/08/13 21:49:04 new: boot error: can't ssh into the instance 2025/08/13 21:49:04 runner 0 connected 2025/08/13 21:49:06 triaged 92.1% of the corpus 2025/08/13 21:49:06 starting bug reproductions 2025/08/13 21:49:06 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/13 21:49:06 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "KASAN: slab-use-after-free Read in __xfrm_state_lookup" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "unregister_netdevice: waiting for DEV to become free" aborted: it's no longer needed 2025/08/13 21:49:06 reproduction of "unregister_netdevice: waiting for DEV to become free" aborted: it's no longer needed 2025/08/13 21:49:06 start reproducing 'possible deadlock in input_event' 2025/08/13 21:49:06 start reproducing 'possible deadlock in ntfs_fiemap' 2025/08/13 21:49:06 start reproducing 'general protection fault in pcl818_ai_cancel' 2025/08/13 21:49:06 start reproducing 'INFO: trying to register non-static key in ocfs2_dlm_shutdown' 2025/08/13 21:49:06 start reproducing 'KASAN: slab-use-after-free Read in l2cap_unregister_user' 2025/08/13 21:49:06 start reproducing 'possible deadlock in ocfs2_page_mkwrite' 2025/08/13 21:49:06 start reproducing 'possible deadlock in run_unpack_ex' 2025/08/13 21:50:13 base crash: KASAN: 
slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 21:50:21 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 21:50:54 base crash: KASAN: slab-use-after-free Write in __xfrm_state_delete 2025/08/13 21:51:10 runner 1 connected 2025/08/13 21:51:18 runner 2 connected 2025/08/13 21:51:51 runner 0 connected 2025/08/13 21:52:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 21, "prog exec time": 0, "reproducing": 7, "rpc recv": 7559051948, "rpc sent": 1828935528, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 37029876, "vm restarts [base]": 34, "vm restarts [new]": 86 } 2025/08/13 21:56:17 base crash: no output from test machine 2025/08/13 21:56:30 base crash: no output from test machine 2025/08/13 21:56:34 base crash: no output from test machine 2025/08/13 21:56:50 base crash: no output from test machine 2025/08/13 21:57:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 21, "prog exec time": 0, "reproducing": 7, "rpc recv": 7559051948, "rpc sent": 1828935528, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 40267904, "vm restarts [base]": 34, "vm restarts [new]": 86 } 2025/08/13 21:57:32 runner 3 connected 2025/08/13 21:57:47 runner 0 connected 2025/08/13 21:58:32 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 21:59:12 new: boot error: can't ssh into the instance 2025/08/13 21:59:12 new: boot 
error: can't ssh into the instance 2025/08/13 22:02:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 21, "prog exec time": 0, "reproducing": 7, "rpc recv": 7620819972, "rpc sent": 1828936088, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 43039606, "vm restarts [base]": 36, "vm restarts [new]": 86 } 2025/08/13 22:02:22 new: boot error: can't ssh into the instance 2025/08/13 22:02:31 base crash: no output from test machine 2025/08/13 22:02:47 base crash: no output from test machine 2025/08/13 22:02:52 repro finished 'possible deadlock in ocfs2_page_mkwrite', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:02:52 failed repro for "possible deadlock in ocfs2_page_mkwrite", err=%!s() 2025/08/13 22:02:52 start reproducing 'possible deadlock in ocfs2_setattr' 2025/08/13 22:02:52 "possible deadlock in ocfs2_page_mkwrite": saved crash log into 1755122572.crash.log 2025/08/13 22:02:52 "possible deadlock in ocfs2_page_mkwrite": saved repro log into 1755122572.repro.log 2025/08/13 22:02:56 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:02:56 failed repro for "possible deadlock in ntfs_fiemap", err=%!s() 2025/08/13 22:02:56 start reproducing 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/08/13 22:02:56 "possible deadlock in ntfs_fiemap": saved crash log into 1755122576.crash.log 2025/08/13 22:02:56 "possible deadlock in ntfs_fiemap": saved repro log into 1755122576.repro.log 2025/08/13 22:03:02 repro finished 'general protection fault in pcl818_ai_cancel', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:03:02 failed repro for "general protection fault in pcl818_ai_cancel", err=%!s() 2025/08/13 22:03:02 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed 2025/08/13 22:03:02 reproduction of "WARNING in io_ring_exit_work" aborted: it's no longer needed 2025/08/13 22:03:02 reproduction of "WARNING in io_ring_exit_work" aborted: it's no longer needed 2025/08/13 22:03:02 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/13 22:03:02 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/13 22:03:02 reproduction of "possible deadlock in mark_as_free_ex" aborted: it's no longer needed 2025/08/13 22:03:02 reproduction of "possible deadlock in mark_as_free_ex" aborted: it's no longer needed 2025/08/13 22:03:02 start reproducing 'WARNING in 
dbAdjTree' 2025/08/13 22:03:02 "general protection fault in pcl818_ai_cancel": saved crash log into 1755122582.crash.log 2025/08/13 22:03:02 "general protection fault in pcl818_ai_cancel": saved repro log into 1755122582.repro.log 2025/08/13 22:03:18 repro finished 'possible deadlock in input_event', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:03:18 failed repro for "possible deadlock in input_event", err=%!s() 2025/08/13 22:03:18 start reproducing 'possible deadlock in ocfs2_init_acl' 2025/08/13 22:03:18 "possible deadlock in input_event": saved crash log into 1755122598.crash.log 2025/08/13 22:03:18 "possible deadlock in input_event": saved repro log into 1755122598.repro.log 2025/08/13 22:03:29 runner 3 connected 2025/08/13 22:03:37 runner 0 connected 2025/08/13 22:06:11 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:06:23 base: boot error: can't ssh into the instance 2025/08/13 22:06:35 base: boot error: can't ssh into the instance 2025/08/13 22:07:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 7682587996, "rpc sent": 1828936648, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 45658280, "vm restarts [base]": 38, "vm restarts [new]": 86 } 2025/08/13 22:07:20 runner 2 connected 2025/08/13 22:07:33 runner 1 connected 2025/08/13 22:08:29 base crash: no output from test machine 2025/08/13 22:08:36 base crash: no output from test machine 2025/08/13 22:08:40 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:09:26 runner 3 connected 2025/08/13 22:09:33 runner 0 connected 2025/08/13 22:09:41 new: boot error: can't ssh into the instance 2025/08/13 22:11:25 new: boot error: can't ssh into the instance 2025/08/13 22:12:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, 
"exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 7806124052, "rpc sent": 1828937768, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 47711415, "vm restarts [base]": 42, "vm restarts [new]": 86 } 2025/08/13 22:12:19 base crash: no output from test machine 2025/08/13 22:12:33 base crash: no output from test machine 2025/08/13 22:13:16 runner 2 connected 2025/08/13 22:14:15 new: boot error: can't ssh into the instance 2025/08/13 22:14:25 base crash: no output from test machine 2025/08/13 22:14:32 base crash: no output from test machine 2025/08/13 22:15:22 runner 3 connected 2025/08/13 22:15:29 runner 0 connected 2025/08/13 22:15:59 repro finished 'possible deadlock in ocfs2_setattr', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:15:59 start reproducing 'INFO: task hung in corrupted' 2025/08/13 22:15:59 failed repro for "possible deadlock in ocfs2_setattr", err=%!s() 2025/08/13 22:15:59 "possible deadlock in ocfs2_setattr": saved crash log into 1755123359.crash.log 2025/08/13 22:15:59 "possible deadlock in ocfs2_setattr": saved repro log into 1755123359.repro.log 2025/08/13 22:16:17 new: boot error: can't ssh into the instance 2025/08/13 22:16:28 repro finished 'possible deadlock in ocfs2_reserve_local_alloc_bits', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:16:28 failed repro for "possible deadlock in ocfs2_reserve_local_alloc_bits", err=%!s() 2025/08/13 22:16:28 start reproducing 'INFO: task hung in tun_chr_close' 2025/08/13 22:16:28 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved crash log into 1755123388.crash.log 2025/08/13 22:16:28 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved repro log into 1755123388.repro.log 2025/08/13 22:17:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 8, "prog exec time": 0, "reproducing": 7, "rpc recv": 7898776092, "rpc sent": 1828938608, 
"signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 50080889, "vm restarts [base]": 45, "vm restarts [new]": 86 } 2025/08/13 22:17:17 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:17:40 repro finished 'possible deadlock in ocfs2_init_acl', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:17:40 failed repro for "possible deadlock in ocfs2_init_acl", err=%!s() 2025/08/13 22:17:40 start reproducing 'possible deadlock in input_event' 2025/08/13 22:17:40 "possible deadlock in ocfs2_init_acl": saved crash log into 1755123460.crash.log 2025/08/13 22:17:40 "possible deadlock in ocfs2_init_acl": saved repro log into 1755123460.repro.log 2025/08/13 22:18:15 base crash: no output from test machine 2025/08/13 22:19:12 runner 2 connected 2025/08/13 22:19:47 new: boot error: can't ssh into the instance 2025/08/13 22:20:22 base crash: no output from test machine 2025/08/13 22:20:29 base crash: no output from test machine 2025/08/13 22:21:13 runner 3 connected 2025/08/13 22:21:18 runner 0 connected 2025/08/13 22:21:30 new: boot error: can't ssh into the instance 2025/08/13 22:22:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 7, "prog exec time": 0, "reproducing": 7, "rpc recv": 7991428132, "rpc sent": 1828939448, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 51514553, "vm restarts [base]": 48, "vm restarts [new]": 86 } 2025/08/13 22:22:39 base: boot error: can't ssh into the instance 2025/08/13 22:23:29 runner 1 connected 2025/08/13 22:24:12 base crash: no output from test machine 2025/08/13 22:24:50 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:25:00 runner 2 connected 2025/08/13 22:26:13 base crash: no output from test machine 2025/08/13 22:26:18 base crash: no output from test machine 2025/08/13 22:26:29 new: boot error: can't ssh into the instance 2025/08/13 22:27:02 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 
22:27:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 7, "prog exec time": 0, "reproducing": 7, "rpc recv": 8053196164, "rpc sent": 1828940008, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 52513338, "vm restarts [base]": 50, "vm restarts [new]": 86 } 2025/08/13 22:27:07 runner 0 connected 2025/08/13 22:27:11 runner 3 connected 2025/08/13 22:28:28 base crash: no output from test machine 2025/08/13 22:29:10 new: boot error: can't ssh into the instance 2025/08/13 22:29:25 runner 1 connected 2025/08/13 22:30:00 base crash: no output from test machine 2025/08/13 22:30:34 new: boot error: can't ssh into the instance 2025/08/13 22:30:50 runner 2 connected 2025/08/13 22:31:46 repro finished 'KASAN: slab-use-after-free Read in l2cap_unregister_user', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:31:46 start reproducing 'possible deadlock in ocfs2_setattr' 2025/08/13 22:31:46 failed repro for "KASAN: slab-use-after-free Read in l2cap_unregister_user", err=%!s() 2025/08/13 22:31:46 "KASAN: slab-use-after-free Read in l2cap_unregister_user": saved crash log into 1755124306.crash.log 2025/08/13 22:31:46 "KASAN: slab-use-after-free Read in l2cap_unregister_user": saved repro log into 1755124306.repro.log 2025/08/13 22:32:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 6, "prog exec time": 0, "reproducing": 7, "rpc recv": 8176732220, "rpc sent": 1828941128, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 55363659, "vm restarts [base]": 54, "vm restarts [new]": 86 } 2025/08/13 22:32:07 base crash: no output 
from test machine 2025/08/13 22:32:11 base crash: no output from test machine 2025/08/13 22:33:03 runner 0 connected 2025/08/13 22:33:08 runner 3 connected 2025/08/13 22:33:48 new: boot error: can't ssh into the instance 2025/08/13 22:34:24 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:34:24 base crash: no output from test machine 2025/08/13 22:35:19 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:35:21 runner 1 connected 2025/08/13 22:35:49 base crash: no output from test machine 2025/08/13 22:36:19 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:36:48 runner 2 connected 2025/08/13 22:36:52 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:37:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 6, "prog exec time": 0, "reproducing": 7, "rpc recv": 8300268276, "rpc sent": 1828942248, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 57092876, "vm restarts [base]": 58, "vm restarts [new]": 86 } 2025/08/13 22:37:55 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:38:03 base crash: no output from test machine 2025/08/13 22:38:07 base crash: no output from test machine 2025/08/13 22:38:25 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:39:00 runner 0 connected 2025/08/13 22:39:04 runner 3 connected 
2025/08/13 22:39:45 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:40:21 base crash: no output from test machine 2025/08/13 22:40:56 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:41:18 runner 1 connected 2025/08/13 22:41:47 base crash: no output from test machine 2025/08/13 22:42:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 6, "prog exec time": 0, "reproducing": 7, "rpc recv": 8392920316, "rpc sent": 1828943088, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 58772453, "vm restarts [base]": 61, "vm restarts [new]": 86 } 2025/08/13 22:42:18 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:42:27 new: boot error: can't ssh into the instance 2025/08/13 22:42:36 new: boot error: can't ssh into the instance 2025/08/13 22:42:44 runner 2 connected 2025/08/13 22:43:38 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:43:59 base crash: no output from test machine 2025/08/13 22:44:04 base crash: no output from test machine 2025/08/13 22:44:04 repro finished 'possible deadlock in ocfs2_setattr', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:44:04 failed repro for "possible deadlock in ocfs2_setattr", err=%!s() 2025/08/13 22:44:04 start reproducing 'general protection fault in pcl818_ai_cancel' 2025/08/13 22:44:04 "possible deadlock in ocfs2_setattr": saved crash log into 1755125044.crash.log 2025/08/13 22:44:04 "possible deadlock in ocfs2_setattr": saved repro log into 1755125044.repro.log 2025/08/13 22:44:12 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: 
fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:45:00 runner 3 connected 2025/08/13 22:45:41 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:46:17 base crash: no output from test machine 2025/08/13 22:47:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 16, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 5, "prog exec time": 0, "reproducing": 7, "rpc recv": 8454688348, "rpc sent": 1828943648, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 59946244, "vm restarts [base]": 63, "vm restarts [new]": 86 } 2025/08/13 22:47:11 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:47:16 runner 1 connected 2025/08/13 22:47:43 base crash: no output from test machine 2025/08/13 22:48:32 runner 2 connected 2025/08/13 22:49:51 new: boot error: can't ssh into the instance 2025/08/13 22:49:58 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:50:00 base crash: no output from test machine 2025/08/13 22:50:57 runner 3 connected 2025/08/13 22:51:30 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:52:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, 
"max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 19, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 5, "prog exec time": 0, "reproducing": 7, "rpc recv": 8547340396, "rpc sent": 1828944488, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 60742992, "vm restarts [base]": 66, "vm restarts [new]": 86 } 2025/08/13 22:52:16 base crash: no output from test machine 2025/08/13 22:52:33 new: boot error: can't ssh into the instance 2025/08/13 22:53:04 runner 1 connected 2025/08/13 22:53:31 base crash: no output from test machine 2025/08/13 22:54:00 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:54:05 base: boot error: can't ssh into the instance 2025/08/13 22:54:20 runner 2 connected 2025/08/13 22:54:45 new: boot error: can't ssh into the instance 2025/08/13 22:54:54 runner 0 connected 2025/08/13 22:55:57 base crash: no output from test machine 2025/08/13 22:56:32 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:56:40 new: boot error: can't ssh into the instance 2025/08/13 22:56:46 runner 3 connected 2025/08/13 22:57:06 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 6211, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43464, "coverage": 308255, "distributor delayed": 50560, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 72356, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159762, "exec total [new]": 324206, "exec triage": 141777, "executor restarts": 747, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311832, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 21, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45519, "no exec duration": 36460000000, "no exec requests": 268, "pending": 5, "prog exec time": 0, "reproducing": 7, "rpc recv": 8670876452, "rpc sent": 1828945608, "signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 61861368, "vm restarts [base]": 70, "vm restarts [new]": 86 } 2025/08/13 22:57:56 repro finished 'INFO: task hung in tun_chr_close', repro=true crepro=false desc='lost connection to test machine' hub=false from_dashboard=false 2025/08/13 22:57:56 found repro for "lost connection to test machine" (orig title: "INFO: task hung in tun_chr_close", reliability: 1), took 41.46 minutes 2025/08/13 22:57:56 start reproducing 'possible deadlock in ocfs2_init_acl' 2025/08/13 22:57:56 "lost connection to test machine": saved crash log into 1755125876.crash.log 2025/08/13 22:57:56 "lost connection to test machine": saved repro log into 
1755125876.repro.log 2025/08/13 22:58:03 base crash: no output from test machine 2025/08/13 22:58:42 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:58:46 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 22:58:52 runner 1 connected 2025/08/13 22:59:20 base crash: no output from test machine 2025/08/13 22:59:34 attempt #0 to run "lost connection to test machine" on base: crashed with lost connection to test machine 2025/08/13 22:59:34 crashes both: lost connection to test machine / lost connection to test machine 2025/08/13 22:59:56 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 22:59:56 failed repro for "INFO: task hung in corrupted", err=%!s() 2025/08/13 22:59:56 "INFO: task hung in corrupted": saved crash log into 1755125996.crash.log 2025/08/13 22:59:56 "INFO: task hung in corrupted": saved repro log into 1755125996.repro.log 2025/08/13 23:00:03 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:00:04 new: boot error: can't ssh into the instance 2025/08/13 23:00:09 runner 2 connected 2025/08/13 23:00:17 new: boot error: can't ssh into the instance 2025/08/13 23:00:23 runner 0 connected 2025/08/13 23:00:47 runner 0 connected 2025/08/13 23:01:15 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:01:46 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:02:06 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 5466, "comps overflows": 0, "corpus": 44620, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43605, "coverage": 308255, "distributor delayed": 50567, "distributor undelayed": 50560, "distributor violated": 266, "exec candidate": 73101, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 160447, "exec total [new]": 324955, "exec triage": 141779, "executor restarts": 751, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 311837, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 23, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45524, "no exec duration": 293427000000, "no exec requests": 890, "pending": 5, "prog exec time": 197, "reproducing": 6, "rpc recv": 8794886300, "rpc sent": 1834651896, 
"signal": 303123, "smash jobs": 0, "triage jobs": 0, "vm output": 63359687, "vm restarts [base]": 73, "vm restarts [new]": 87 } 2025/08/13 23:02:39 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:02:39 new: boot error: can't ssh into the instance 2025/08/13 23:03:19 runner 1 connected 2025/08/13 23:06:10 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/08/13 23:06:18 new: boot error: can't ssh into the instance 2025/08/13 23:07:06 STAT { "buffer too small": 0, "candidate triage jobs": 13, "candidates": 443, "comps overflows": 0, "corpus": 44657, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 44457, "coverage": 308333, "distributor delayed": 50601, "distributor undelayed": 50588, "distributor violated": 272, "exec candidate": 78124, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 165606, "exec total [new]": 330114, "exec triage": 141911, "executor restarts": 761, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 311924, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 23, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45571, "no exec duration": 1674460000000, "no exec requests": 5112, "pending": 5, "prog exec time": 209, "reproducing": 6, "rpc recv": 8833302204, "rpc sent": 1862860712, "signal": 303202, "smash jobs": 0, "triage jobs": 0, "vm output": 68010475, "vm restarts [base]": 73, "vm restarts [new]": 88 } 2025/08/13 23:09:34 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 23:09:53 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:10:09 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:10:24 runner 0 connected 2025/08/13 23:10:57 runner 1 connected 2025/08/13 23:11:10 repro finished 'possible deadlock in ocfs2_init_acl', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:11:10 failed repro for "possible deadlock in ocfs2_init_acl", err=%!s() 2025/08/13 23:11:10 "possible deadlock in ocfs2_init_acl": saved crash log into 1755126670.crash.log 2025/08/13 23:11:10 "possible deadlock in ocfs2_init_acl": saved repro log into 1755126670.repro.log 2025/08/13 23:11:15 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:12:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 3, "corpus": 44682, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 44741, "coverage": 308372, "distributor delayed": 50604, "distributor undelayed": 50604, "distributor violated": 288, "exec candidate": 78567, "exec collide": 216, "exec fuzz": 422, "exec gen": 20, "exec hints": 178, "exec inject": 0, "exec 
minimize": 122, "exec retries": 20, "exec seeds": 21, "exec smash": 100, "exec total [base]": 167221, "exec total [new]": 331740, "exec triage": 142015, "executor restarts": 766, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 4, "max signal": 312047, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 91, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45593, "no exec duration": 2386657000000, "no exec requests": 6668, "pending": 5, "prog exec time": 797, "reproducing": 5, "rpc recv": 8901823076, "rpc sent": 1898108832, "signal": 303240, "smash jobs": 3, "triage jobs": 3, "vm output": 72356545, "vm restarts [base]": 74, "vm restarts [new]": 89 } 2025/08/13 23:12:30 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:13:08 runner 2 connected 2025/08/13 23:13:49 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:14:04 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/13 23:14:55 runner 2 connected 2025/08/13 23:15:27 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 23:16:16 new: boot error: can't ssh into the instance 2025/08/13 23:16:18 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:16:19 base crash: lost connection to test machine 2025/08/13 23:16:23 runner 3 connected 2025/08/13 23:17:05 runner 0 connected 2025/08/13 23:17:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 26, "corpus": 44708, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45053, "coverage": 308412, "distributor delayed": 50623, "distributor undelayed": 50622, "distributor violated": 288, "exec candidate": 78567, "exec collide": 395, "exec fuzz": 764, "exec gen": 41, "exec hints": 394, "exec inject": 0, "exec minimize": 575, "exec retries": 20, "exec seeds": 92, "exec smash": 359, "exec total [base]": 168838, "exec total [new]": 333355, "exec triage": 142089, "executor restarts": 775, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 22, "max signal": 312091, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 353, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45622, "no exec duration": 3040298000000, "no exec requests": 8078, "pending": 5, "prog exec time": 774, "reproducing": 5, "rpc recv": 9017321336, "rpc sent": 1949364592, "signal": 303280, "smash jobs": 25, "triage jobs": 1, "vm output": 77438708, "vm restarts [base]": 75, "vm restarts [new]": 92 } 2025/08/13 23:17:13 new: boot error: can't ssh into the instance 2025/08/13 23:17:15 runner 1 connected 2025/08/13 23:17:16 runner 1 connected 2025/08/13 23:17:44 new: boot error: can't ssh into the instance 2025/08/13 23:18:20 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:19:01 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:19:15 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:19:23 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20000: connect: connection refused 2025/08/13 23:19:23 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20000: connect: connection refused 2025/08/13 23:19:31 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9633: connect: connection refused 2025/08/13 23:19:31 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9633: connect: connection refused 2025/08/13 23:19:33 base crash: lost connection to test machine 2025/08/13 23:19:35 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:59718: connect: connection refused 2025/08/13 23:19:35 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:59718: connect: connection refused 2025/08/13 23:19:41 base crash: lost connection to test machine 2025/08/13 23:19:45 base crash: lost connection to test machine 2025/08/13 23:19:49 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48265: connect: connection refused 2025/08/13 23:19:49 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48265: connect: connection refused 2025/08/13 23:19:58 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57796: connect: connection refused 2025/08/13 23:19:58 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57796: connect: connection refused 2025/08/13 23:19:59 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:20:07 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:20:08 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:20:14 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31587: connect: connection refused 2025/08/13 23:20:14 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31587: connect: connection refused 2025/08/13 23:20:24 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:20:29 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:20:29 runner 0 connected 2025/08/13 23:20:30 runner 3 connected 2025/08/13 23:20:48 runner 1 connected 2025/08/13 23:20:55 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/13 23:20:58 runner 
2 connected 2025/08/13 23:21:20 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28360: connect: connection refused 2025/08/13 23:21:20 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28360: connect: connection refused 2025/08/13 23:21:21 runner 0 connected 2025/08/13 23:21:24 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6318: connect: connection refused 2025/08/13 23:21:24 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6318: connect: connection refused 2025/08/13 23:21:25 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:11459: connect: connection refused 2025/08/13 23:21:25 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:11459: connect: connection refused 2025/08/13 23:21:25 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2528: connect: connection refused 2025/08/13 23:21:25 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2528: connect: connection refused 2025/08/13 23:21:30 base crash: lost connection to test machine 2025/08/13 23:21:34 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:21:34 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:21:34 repro finished 'WARNING in dbAdjTree', repro=true crepro=false desc='WARNING in dbAdjTree' hub=false from_dashboard=false 2025/08/13 23:21:34 found repro for "WARNING in dbAdjTree" (orig title: "-SAME-", reliability: 1), took 78.53 minutes 2025/08/13 23:21:34 start reproducing 'WARNING in dbAdjTree' 2025/08/13 23:21:34 "WARNING in dbAdjTree": saved crash log into 1755127294.crash.log 2025/08/13 23:21:34 "WARNING in dbAdjTree": saved repro log into 1755127294.repro.log 2025/08/13 23:21:35 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:21:35 base crash: lost connection to test machine 2025/08/13 23:21:44 runner 2 connected 2025/08/13 23:21:58 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:22:04 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:55959: connect: connection refused 2025/08/13 23:22:04 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:55959: connect: connection refused 2025/08/13 23:22:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 44, "corpus": 44723, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45531, "coverage": 308437, "distributor delayed": 50668, "distributor undelayed": 50665, "distributor violated": 288, "exec candidate": 78567, "exec collide": 632, "exec fuzz": 1240, "exec gen": 57, "exec hints": 656, "exec inject": 0, "exec minimize": 937, "exec retries": 20, "exec seeds": 143, "exec smash": 773, "exec total [base]": 170697, "exec total [new]": 335267, "exec triage": 142174, "executor restarts": 795, "fault jobs": 0, "fuzzer jobs": 56, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 0, "hints jobs": 21, "max signal": 312196, "minimize: array": 0, 
"minimize: buffer": 0, "minimize: call": 571, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45655, "no exec duration": 3260712000000, "no exec requests": 8696, "pending": 4, "prog exec time": 1457, "reproducing": 5, "rpc recv": 9322909976, "rpc sent": 2026829128, "signal": 303305, "smash jobs": 25, "triage jobs": 10, "vm output": 80745390, "vm restarts [base]": 79, "vm restarts [new]": 96 } 2025/08/13 23:22:14 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:22:26 runner 3 connected 2025/08/13 23:22:31 runner 1 connected 2025/08/13 23:22:32 runner 2 connected 2025/08/13 23:22:32 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:55488: connect: connection refused 2025/08/13 23:22:32 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:55488: connect: connection refused 2025/08/13 23:22:42 base crash: lost connection to test machine 2025/08/13 23:22:57 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:40539: connect: connection refused 2025/08/13 23:22:57 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:40539: connect: connection refused 2025/08/13 23:22:57 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:58581: connect: connection refused 2025/08/13 23:22:57 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:58581: connect: connection refused 2025/08/13 23:23:04 attempt #0 to run "WARNING in dbAdjTree" on base: crashed with WARNING in dbAdjTree 2025/08/13 23:23:04 crashes both: WARNING in dbAdjTree / WARNING in dbAdjTree 2025/08/13 23:23:07 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:23:07 base crash: lost connection to test machine 2025/08/13 23:23:10 runner 0 connected 2025/08/13 23:23:22 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/fsntfs.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:23:31 runner 2 connected 2025/08/13 23:23:32 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20589: connect: connection refused 2025/08/13 23:23:32 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20589: connect: connection refused 2025/08/13 23:23:36 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39930: connect: connection refused 2025/08/13 23:23:36 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39930: connect: connection refused 2025/08/13 23:23:42 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:23:46 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:23:58 runner 2 connected 2025/08/13 23:24:01 runner 0 connected 2025/08/13 23:24:01 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2275: connect: connection refused 2025/08/13 23:24:01 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2275: connect: connection refused 2025/08/13 23:24:04 runner 3 connected 2025/08/13 23:24:11 base crash: lost connection to test machine 2025/08/13 23:24:22 VM-2 failed reading regs: qemu hmp command 'info registers': dial 
tcp 127.0.0.1:28835: connect: connection refused 2025/08/13 23:24:22 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28835: connect: connection refused 2025/08/13 23:24:32 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:24:35 runner 0 connected 2025/08/13 23:24:39 runner 1 connected 2025/08/13 23:24:39 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:25:01 runner 2 connected 2025/08/13 23:25:02 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42176: connect: connection refused 2025/08/13 23:25:02 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42176: connect: connection refused 2025/08/13 23:25:12 base crash: lost connection to test machine 2025/08/13 23:25:13 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20874: connect: connection refused 2025/08/13 23:25:13 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20874: connect: connection refused 2025/08/13 23:25:16 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25649: connect: connection refused 2025/08/13 23:25:16 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25649: connect: connection refused 2025/08/13 23:25:23 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:25:26 base crash: lost connection to test machine 2025/08/13 23:25:29 runner 2 connected 2025/08/13 23:26:00 reproducing crash 'possible deadlock in run_unpack_ex': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ntfs3/run.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 23:26:00 runner 3 connected 2025/08/13 23:26:12 runner 1 connected 2025/08/13 23:26:15 runner 0 connected 2025/08/13 23:26:28 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20776: connect: connection refused 2025/08/13 23:26:28 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20776: connect: connection refused 2025/08/13 23:26:38 base crash: lost connection to test machine 2025/08/13 23:27:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 64, "corpus": 44730, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45682, "coverage": 308486, "distributor delayed": 50694, "distributor undelayed": 50694, "distributor violated": 288, "exec candidate": 78567, "exec collide": 750, "exec fuzz": 1486, "exec gen": 69, "exec hints": 807, "exec inject": 0, "exec minimize": 1182, "exec retries": 20, "exec seeds": 165, "exec smash": 977, "exec total [base]": 171402, "exec total [new]": 336329, "exec triage": 142227, "executor restarts": 836, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 16, "max signal": 312225, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 721, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45672, "no exec duration": 3270689000000, "no exec requests": 8711, "pending": 4, "prog exec time": 
575, "reproducing": 5, "rpc recv": 9803855556, "rpc sent": 2071715368, "signal": 303353, "smash jobs": 17, "triage jobs": 8, "vm output": 83872457, "vm restarts [base]": 86, "vm restarts [new]": 104 } 2025/08/13 23:27:19 new: boot error: can't ssh into the instance 2025/08/13 23:27:23 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:55231: connect: connection refused 2025/08/13 23:27:23 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:55231: connect: connection refused 2025/08/13 23:27:28 runner 2 connected 2025/08/13 23:27:33 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:27:36 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:58771: connect: connection refused 2025/08/13 23:27:36 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:58771: connect: connection refused 2025/08/13 23:27:46 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:27:47 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28531: connect: connection refused 2025/08/13 23:27:47 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28531: connect: connection refused 2025/08/13 23:27:51 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24261: connect: connection refused 2025/08/13 23:27:51 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24261: connect: connection refused 2025/08/13 23:27:57 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:28:01 base crash: lost connection to test machine 2025/08/13 23:28:37 runner 0 connected 2025/08/13 23:28:43 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:15806: connect: connection refused 2025/08/13 23:28:43 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:15806: connect: connection refused 2025/08/13 23:28:46 runner 2 connected 2025/08/13 23:28:50 runner 3 connected 2025/08/13 23:28:53 base crash: lost connection to test machine 2025/08/13 23:29:17 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:7890: connect: connection refused 2025/08/13 23:29:17 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:7890: connect: connection refused 2025/08/13 23:29:20 new: boot error: can't ssh into the instance 2025/08/13 23:29:27 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:29:27 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47008: connect: connection refused 2025/08/13 23:29:27 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47008: connect: connection refused 2025/08/13 23:29:37 base crash: lost connection to test machine 2025/08/13 23:29:50 base: boot error: can't ssh into the instance 2025/08/13 23:30:16 runner 2 connected 2025/08/13 23:30:27 runner 0 connected 2025/08/13 23:30:40 runner 1 connected 2025/08/13 23:32:03 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:32:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 77, "corpus": 44736, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45866, "coverage": 308496, "distributor delayed": 50752, "distributor undelayed": 50723, "distributor violated": 288, "exec candidate": 78567, "exec 
collide": 914, "exec fuzz": 1778, "exec gen": 85, "exec hints": 1002, "exec inject": 0, "exec minimize": 1290, "exec retries": 20, "exec seeds": 195, "exec smash": 1220, "exec total [base]": 172901, "exec total [new]": 337410, "exec triage": 142260, "executor restarts": 874, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 7, "max signal": 312292, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 790, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45702, "no exec duration": 3370386000000, "no exec requests": 8925, "pending": 4, "prog exec time": 849, "reproducing": 5, "rpc recv": 10033135084, "rpc sent": 2118746664, "signal": 303361, "smash jobs": 9, "triage jobs": 29, "vm output": 87699748, "vm restarts [base]": 90, "vm restarts [new]": 107 } 2025/08/13 23:32:33 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:37593: connect: connection refused 2025/08/13 23:32:33 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:37593: connect: connection refused 2025/08/13 23:32:43 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:33:31 runner 2 connected 2025/08/13 23:33:56 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47807: connect: connection refused 2025/08/13 23:33:56 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47807: connect: connection refused 2025/08/13 23:34:06 base crash: lost connection to test machine 2025/08/13 23:34:23 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:32229: connect: connection refused 2025/08/13 23:34:23 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:32229: connect: connection refused 2025/08/13 23:34:25 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64860: connect: connection refused 2025/08/13 23:34:25 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64860: connect: connection refused 2025/08/13 23:34:25 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:49671: connect: connection refused 2025/08/13 23:34:25 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:49671: connect: connection refused 2025/08/13 23:34:33 base crash: lost connection to test machine 2025/08/13 23:34:35 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:34:35 base crash: lost connection to test machine 2025/08/13 23:34:55 runner 0 connected 2025/08/13 23:35:23 runner 3 connected 2025/08/13 23:35:24 runner 1 connected 2025/08/13 23:37:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 77, "corpus": 44736, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45902, "coverage": 308496, "distributor delayed": 50754, "distributor undelayed": 50729, "distributor violated": 294, "exec candidate": 78567, "exec collide": 937, "exec fuzz": 1838, "exec gen": 87, "exec hints": 1041, "exec inject": 0, "exec minimize": 1310, "exec retries": 20, "exec seeds": 195, "exec smash": 1268, "exec total [base]": 173095, "exec total [new]": 337609, "exec triage": 142268, "executor restarts": 879, "fault jobs": 0, "fuzzer jobs": 43, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 6, "max 
signal": 312296, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 802, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45704, "no exec duration": 3449997000000, "no exec requests": 9089, "pending": 4, "prog exec time": 0, "reproducing": 5, "rpc recv": 10158020160, "rpc sent": 2127525368, "signal": 303361, "smash jobs": 8, "triage jobs": 29, "vm output": 90285997, "vm restarts [base]": 93, "vm restarts [new]": 108 } 2025/08/13 23:37:25 new: boot error: can't ssh into the instance 2025/08/13 23:37:38 new: boot error: can't ssh into the instance 2025/08/13 23:38:29 runner 1 connected 2025/08/13 23:38:58 base: boot error: can't ssh into the instance 2025/08/13 23:39:01 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25319: connect: connection refused 2025/08/13 23:39:01 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25319: connect: connection refused 2025/08/13 23:39:11 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:39:47 runner 2 connected 2025/08/13 23:42:06 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 77, "corpus": 44736, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45905, "coverage": 308496, "distributor delayed": 50754, "distributor undelayed": 50754, "distributor violated": 294, "exec candidate": 78567, "exec collide": 937, "exec fuzz": 1839, "exec gen": 87, "exec hints": 1042, "exec inject": 0, "exec minimize": 1316, "exec retries": 20, "exec seeds": 195, "exec smash": 1269, "exec total [base]": 173105, "exec total [new]": 337645, "exec triage": 142293, "executor restarts": 882, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 6, "max signal": 312297, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 829, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45704, "no exec duration": 3451337000000, "no exec requests": 9096, "pending": 4, "prog exec time": 0, "reproducing": 5, "rpc recv": 10222175460, "rpc sent": 2131296312, "signal": 303361, "smash jobs": 8, "triage jobs": 27, "vm output": 92863627, "vm restarts [base]": 94, "vm restarts [new]": 109 } 2025/08/13 23:42:09 new: boot error: can't ssh into the instance 2025/08/13 23:42:57 runner 0 connected 2025/08/13 23:43:22 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33327: connect: connection refused 2025/08/13 23:43:22 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33327: connect: connection refused 2025/08/13 23:43:32 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:43:33 repro finished 'possible deadlock in input_event', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:43:33 failed repro for "possible deadlock in input_event", err=%!s() 2025/08/13 23:43:33 start reproducing 'possible deadlock in input_event' 2025/08/13 23:43:33 "possible deadlock in input_event": saved crash log into 1755128613.crash.log 2025/08/13 23:43:33 "possible deadlock in input_event": saved repro log into 1755128613.repro.log 2025/08/13 23:44:20 runner 0 connected 2025/08/13 23:44:41 new: boot error: can't ssh into the instance 2025/08/13 
23:44:45 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:44056: connect: connection refused 2025/08/13 23:44:45 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:44056: connect: connection refused 2025/08/13 23:44:47 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:12980: connect: connection refused 2025/08/13 23:44:47 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:12980: connect: connection refused 2025/08/13 23:44:48 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62349: connect: connection refused 2025/08/13 23:44:48 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62349: connect: connection refused 2025/08/13 23:44:55 base crash: lost connection to test machine 2025/08/13 23:44:57 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:44:58 base crash: lost connection to test machine 2025/08/13 23:45:30 runner 2 connected 2025/08/13 23:45:46 runner 1 connected 2025/08/13 23:45:48 runner 0 connected 2025/08/13 23:45:48 runner 3 connected 2025/08/13 23:46:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:53400: connect: connection refused 2025/08/13 23:46:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:53400: connect: connection refused 2025/08/13 23:46:02 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9809: connect: connection refused 2025/08/13 23:46:02 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9809: connect: connection refused 2025/08/13 23:46:10 base crash: lost connection to test machine 2025/08/13 23:46:12 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:46:24 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2194: connect: connection refused 2025/08/13 23:46:24 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2194: connect: connection refused 2025/08/13 23:46:29 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:37090: connect: connection refused 2025/08/13 23:46:29 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:37090: connect: connection refused 2025/08/13 23:46:31 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20800: connect: connection refused 2025/08/13 23:46:31 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20800: connect: connection refused 2025/08/13 23:46:32 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45698: connect: connection refused 2025/08/13 23:46:32 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45698: connect: connection refused 2025/08/13 23:46:34 base crash: lost connection to test machine 2025/08/13 23:46:39 base crash: lost connection to test machine 2025/08/13 23:46:41 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 23:46:42 base crash: lost connection to test machine 2025/08/13 23:47:00 runner 0 connected 2025/08/13 23:47:01 status reporting terminated 2025/08/13 23:47:01 bug reporting terminated 2025/08/13 23:47:01 repro finished 'possible deadlock in input_event', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:47:01 repro finished 'INFO: trying to register 
non-static key in ocfs2_dlm_shutdown', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:47:01 failed to recv *flatrpc.InfoRequestRawT: read tcp 127.0.0.1:33249->127.0.0.1:38846: use of closed network connection 2025/08/13 23:47:51 repro finished 'WARNING in dbAdjTree', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:49:33 repro finished 'possible deadlock in run_unpack_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:51:15 repro finished 'general protection fault in pcl818_ai_cancel', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 23:56:40 syz-diff (base): kernel context loop terminated 2025/08/13 23:56:47 syz-diff (new): kernel context loop terminated 2025/08/13 23:56:47 diff fuzzing terminated 2025/08/13 23:56:47 fuzzing is finished 2025/08/13 23:56:47 status at the end:
Title On-Base On-Patched
INFO: rcu detected stall in sys_bpf 1 crashes
INFO: task hung in corrupted 1 crashes
INFO: task hung in tun_chr_close 1 crashes
INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 2 crashes 3 crashes
KASAN: slab-use-after-free Read in l2cap_unregister_user 1 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi 4 crashes 11 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes 1 crashes
KASAN: slab-use-after-free Write in __xfrm_state_delete 2 crashes 1 crashes
WARNING in dbAdjTree 1 crashes 2 crashes[reproduced]
WARNING in io_ring_exit_work 1 crashes 2 crashes
WARNING in xfrm6_tunnel_net_exit 6 crashes 4 crashes
WARNING in xfrm_state_fini 5 crashes 13 crashes
general protection fault in pcl818_ai_cancel 2 crashes
kernel BUG in jfs_evict_inode 1 crashes 1 crashes
lost connection to test machine 27 crashes 35 crashes[reproduced]
no output from test machine 39 crashes
possible deadlock in input_event 3 crashes
possible deadlock in mark_as_free_ex 1 crashes 2 crashes
possible deadlock in ntfs_fiemap 1 crashes
possible deadlock in ocfs2_init_acl 2 crashes
possible deadlock in ocfs2_page_mkwrite 1 crashes
possible deadlock in ocfs2_reserve_local_alloc_bits 1 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 2 crashes 4 crashes
possible deadlock in ocfs2_setattr 2 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 3 crashes 2 crashes
possible deadlock in run_unpack_ex 4 crashes
unregister_netdevice: waiting for DEV to become free 2 crashes 2 crashes