2025/08/20 19:09:49 extracted 303749 symbol hashes for base and 303749 for patched
2025/08/20 19:09:49 binaries are different, continuing fuzzing
2025/08/20 19:09:49 adding modified_functions to focus areas: ["shrink_worker" "zswap_cpu_comp_prepare" "zswap_store"]
2025/08/20 19:09:49 adding directly modified files to focus areas: ["mm/zswap.c"]
2025/08/20 19:09:51 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/20 19:10:48 runner 3 connected
2025/08/20 19:10:48 runner 9 connected
2025/08/20 19:10:48 runner 2 connected
2025/08/20 19:10:48 runner 0 connected
2025/08/20 19:10:48 runner 1 connected
2025/08/20 19:10:48 runner 7 connected
2025/08/20 19:10:48 runner 2 connected
2025/08/20 19:10:49 runner 5 connected
2025/08/20 19:10:49 runner 0 connected
2025/08/20 19:10:49 runner 6 connected
2025/08/20 19:10:49 runner 1 connected
2025/08/20 19:10:49 runner 8 connected
2025/08/20 19:10:50 runner 4 connected
2025/08/20 19:10:55 initializing coverage information...
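The focus-area lines above follow a regular shape and can be pulled out of a log mechanically. A minimal sketch in Python; `parse_focus_line` and its regex are illustrative helpers written for this example, not part of syzkaller:

```python
import re

# Matches a focus-area log line such as:
#   2025/08/20 19:09:49 adding modified_functions to focus areas: ["a" "b"]
# The names inside the brackets are space-separated quoted strings.
FOCUS_RE = re.compile(r'adding (.+?) to focus areas: \[(.*)\]')

def parse_focus_line(line):
    """Return (area_kind, [names]) for a focus-area log line, else None."""
    m = FOCUS_RE.search(line)
    if not m:
        return None
    kind = m.group(1)
    names = re.findall(r'"([^"]+)"', m.group(2))
    return kind, names

line = ('2025/08/20 19:09:49 adding modified_functions to focus areas: '
        '["shrink_worker" "zswap_cpu_comp_prepare" "zswap_store"]')
print(parse_focus_line(line))
# -> ('modified_functions', ['shrink_worker', 'zswap_cpu_comp_prepare', 'zswap_store'])
```

The same pattern also matches the "adding directly modified files to focus areas" line, yielding `('directly modified files', ['mm/zswap.c'])`.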
2025/08/20 19:10:55 executor cover filter: 0 PCs 2025/08/20 19:10:59 discovered 7699 source files, 338618 symbols 2025/08/20 19:10:59 coverage filter: shrink_worker: [mb_cache_shrink_worker shrink_worker] 2025/08/20 19:10:59 coverage filter: zswap_cpu_comp_prepare: [zswap_cpu_comp_prepare] 2025/08/20 19:10:59 coverage filter: zswap_store: [zswap_store] 2025/08/20 19:10:59 coverage filter: mm/zswap.c: [mm/zswap.c] 2025/08/20 19:10:59 area "symbols": 225 PCs in the cover filter 2025/08/20 19:10:59 area "files": 740 PCs in the cover filter 2025/08/20 19:10:59 area "": 0 PCs in the cover filter 2025/08/20 19:10:59 executor cover filter: 0 PCs 2025/08/20 19:11:00 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open 
/sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb 
: failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open 
/sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no 
such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or 
directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : 
fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : 
fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] 
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3832/8048
2025/08/20 19:11:00 base: machine check complete
2025/08/20 19:11:03 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to
open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : 
selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] 
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : 
fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open 
perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3832/8048
2025/08/20 19:11:03 new: machine check complete
2025/08/20 19:11:04 new: adding 81150 seeds
2025/08/20 19:11:49 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:12:27 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:12:27 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:12:38 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:12:38 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:12:39 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:12:39 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:12:39 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:12:39 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:12:45 runner 3 connected
2025/08/20 19:12:50 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:12:50 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:13:01 base crash: possible deadlock in ocfs2_acquire_dquot
2025/08/20 19:13:10 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:13:16 runner 0 connected
2025/08/20 19:13:28 runner 8 connected
2025/08/20 19:13:28 runner 7 connected
2025/08/20 19:13:29 runner 1 connected
2025/08/20 19:13:39 runner 2 connected
2025/08/20 19:13:52 runner 2 connected
2025/08/20 19:13:59 runner 6 connected
2025/08/20 19:14:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:14:10 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:14:21 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/20 19:14:21 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/20 19:14:28 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 19:14:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:14:52 STAT { "buffer too small": 0, "candidate triage jobs": 45, "candidates": 77211, "comps overflows": 0, "corpus": 3860, "corpus [files]": 18, "corpus [symbols]": 10, "cover overflows": 2455, "coverage": 162534, "distributor delayed": 4653, "distributor undelayed": 4652, "distributor violated": 115, "exec candidate": 3939, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 6862, "exec total [new]": 17441, "exec triage": 12285, "executor restarts [base]": 55, "executor restarts [new]": 126, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 165366, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3939, "no exec duration": 45596000000, "no exec requests": 340, "pending": 7, "prog exec time": 335, "reproducing": 0, "rpc recv": 1058181352, "rpc sent": 94013776, "signal": 159845, "smash jobs": 0, "triage jobs": 0, "vm output": 2489028, "vm restarts [base]": 4, "vm restarts [new]": 17 }
2025/08/20 19:14:52 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:14:59 runner 7 connected
2025/08/20 19:15:05 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true]
2025/08/20 19:15:05 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit'
2025/08/20 19:15:08 base crash: KASAN: slab-use-after-free Read in jfs_lazycommit
2025/08/20 19:15:16 runner 0 connected
2025/08/20 19:15:17 base crash: WARNING in xfrm6_tunnel_net_exit
2025/08/20 19:15:30 runner 5 connected
2025/08/20 19:15:44 runner 3 connected
2025/08/20 19:15:54 runner 8 connected
2025/08/20 19:15:56 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/20 19:15:56 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/20 19:15:56 runner 1 connected
2025/08/20 19:16:06 patched crashed: possible deadlock in attr_data_get_block [need repro = true]
2025/08/20 19:16:06 scheduled a reproduction of 'possible deadlock in attr_data_get_block'
2025/08/20 19:16:06 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/20 19:16:06 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/20 19:16:17 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/20 19:16:17 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/20 19:16:45 runner 7 connected
2025/08/20 19:16:51 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:16:55 runner 9 connected
2025/08/20 19:16:55 runner 1 connected
2025/08/20 19:17:08 runner 2 connected
2025/08/20 19:17:40 runner 4 connected
2025/08/20 19:18:12 base crash: WARNING in xfrm_state_fini
2025/08/20 19:19:29 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:19:29 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:19:29 base crash: unregister_netdevice: waiting for DEV to become free
2025/08/20 19:19:39 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:19:39 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:19:52 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 72337, "comps overflows": 0, "corpus": 8688, "corpus [files]": 42, "corpus [symbols]": 20, "cover overflows": 5608, "coverage": 205388, "distributor delayed": 10503, "distributor undelayed": 10500, "distributor violated": 126, "exec candidate": 8813, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 10714, "exec total [new]": 38810, "exec triage": 27382, "executor restarts [base]": 70, "executor restarts [new]": 193, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 207277, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 8813, "no exec duration": 46013000000, "no exec requests": 343, "pending": 14, "prog exec time": 330, "reproducing": 0, "rpc recv": 1849218744, "rpc sent": 202757144, "signal": 202005, "smash jobs": 0, "triage jobs": 0, "vm output": 5878070, "vm restarts [base]": 6, "vm restarts [new]": 26 }
2025/08/20 19:19:52 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:19:57 base: boot error: can't ssh into the instance
2025/08/20 19:20:06 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:20:06 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:20:12 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:20:12 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:20:17 runner 1 connected
2025/08/20 19:20:17 runner 0 connected
2025/08/20 19:20:17 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true]
2025/08/20 19:20:17 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree'
2025/08/20 19:20:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:20:23 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:20:29 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:20:29 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:20:29 runner 8 connected
2025/08/20 19:20:41 runner 4 connected
2025/08/20 19:20:46 runner 3 connected
2025/08/20 19:20:55 runner 6 connected
2025/08/20 19:21:01 runner 7 connected
2025/08/20 19:21:05 runner 9 connected
2025/08/20 19:21:17 runner 3 connected
2025/08/20 19:21:58 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 19:22:26 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:22:41 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:22:49 runner 3 connected
2025/08/20 19:23:14 runner 9 connected
2025/08/20 19:23:31 runner 3 connected
2025/08/20 19:24:21 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:24:23 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true]
2025/08/20 19:24:23 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup'
2025/08/20 19:24:26 new: boot error: can't ssh into the instance
2025/08/20 19:24:52 STAT { "buffer too small": 0, "candidate triage jobs": 45, "candidates": 67989, "comps overflows": 0, "corpus": 12990, "corpus [files]": 50, "corpus [symbols]": 25, "cover overflows": 8678, "coverage": 227531, "distributor delayed": 16485, "distributor undelayed": 16479, "distributor violated": 180, "exec candidate": 13161, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 15543, "exec total [new]": 59018, "exec triage": 40897, "executor restarts [base]": 88, "executor restarts [new]": 245, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 229630, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 13161, "no exec duration": 46013000000, "no exec requests": 343, "pending": 20, "prog exec time": 326, "reproducing": 0, "rpc recv": 2587192620, "rpc sent": 318186328, "signal": 223772, "smash jobs": 0, "triage jobs": 0, "vm output": 8381308, "vm restarts [base]": 9, "vm restarts [new]": 35 }
2025/08/20 19:24:53 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:25:11 runner 5 connected
2025/08/20 19:25:12 runner 3 connected
2025/08/20 19:25:15 runner 0 connected
2025/08/20 19:25:23 base: boot error: can't ssh into the instance
2025/08/20 19:25:49 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:26:12 runner 2 connected
2025/08/20 19:26:45 runner 6 connected
2025/08/20 19:27:06 base crash: WARNING in xfrm_state_fini
2025/08/20 19:27:43 patched crashed: WARNING in dbAdjTree [need repro = true]
2025/08/20 19:27:43 scheduled a reproduction of 'WARNING in dbAdjTree'
2025/08/20 19:27:50 base crash: WARNING in xfrm_state_fini
2025/08/20 19:27:56 patched crashed: INFO: task hung in __iterate_supers [need repro = true]
2025/08/20 19:27:56 scheduled a reproduction of 'INFO: task hung in __iterate_supers'
2025/08/20 19:28:02 runner 0 connected
2025/08/20 19:28:17 base: boot error: can't ssh into the instance
2025/08/20 19:28:40 runner 2 connected
2025/08/20 19:28:40 runner 3 connected
2025/08/20 19:28:45 runner 8 connected
2025/08/20 19:29:06 runner 1 connected
2025/08/20 19:29:14 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/20 19:29:14 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/20 19:29:22 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 19:29:25 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:29:25 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:29:52 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 63988, "comps overflows": 0, "corpus": 16935, "corpus [files]": 60, "corpus [symbols]": 26, "cover overflows": 11441, "coverage": 244720, "distributor delayed": 21754, "distributor undelayed": 21747, "distributor violated": 187, "exec candidate": 17162, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24220, "exec total [new]": 78566, "exec triage": 53326, "executor restarts [base]": 114, "executor restarts [new]": 309, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 246967, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 17162, "no exec duration": 46287000000, "no exec requests": 347, "pending": 24, "prog exec time": 275, "reproducing": 0, "rpc recv": 3215731596, "rpc sent": 434147520, "signal": 240640, "smash jobs": 0, "triage jobs": 0, "vm output": 10836546, "vm restarts [base]": 13, "vm restarts [new]": 41 }
2025/08/20 19:30:02 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:30:03 runner 8 connected
2025/08/20 19:30:18 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/20 19:30:18 runner 0 connected
2025/08/20 19:30:22 runner 9 connected
2025/08/20 19:30:29 new: boot error: can't ssh into the instance
2025/08/20 19:30:33 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:30:33 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:30:37 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 19:30:51 runner 1 connected
2025/08/20 19:30:53 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true]
2025/08/20 19:30:53 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find'
2025/08/20 19:31:03 base crash: KASAN: slab-use-after-free Read in xfrm_state_find
2025/08/20 19:31:06 runner 3 connected
2025/08/20 19:31:20 runner 2 connected
2025/08/20 19:31:26 runner 1 connected
2025/08/20 19:31:42 runner 6 connected
2025/08/20 19:31:45 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:31:51 runner 2 connected
2025/08/20 19:32:42 runner 4 connected
2025/08/20 19:34:04 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 19:34:17 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 19:34:26 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:34:29 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 19:34:44 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 19:34:48 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/20 19:34:52 STAT { "buffer too small": 0, "candidate triage jobs": 35, "candidates": 59732, "comps overflows": 0, "corpus": 21153, "corpus [files]": 70, "corpus [symbols]": 30, "cover overflows": 14266, "coverage": 258517, "distributor delayed": 27172, "distributor undelayed": 27162, "distributor violated": 191, "exec candidate": 21418, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 35712, "exec total [new]": 99960, "exec triage": 66446, "executor restarts [base]": 143, "executor restarts [new]": 357, "fault jobs": 0, "fuzzer jobs": 35, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 260701, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 21418, "no exec duration": 46391000000, "no exec requests": 351, "pending": 26, "prog exec time": 225, "reproducing": 0, "rpc recv": 3870777544, "rpc sent": 561723080, "signal": 254301, "smash jobs": 0, "triage jobs": 0, "vm output": 13393447, "vm restarts [base]": 17, "vm restarts [new]": 47 }
2025/08/20 19:34:54 runner 0 connected
2025/08/20 19:34:59 new: boot error: can't ssh into the instance
2025/08/20 19:35:06 runner 8 connected
2025/08/20 19:35:15 runner 1 connected
2025/08/20 19:35:19 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 19:35:19 runner 9 connected
2025/08/20 19:35:26 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 19:35:36 runner 3 connected
2025/08/20 19:35:49 runner 7 connected
2025/08/20 19:36:09 runner 4 connected
2025/08/20 19:36:15 runner 1 connected
2025/08/20 19:36:56 patched crashed: kernel BUG in txUnlock [need repro = true]
2025/08/20 19:36:56 scheduled a reproduction of 'kernel BUG in txUnlock'
2025/08/20 19:36:56 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:37:13 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:37:13 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:37:23 base crash: kernel BUG in txUnlock
2025/08/20 19:37:24 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:37:24 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:37:44 runner 8 connected
2025/08/20 19:37:44 runner 3 connected
2025/08/20 19:38:01 runner 1 connected
2025/08/20 19:38:05 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:38:05 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:38:12 runner 1 connected
2025/08/20 19:38:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:38:38 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:38:38 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:38:55 runner 4 connected
2025/08/20 19:39:06 base crash: WARNING in xfrm_state_fini
2025/08/20 19:39:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:39:07 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 19:39:10 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 19:39:15 runner 7 connected
2025/08/20 19:39:19 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 19:39:19 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:39:26 runner 2 connected
2025/08/20 19:39:52 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 56148, "comps overflows": 0, "corpus": 24690, "corpus [files]": 75, "corpus [symbols]": 33, "cover overflows": 16773, "coverage": 268502, "distributor delayed": 32803, "distributor undelayed": 32803, "distributor violated": 265, "exec candidate": 25002, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 45981, "exec total [new]": 118916, "exec triage": 77462, "executor restarts [base]": 173, "executor restarts [new]": 418, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 270871, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 25002, "no exec duration": 46393000000, "no exec requests": 352, "pending": 31, "prog exec time": 227, "reproducing": 0, "rpc recv": 4613403668, "rpc sent": 694969200, "signal": 264090, "smash jobs": 0, "triage jobs": 0, "vm output": 15461149, "vm restarts [base]": 21, "vm restarts [new]": 58 }
2025/08/20 19:39:55 runner 0 connected
2025/08/20 19:39:56 runner 1 connected
2025/08/20 19:40:09 runner 8 connected
2025/08/20 19:40:39 new: boot error: can't ssh into the instance
2025/08/20 19:41:29 runner 5 connected
2025/08/20 19:41:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:41:46 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 19:42:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:42:20 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:42:23 runner 8 connected
2025/08/20 19:42:25 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:42:35 runner 6 connected
2025/08/20 19:42:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:42:51 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:42:55 base crash: general protection fault in pcl818_ai_cancel
2025/08/20 19:42:57 runner 1 connected
2025/08/20 19:43:02 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:43:10 runner 4 connected
2025/08/20 19:43:13 runner 5 connected
2025/08/20 19:43:24 runner 7 connected
2025/08/20 19:43:43 runner 0 connected
2025/08/20 19:43:51 runner 8 connected
2025/08/20 19:44:33 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:44:33 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:44:45 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:44:45 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:44:50 new: boot error: can't ssh into the instance
2025/08/20 19:44:52 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 53162, "comps overflows": 0, "corpus": 27634, "corpus [files]": 80, "corpus [symbols]": 36, "cover overflows": 18760, "coverage": 276404, "distributor delayed": 37783, "distributor undelayed": 37772, "distributor violated": 284, "exec candidate": 27988, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 51933, "exec total [new]": 134388, "exec triage": 86567, "executor restarts [base]": 196, "executor restarts [new]": 472, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 278945, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 27988, "no exec duration": 46402000000, "no exec requests": 353, "pending": 33, "prog exec time": 273, "reproducing": 0, "rpc recv": 5238256632, "rpc sent": 790121456, "signal": 271982, "smash jobs": 0, "triage jobs": 0, "vm output": 17785195, "vm restarts [base]": 24, "vm restarts [new]": 67 }
2025/08/20 19:45:22 runner 1 connected
2025/08/20 19:45:30 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:45:34 runner 7 connected
2025/08/20 19:45:39 runner 0 connected
2025/08/20 19:45:56 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:46:06 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 19:46:19 runner 6 connected
2025/08/20 19:46:44 runner 1 connected
2025/08/20 19:46:54 runner 1 connected
2025/08/20 19:47:06 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 19:47:17 patched crashed: possible deadlock in ocfs2_setattr [need repro = true]
2025/08/20 19:47:17 scheduled a reproduction of 'possible deadlock in ocfs2_setattr'
2025/08/20 19:47:29 new: boot error: can't ssh into the instance
2025/08/20 19:47:44 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/20 19:47:44 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/20 19:47:55 runner 4 connected
2025/08/20 19:47:58 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:48:07 runner 6 connected
2025/08/20 19:48:09 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:48:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 19:48:26 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false]
2025/08/20 19:48:34 runner 8 connected
2025/08/20 19:48:38 base crash: possible deadlock in ocfs2_xattr_set
2025/08/20 19:48:46 runner 0 connected
2025/08/20 19:48:57 runner 1 connected
2025/08/20 19:48:59 runner 7 connected
2025/08/20 19:49:13 new: boot error: can't ssh into the instance
2025/08/20 19:49:15 runner 5 connected
2025/08/20 19:49:16 base: boot error: can't ssh into the instance
2025/08/20 19:49:25 base: boot error: can't ssh into the instance
2025/08/20 19:49:27 runner 1 connected
2025/08/20 19:49:52 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 50416, "comps overflows": 0, "corpus": 30333, "corpus [files]": 81, "corpus [symbols]": 36, "cover overflows": 20298, "coverage": 283366, "distributor delayed": 42541, "distributor undelayed": 42541, "distributor violated": 520, "exec candidate": 30734, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 56725, "exec total [new]": 148416, "exec triage": 94872, "executor restarts [base]": 217, "executor restarts [new]": 545, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 285982, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 30734, "no exec duration": 46402000000, "no exec requests": 353, "pending": 35, "prog exec time": 375, "reproducing": 0, "rpc recv": 5914220472, "rpc sent": 887395840, "signal": 279041, "smash jobs": 0, "triage jobs": 0, "vm output": 20497155, "vm restarts [base]": 26, "vm restarts [new]": 79 }
2025/08/20 19:49:55 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 19:50:02 runner 3 connected
2025/08/20 19:50:05 runner 2 connected
2025/08/20 19:50:16 runner 3 connected
2025/08/20 19:50:34 patched crashed: KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb [need repro = true]
2025/08/20 19:50:34 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb'
2025/08/20 19:50:43 runner 0 connected
2025/08/20 19:51:03 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:51:06 patched crashed: possible deadlock in ntfs_fiemap [need repro = true]
2025/08/20 19:51:06 scheduled a reproduction of 'possible deadlock in ntfs_fiemap'
2025/08/20 19:51:23 runner 3 connected
2025/08/20 19:51:37 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 19:51:52 runner 6 connected
2025/08/20 19:51:57 runner 5 connected
2025/08/20 19:52:25 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:52:26 runner 3 connected
2025/08/20 19:52:56 new: boot error: can't ssh into the instance
2025/08/20 19:53:18 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:53:20 base crash: WARNING in xfrm_state_fini
2025/08/20 19:53:22 runner 3 connected
2025/08/20 19:53:29 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:53:45 runner 2 connected
2025/08/20 19:54:07 runner 5 connected
2025/08/20 19:54:09 runner 3 connected
2025/08/20 19:54:19 runner 6 connected
2025/08/20 19:54:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:54:52 STAT { "buffer too small": 0, "candidate triage jobs": 42, "candidates": 46558, "comps overflows": 0, "corpus": 34128, "corpus [files]": 95, "corpus [symbols]": 39, "cover overflows": 23510, "coverage": 290905, "distributor delayed": 47235, "distributor undelayed": 47233, "distributor violated": 528, "exec candidate": 34592, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 70364, "exec total [new]": 172099, "exec triage": 106861, "executor restarts [base]": 244, "executor restarts [new]": 588, "fault jobs": 0, "fuzzer jobs": 42, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 293731, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 34592, "no exec duration": 53502000000, "no exec requests": 368, "pending": 37, "prog exec time": 224, "reproducing": 0, "rpc recv": 6593282680, "rpc sent": 1038014216, "signal": 286329, "smash jobs": 0, "triage jobs": 0, "vm output": 22686563, "vm restarts [base]": 31, "vm restarts [new]": 87 }
2025/08/20 19:54:57 patched crashed: possible deadlock in ntfs_fiemap [need repro = true]
2025/08/20 19:54:57 scheduled a reproduction of 'possible deadlock in ntfs_fiemap'
2025/08/20 19:55:27 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 19:55:37 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:55:47 runner 4 connected
2025/08/20 19:55:48 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:55:48 runner 5 connected
2025/08/20 19:56:16 runner 7 connected
2025/08/20 19:56:27 runner 2 connected
2025/08/20 19:56:37 runner 6 connected
2025/08/20 19:56:45 base crash: KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings
2025/08/20 19:56:50 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/08/20 19:57:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:57:01 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup
2025/08/20 19:57:34 runner 3 connected
2025/08/20 19:57:35 new: boot error: can't ssh into the instance
2025/08/20 19:57:39 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false]
2025/08/20 19:57:40 runner 4 connected
2025/08/20 19:57:42 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 19:58:23 runner 9 connected
2025/08/20 19:58:31 runner 2 connected
2025/08/20 19:58:36 runner 0 connected
2025/08/20 19:59:14 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 19:59:34 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup
2025/08/20 19:59:40 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 19:59:52 STAT { "buffer too small": 0, "candidate triage jobs": 30, "candidates": 42435, "comps overflows": 0, "corpus": 38200, "corpus [files]": 107, "corpus [symbols]": 44, "cover overflows": 27225, "coverage": 299074, "distributor delayed": 52174, "distributor undelayed": 52173, "distributor violated": 552, "exec candidate": 38715, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 85225, "exec total [new]": 199446, "exec triage": 119785, "executor restarts [base]": 253, "executor restarts [new]": 634, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 301951, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38715, "no exec duration": 53505000000, "no exec requests": 369, "pending": 38, "prog exec time": 207, "reproducing": 0, "rpc recv": 7202623672, "rpc sent": 1214681064, "signal": 294396, "smash jobs": 0, "triage jobs": 0, "vm output": 25367535, "vm restarts [base]": 32, "vm restarts [new]": 96 }
2025/08/20 19:59:52 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 19:59:59 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 20:00:04 runner 4 connected
2025/08/20 20:00:23 runner 3 connected
2025/08/20 20:00:29 runner 1 connected
2025/08/20 20:00:41 runner 8 connected
2025/08/20 20:01:05 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/20 20:01:26 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/20 20:01:26 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/20 20:01:36 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/20 20:01:36 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/20 20:01:39 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 20:01:46 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:01:47 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:01:50 base crash: WARNING: suspicious RCU usage in get_callchain_entry
2025/08/20 20:01:54 runner 1 connected
2025/08/20 20:02:16 base crash: WARNING: suspicious RCU usage in get_callchain_entry
2025/08/20 20:02:24 runner 9 connected
2025/08/20 20:02:28 runner 3 connected
2025/08/20 20:02:34 runner 1 connected
2025/08/20 20:02:35 runner 0 connected
2025/08/20 20:02:39 runner 0 connected
2025/08/20 20:03:05 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:03:06 runner 1 connected
2025/08/20 20:03:16 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/20 20:03:17 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/08/20 20:03:49 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:03:54 runner 0 connected
2025/08/20 20:04:05 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 20:04:05 runner 4 connected
2025/08/20 20:04:06 runner 8 connected
2025/08/20 20:04:15 patched crashed: WARNING in dbAdjTree [need repro = true]
2025/08/20 20:04:15 scheduled a reproduction of 'WARNING in dbAdjTree'
2025/08/20 20:04:33 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/08/20 20:04:38 runner 1 connected
2025/08/20 20:04:44 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:04:52 STAT { "buffer too small": 0, "candidate triage jobs": 19, "candidates": 40544, "comps overflows": 0, "corpus": 40063, "corpus [files]": 111, "corpus [symbols]": 46, "cover overflows": 29401, "coverage": 302972, "distributor delayed": 54928, "distributor undelayed": 54923, "distributor violated": 570, "exec candidate": 40606, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 90494, "exec total [new]": 215377, "exec triage": 125675, "executor restarts [base]": 283, "executor restarts [new]": 689, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 305870, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40606, "no exec duration": 53505000000, "no exec requests": 369, "pending": 41, "prog exec time": 309, "reproducing": 0, "rpc recv": 7831748164, "rpc sent": 1331942672, "signal": 298231, "smash jobs": 0, "triage jobs": 0, "vm output": 27873660, "vm restarts [base]": 38, "vm restarts [new]": 105 }
2025/08/20 20:04:54 runner 3 connected
2025/08/20 20:05:06 runner 0 connected
2025/08/20 20:05:13 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 20:05:23 runner 9 connected
2025/08/20 20:05:45 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:05:57 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:06:03 runner 0 connected
2025/08/20 20:06:09 base crash: possible deadlock in ocfs2_xattr_set
2025/08/20 20:06:09 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 20:06:27 base crash: possible deadlock in ocfs2_xattr_set
2025/08/20 20:06:27 runner 9 connected
2025/08/20 20:06:36 base crash: lost connection to test machine
2025/08/20 20:06:58 runner 0 connected
2025/08/20 20:06:58 runner 1 connected
2025/08/20 20:07:07 new: boot error: can't ssh into the instance
2025/08/20 20:07:07 base: boot error: can't ssh into the instance
2025/08/20 20:07:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:07:24 runner 0 connected
2025/08/20 20:07:25 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:07:39 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:07:55 runner 6 connected
2025/08/20 20:07:56 runner 2 connected
2025/08/20 20:08:04 runner 8 connected
2025/08/20 20:08:08 runner 9 connected
2025/08/20 20:08:28 runner 1 connected
2025/08/20 20:08:44 base crash: kernel BUG in txUnlock
2025/08/20 20:08:57 base crash: WARNING in xfrm_state_fini
2025/08/20 20:09:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:09:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:09:33 runner 0 connected
2025/08/20 20:09:36 patched crashed: general protection fault in __xfrm_state_insert [need repro = true]
2025/08/20 20:09:36 scheduled a reproduction of 'general protection fault in __xfrm_state_insert'
2025/08/20 20:09:45 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:09:45 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:09:46 runner 2 connected
2025/08/20 20:09:52 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 39553, "comps overflows": 0, "corpus": 40992, "corpus [files]": 114, "corpus [symbols]": 48, "cover overflows": 31861, "coverage": 305083, "distributor delayed": 56458, "distributor undelayed": 56429, "distributor violated": 674, "exec candidate": 41597, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 94010, "exec total [new]": 229360, "exec triage": 128827, "executor restarts [base]": 312, "executor restarts [new]": 735, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 308091, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41597, "no exec duration": 53505000000, "no exec requests": 369, "pending": 42, "prog exec time": 147, "reproducing": 0, "rpc recv": 8365719028, "rpc sent": 1431906208, "signal": 300338, "smash jobs": 0, "triage jobs": 0, "vm output": 29860596, "vm restarts [base]": 46, "vm restarts [new]": 112 }
2025/08/20 20:10:02 runner 6 connected
2025/08/20 20:10:05 new: boot error: can't ssh into the instance
2025/08/20 20:10:20 runner 9 connected
2025/08/20 20:10:25 runner 1 connected
2025/08/20 20:10:33 runner 0 connected
2025/08/20 20:10:44 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:10:53 runner 2 connected
2025/08/20 20:11:18 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 20:11:20 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:11:25 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 20:11:31 new: boot error: can't ssh into the instance
2025/08/20 20:11:33 runner 1 connected
2025/08/20 20:11:59 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:12:06 runner 2 connected
2025/08/20 20:12:08 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:12:09 runner 5 connected
2025/08/20 20:12:13 runner 2 connected
2025/08/20 20:12:19 runner 3 connected
2025/08/20 20:12:20 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:12:35 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 20:12:43 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 20:12:48 runner 0 connected
2025/08/20 20:12:57 runner 1 connected
2025/08/20 20:13:09 runner 9 connected
2025/08/20 20:13:23 runner 2 connected
2025/08/20 20:13:32 runner 1 connected
2025/08/20 20:13:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:13:43 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:14:01 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:14:17 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/08/20 20:14:23 runner 3 connected
2025/08/20 20:14:32 runner 2 connected
2025/08/20 20:14:49 runner 2 connected
2025/08/20 20:14:50 new: boot error: can't ssh into the instance
2025/08/20 20:14:52 STAT { "buffer too small": 0, "candidate triage jobs": 15, "candidates": 38616, "comps overflows": 0, "corpus": 41905, "corpus [files]": 115, "corpus [symbols]": 48, "cover overflows": 34187, "coverage": 307307, "distributor delayed": 57928, "distributor undelayed": 57928, "distributor violated": 685, "exec candidate": 42534, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 99035, "exec total [new]": 242318, "exec triage": 131752, "executor restarts [base]": 336, "executor restarts [new]": 790, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 310233, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42534, "no exec duration": 53539000000, "no exec requests": 371, "pending": 42, "prog exec time": 319, "reproducing": 0, "rpc recv": 9029193816, "rpc sent": 1548173768, "signal": 302407, "smash jobs": 0, "triage jobs": 0, "vm output": 32074677, "vm restarts [base]": 52, "vm restarts [new]": 124 }
2025/08/20 20:15:07 runner 6 connected
2025/08/20 20:15:32 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:15:47 runner 7 connected
2025/08/20 20:16:03 new: boot error: can't ssh into the instance
2025/08/20 20:16:15 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:16:21 runner 1 connected
2025/08/20 20:16:22 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:16:33 base: boot error: can't ssh into the instance
2025/08/20 20:16:40 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 20:16:47 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 20:16:53 runner 4 connected
2025/08/20 20:17:04 runner 6 connected
2025/08/20 20:17:11 runner 2 connected
2025/08/20 20:17:22 runner 3 connected
2025/08/20 20:17:29 runner 2 connected
2025/08/20 20:17:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 20:17:36 runner 5 connected
2025/08/20 20:18:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:18:27 runner 7 connected
2025/08/20 20:19:01 runner 3 connected
2025/08/20 20:19:28 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:19:39 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:19:49 base crash: INFO: task hung in bdev_open
2025/08/20 20:19:51 new: boot error: can't ssh into the instance
2025/08/20 20:19:52 STAT { "buffer too small": 0, "candidate triage jobs": 15, "candidates": 36955, "comps overflows": 0, "corpus": 43511, "corpus [files]": 121, "corpus [symbols]": 48, "cover overflows": 36910, "coverage": 311309, "distributor delayed": 59818, "distributor undelayed": 59818, "distributor violated": 686, "exec candidate": 44195, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 107971, "exec total [new]": 259391, "exec triage": 136861, "executor restarts [base]": 370, "executor restarts [new]": 855, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 314247, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44195, "no exec duration": 53539000000, "no exec requests": 371, "pending": 42, "prog exec time": 320, "reproducing": 0, "rpc recv": 9578159864, "rpc sent": 1702482264, "signal": 306229, "smash jobs": 0, "triage jobs": 0, "vm output": 35304394, "vm restarts [base]": 54, "vm restarts [new]": 133 }
2025/08/20 20:20:08 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 20:20:25 runner 4 connected
2025/08/20 20:20:28 runner 5 connected
2025/08/20 20:20:37 runner 0 connected
2025/08/20 20:20:47 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:20:48 runner 8 connected
2025/08/20 20:20:48 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:20:55 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 20:20:56 runner 1 connected
2025/08/20 20:20:57 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 20:21:02 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false]
2025/08/20 20:21:12 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false]
2025/08/20 20:21:13 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 20:21:23 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false]
2025/08/20 20:21:35 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false]
2025/08/20 20:21:36 runner 4 connected
2025/08/20 20:21:44 runner 0 connected
2025/08/20 20:21:46 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false]
2025/08/20 20:21:48 runner 5 connected
2025/08/20 20:22:00 base crash: possible deadlock in ocfs2_xattr_set
2025/08/20 20:22:01 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:22:01 runner 3 connected
2025/08/20 20:22:01 runner 6 connected
2025/08/20 20:22:03 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false]
2025/08/20 20:22:24 runner 8 connected
2025/08/20 20:22:36 runner 1 connected
2025/08/20 20:22:36 base crash: WARNING: suspicious RCU usage in get_callchain_entry
2025/08/20 20:22:48 base crash: WARNING: suspicious RCU usage in get_callchain_entry
2025/08/20 20:22:49 runner 2 connected
2025/08/20 20:22:50 runner 1 connected
2025/08/20 20:22:53 runner 4 connected
2025/08/20 20:23:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:23:10 base crash: WARNING: suspicious RCU usage in get_callchain_entry
2025/08/20 20:23:18 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:23:25 runner 0 connected
2025/08/20 20:23:28 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:23:29 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:23:37 runner 3 connected
2025/08/20 20:23:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:23:56 runner 6 connected
2025/08/20 20:23:59 runner 2 connected
2025/08/20 20:24:06 runner 1 connected
2025/08/20 20:24:16 patched crashed: WARNING in dbAdjTree [need repro = true]
2025/08/20 20:24:16 scheduled a reproduction of 'WARNING in dbAdjTree'
2025/08/20 20:24:16 runner 0 connected
2025/08/20 20:24:17 runner 8 connected
2025/08/20 20:24:20 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 20:24:29 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:24:42 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 20:24:30 runner 5 connected
2025/08/20 20:24:52 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/08/20 20:24:52 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 36358, "comps overflows": 0, "corpus": 44042, "corpus [files]": 121, "corpus [symbols]": 48, "cover overflows": 38333, "coverage": 312324, "distributor delayed": 60840, "distributor undelayed": 60840, "distributor violated": 687, "exec candidate": 44792, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 113867, "exec total [new]": 268923, "exec triage": 138600, "executor restarts [base]": 411, "executor restarts [new]": 935, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 315301, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44749, "no exec duration": 53539000000, "no exec requests": 371, "pending": 43, "prog exec time": 297, "reproducing": 0, "rpc recv": 10359685752, "rpc sent": 1814175688, "signal": 307257, "smash jobs": 0, "triage jobs": 0, "vm output": 37447140, "vm restarts [base]": 62, "vm restarts [new]": 148 }
2025/08/20 20:24:59 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 20:25:02 patched crashed: possible
deadlock in ocfs2_xattr_set [need repro = false] 2025/08/20 20:25:04 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/20 20:25:06 runner 9 connected 2025/08/20 20:25:10 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/20 20:25:18 runner 0 connected 2025/08/20 20:25:31 runner 3 connected 2025/08/20 20:25:39 runner 4 connected 2025/08/20 20:25:46 runner 6 connected 2025/08/20 20:25:51 runner 1 connected 2025/08/20 20:25:52 runner 5 connected 2025/08/20 20:26:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 20:26:26 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/20 20:26:26 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/20 20:26:40 patched crashed: lost connection to test machine [need repro = false] 2025/08/20 20:26:53 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/20 20:27:01 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/20 20:27:04 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 20:27:04 runner 5 connected 2025/08/20 20:27:14 runner 8 connected 2025/08/20 20:27:23 runner 6 connected 2025/08/20 20:27:41 runner 3 connected 2025/08/20 20:27:43 runner 0 connected 2025/08/20 20:27:43 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/20 20:27:45 runner 1 connected 2025/08/20 20:28:08 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/20 20:28:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/20 20:28:09 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 20:28:13 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/20 20:28:20 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/20 20:28:31 patched crashed: kernel BUG in txUnlock 
[need repro = false] 2025/08/20 20:28:32 runner 4 connected 2025/08/20 20:28:49 runner 8 connected 2025/08/20 20:28:51 runner 9 connected 2025/08/20 20:28:54 runner 3 connected 2025/08/20 20:28:56 runner 6 connected 2025/08/20 20:29:02 runner 1 connected 2025/08/20 20:29:08 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 20:29:12 runner 5 connected 2025/08/20 20:29:13 base crash: WARNING in xfrm_state_fini 2025/08/20 20:29:40 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/20 20:29:40 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/20 20:29:45 patched crashed: possible deadlock in ocfs2_setattr [need repro = true] 2025/08/20 20:29:45 scheduled a reproduction of 'possible deadlock in ocfs2_setattr' 2025/08/20 20:29:52 STAT { "buffer too small": 0, "candidate triage jobs": 28, "candidates": 35816, "comps overflows": 0, "corpus": 44505, "corpus [files]": 121, "corpus [symbols]": 48, "cover overflows": 39668, "coverage": 313233, "distributor delayed": 61930, "distributor undelayed": 61903, "distributor violated": 732, "exec candidate": 45334, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 116958, "exec total [new]": 276934, "exec triage": 140125, "executor restarts [base]": 432, "executor restarts [new]": 998, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 316276, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45250, "no exec duration": 53539000000, "no exec requests": 371, "pending": 45, "prog exec time": 625, "reproducing": 0, "rpc recv": 11040405816, "rpc sent": 
1905333472, "signal": 308179, "smash jobs": 0, "triage jobs": 0, "vm output": 39110911, "vm restarts [base]": 67, "vm restarts [new]": 163 } 2025/08/20 20:29:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/20 20:30:18 base crash: no output from test machine 2025/08/20 20:30:22 runner 3 connected 2025/08/20 20:30:27 runner 9 connected 2025/08/20 20:30:37 runner 8 connected 2025/08/20 20:30:54 new: boot error: can't ssh into the instance 2025/08/20 20:30:58 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/20 20:31:07 runner 2 connected 2025/08/20 20:31:08 new: boot error: can't ssh into the instance 2025/08/20 20:31:29 new: boot error: can't ssh into the instance 2025/08/20 20:31:43 runner 2 connected 2025/08/20 20:31:48 runner 3 connected 2025/08/20 20:31:56 runner 3 connected 2025/08/20 20:32:00 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/20 20:32:17 runner 7 connected 2025/08/20 20:33:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/20 20:33:05 base crash: kernel BUG in txUnlock 2025/08/20 20:33:47 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/20 20:33:49 runner 5 connected 2025/08/20 20:33:53 runner 3 connected 2025/08/20 20:34:25 base: boot error: can't ssh into the instance 2025/08/20 20:34:36 runner 7 connected 2025/08/20 20:34:49 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/20 20:34:52 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35301, "comps overflows": 0, "corpus": 44961, "corpus [files]": 122, "corpus [symbols]": 48, "cover overflows": 42784, "coverage": 314071, "distributor delayed": 62663, "distributor undelayed": 62662, "distributor violated": 759, "exec candidate": 45849, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total 
[base]": 118501, "exec total [new]": 293081, "exec triage": 141680, "executor restarts [base]": 448, "executor restarts [new]": 1041, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 317164, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45721, "no exec duration": 53671000000, "no exec requests": 377, "pending": 45, "prog exec time": 201, "reproducing": 0, "rpc recv": 11439356292, "rpc sent": 1996979632, "signal": 309031, "smash jobs": 0, "triage jobs": 0, "vm output": 41010695, "vm restarts [base]": 71, "vm restarts [new]": 170 } 2025/08/20 20:34:54 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 20:35:05 new: boot error: can't ssh into the instance 2025/08/20 20:35:13 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/20 20:35:14 runner 1 connected 2025/08/20 20:35:15 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 20:35:38 runner 3 connected 2025/08/20 20:35:42 runner 5 connected 2025/08/20 20:35:53 runner 0 connected 2025/08/20 20:35:57 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/20 20:36:02 runner 3 connected 2025/08/20 20:36:04 runner 7 connected 2025/08/20 20:36:34 base crash: lost connection to test machine 2025/08/20 20:36:45 runner 8 connected 2025/08/20 20:36:51 base crash: possible deadlock in ocfs2_init_acl 2025/08/20 20:37:22 runner 3 connected 2025/08/20 20:37:40 runner 1 connected 2025/08/20 20:38:07 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/20 20:38:57 runner 6 connected 2025/08/20 20:39:00 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/20 
20:39:13 new: boot error: can't ssh into the instance 2025/08/20 20:39:17 base crash: possible deadlock in ocfs2_init_acl 2025/08/20 20:39:18 base: boot error: can't ssh into the instance 2025/08/20 20:39:46 new: boot error: can't ssh into the instance 2025/08/20 20:39:48 runner 3 connected 2025/08/20 20:39:52 timed out waiting for coprus triage 2025/08/20 20:39:52 starting bug reproductions 2025/08/20 20:39:52 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/20 20:39:52 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 22575, "comps overflows": 0, "corpus": 45267, "corpus [files]": 123, "corpus [symbols]": 48, "cover overflows": 47364, "coverage": 314787, "distributor delayed": 63176, "distributor undelayed": 63176, "distributor violated": 771, "exec candidate": 58575, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 121213, "exec total [new]": 316344, "exec triage": 143041, "executor restarts [base]": 470, "executor restarts [new]": 1088, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 318091, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46105, "no exec duration": 53671000000, "no exec requests": 377, "pending": 45, "prog exec time": 249, "reproducing": 0, "rpc recv": 11799070488, "rpc sent": 2115076512, "signal": 309697, "smash jobs": 0, "triage jobs": 0, "vm output": 43044021, "vm restarts [base]": 75, "vm restarts [new]": 177 } 2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "KASAN: slab-use-after-free Read in __xfrm_state_lookup" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "KASAN: slab-use-after-free Read in xfrm_state_find" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 start reproducing 'INFO: task hung in __iterate_supers'
2025/08/20 20:39:52 start reproducing 'WARNING in dbAdjTree'
2025/08/20 20:39:52 failed to recv *flatrpc.InfoRequestRawT: EOF
2025/08/20 20:39:52 start reproducing 'possible deadlock in attr_data_get_block'
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/20 20:39:52 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed
2025/08/20 20:39:52 start reproducing 'general protection fault in __xfrm_state_insert'
2025/08/20 20:39:52 start reproducing 'KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb'
2025/08/20 20:39:52 start reproducing 'possible deadlock in ntfs_fiemap'
2025/08/20 20:39:52 start reproducing 'possible deadlock in ocfs2_setattr'
2025/08/20 20:40:07 runner 0 connected
2025/08/20 20:40:08 runner 3 connected
2025/08/20 20:41:09 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:41:27 base crash: WARNING in xfrm_state_fini
2025/08/20 20:42:05 base: boot error: can't ssh into the instance
2025/08/20 20:42:18 runner 0 connected
2025/08/20 20:42:56 runner 2 connected
2025/08/20 20:43:03 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:44:52 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 22575, "comps overflows": 0, "corpus": 45267, "corpus [files]": 123, "corpus [symbols]": 48, "cover overflows": 47364, "coverage": 314787, "distributor delayed": 63176, "distributor undelayed": 63176, "distributor violated": 771, "exec candidate": 58575, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 125316, "exec total [new]": 316344, "exec triage": 143041, "executor restarts [base]": 492, "executor restarts [new]": 1088, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 318091, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46105, "no exec duration": 53671000000, "no exec requests": 377, "pending": 4, "prog exec time": 0, "reproducing": 7, "rpc recv": 11923716384, "rpc sent": 2130643336, "signal": 309697, "smash jobs": 0, "triage jobs": 0, "vm output": 46649740, "vm restarts [base]": 79, "vm restarts [new]": 177 }
2025/08/20 20:45:13 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:45:23 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:45:41 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:46:01 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:46:10 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:46:10 repro finished 'WARNING in dbAdjTree', repro=true crepro=false desc='WARNING in dbAdjTree' hub=false from_dashboard=false
2025/08/20 20:46:10 found repro for "WARNING in dbAdjTree" (orig title: "-SAME-", reliability: 1), took 6.30 minutes
2025/08/20 20:46:10 start reproducing 'WARNING in dbAdjTree'
2025/08/20 20:46:10 "WARNING in dbAdjTree": saved crash log into 1755722770.crash.log
2025/08/20 20:46:10 "WARNING in dbAdjTree": saved repro log into 1755722770.repro.log
2025/08/20 20:46:25 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:46:25 repro finished 'possible deadlock in ocfs2_setattr', repro=true crepro=false desc='possible deadlock in ocfs2_setattr' hub=false from_dashboard=false
2025/08/20 20:46:25 found repro for "possible deadlock in ocfs2_setattr" (orig title: "-SAME-", reliability: 1), took 6.55 minutes
2025/08/20 20:46:25 start reproducing 'possible deadlock in ocfs2_setattr'
2025/08/20 20:46:25 "possible deadlock in ocfs2_setattr": saved crash log into 1755722785.crash.log
2025/08/20 20:46:25 "possible deadlock in ocfs2_setattr": saved repro log into 1755722785.repro.log
2025/08/20 20:46:55 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:47:24 attempt #0 to run "WARNING in dbAdjTree" on base: crashed with WARNING in dbAdjTree
2025/08/20 20:47:24 crashes both: WARNING in dbAdjTree / WARNING in dbAdjTree
2025/08/20 20:47:24 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:47:40 attempt #0 to run "possible deadlock in ocfs2_setattr" on base: crashed with possible deadlock in ocfs2_setattr
2025/08/20 20:47:40 crashes both: possible deadlock in ocfs2_setattr / possible deadlock in ocfs2_setattr
2025/08/20 20:47:56 base crash: no output from test machine
2025/08/20 20:48:05 runner 0 connected
2025/08/20 20:48:07 base crash: no output from test machine
2025/08/20 20:48:28 runner 1 connected
2025/08/20 20:48:45 runner 2 connected
2025/08/20 20:48:56 runner 3 connected
2025/08/20 20:49:52 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 22575, "comps overflows": 0, "corpus": 45267, "corpus [files]": 123, "corpus [symbols]": 48, "cover overflows": 47364, "coverage": 314787, "distributor delayed": 63176, "distributor undelayed": 63176, "distributor violated": 771, "exec candidate": 58575, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 125316, "exec total [new]": 316344, "exec triage": 143041, "executor restarts [base]": 492, "executor restarts [new]": 1088, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 318091, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46105, "no exec duration": 53671000000, "no exec requests": 377, "pending": 2, "prog exec time": 0, "reproducing": 7, "rpc recv": 12047300632, "rpc sent": 2130644456, "signal": 309697, "smash jobs": 0, "triage jobs": 0, "vm output": 51117895, "vm restarts [base]": 83, "vm restarts [new]": 177 }
2025/08/20 20:50:16 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:50:54 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:51:22 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:51:26 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 20:51:26 start reproducing 'possible deadlock in ntfs_fiemap'
2025/08/20 20:51:26 failed repro for "possible deadlock in ntfs_fiemap", err=%!s()
2025/08/20 20:51:26 "possible deadlock in ntfs_fiemap": saved crash log into 1755723086.crash.log
2025/08/20 20:51:26 "possible deadlock in ntfs_fiemap": saved repro log into 1755723086.repro.log
2025/08/20 20:52:28 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:52:28 repro finished 'WARNING in dbAdjTree', repro=true crepro=false desc='WARNING in dbAdjTree' hub=false from_dashboard=false
2025/08/20 20:52:28 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed
2025/08/20 20:52:28 found repro for "WARNING in dbAdjTree" (orig title: "-SAME-", reliability: 1), took 6.20 minutes
2025/08/20 20:52:28 "WARNING in dbAdjTree": saved crash log into 1755723148.crash.log
2025/08/20 20:52:28 "WARNING in dbAdjTree": saved repro log into 1755723148.repro.log
2025/08/20 20:52:30 repro finished 'possible deadlock in attr_data_get_block', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 20:52:30 failed repro for "possible deadlock in attr_data_get_block", err=%!s()
2025/08/20 20:52:30 "possible deadlock in attr_data_get_block": saved crash log into 1755723150.crash.log
2025/08/20 20:52:30 "possible deadlock in attr_data_get_block": saved repro log into 1755723150.repro.log
2025/08/20 20:53:03 repro finished 'general protection fault in __xfrm_state_insert', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 20:53:03 failed repro for "general protection fault in __xfrm_state_insert", err=%!s()
2025/08/20 20:53:03 "general protection fault in __xfrm_state_insert": saved crash log into 1755723183.crash.log
2025/08/20 20:53:03 "general protection fault in __xfrm_state_insert": saved repro log into 1755723183.repro.log
2025/08/20 20:53:28 base crash: no output from test machine
2025/08/20 20:53:32 attempt #0 to run "WARNING in dbAdjTree" on base: crashed with WARNING in dbAdjTree
2025/08/20 20:53:32 crashes both: WARNING in dbAdjTree / WARNING in dbAdjTree
2025/08/20 20:53:44 base crash: no output from test machine
2025/08/20 20:53:44 runner 2 connected
2025/08/20 20:53:55 base crash: no output from test machine
2025/08/20 20:54:12 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:54:16 runner 1 connected
2025/08/20 20:54:21 runner 0 connected
2025/08/20 20:54:34 runner 2 connected
2025/08/20 20:54:44 runner 3 connected
2025/08/20 20:54:52 STAT { "buffer too small": 0, "candidate triage jobs": 15, "candidates": 22029, "comps overflows": 0, "corpus": 45267, "corpus [files]": 123, "corpus [symbols]": 48, "cover overflows": 47464, "coverage": 314787, "distributor delayed": 63191, "distributor undelayed": 63176, "distributor violated": 771, "exec candidate": 59121, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 125828, "exec total [new]": 316893, "exec triage": 143041, "executor restarts [base]": 503, "executor restarts [new]": 1093, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 318110, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 14, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46118, "no exec duration": 54470000000, "no exec requests": 386, "pending": 0, "prog exec time": 298, "reproducing": 4, "rpc recv": 12202076088, "rpc sent": 2136435968, "signal": 309697, "smash jobs": 0, "triage jobs": 0, "vm output": 54501138, "vm restarts [base]": 87, "vm restarts [new]": 178 }
2025/08/20 20:55:02 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:55:06 runner 0 connected
2025/08/20 20:56:07 new: boot error: can't ssh into the instance
2025/08/20 20:56:08 base crash: lost connection to test machine
2025/08/20 20:56:18 reproducing crash 'possible deadlock in ocfs2_setattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/file.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 20:56:18 repro finished 'possible deadlock in ocfs2_setattr', repro=true crepro=false desc='possible deadlock in ocfs2_setattr' hub=false from_dashboard=false
2025/08/20 20:56:18 found repro for "possible deadlock in ocfs2_setattr" (orig title: "-SAME-", reliability: 1), took 9.88 minutes
2025/08/20 20:56:18 "possible deadlock in ocfs2_setattr": saved crash log into 1755723378.crash.log
2025/08/20 20:56:18 "possible deadlock in ocfs2_setattr": saved repro log into 1755723378.repro.log
2025/08/20 20:56:55 runner 5 connected
2025/08/20 20:56:57 runner 1 connected
2025/08/20 20:57:06 runner 4 connected
2025/08/20 20:57:10 new: boot error: can't ssh into the instance
2025/08/20 20:57:30 new: boot error: can't ssh into the instance
2025/08/20 20:57:42 attempt #0 to run "possible deadlock in ocfs2_setattr" on base: crashed with possible deadlock in ocfs2_setattr
2025/08/20 20:57:42 crashes both: possible deadlock in ocfs2_setattr / possible deadlock in ocfs2_setattr
2025/08/20 20:57:59 runner 1 connected
2025/08/20 20:58:32 runner 0 connected
2025/08/20 20:59:52 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 12333, "comps overflows": 0, "corpus": 45363, "corpus [files]": 123, "corpus [symbols]": 48, "cover overflows": 49336, "coverage": 314967, "distributor delayed": 63324, "distributor undelayed": 63324, "distributor violated": 790, "exec candidate": 68817, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 134170, "exec total [new]": 326974, "exec triage": 143417, "executor restarts [base]": 537, "executor restarts [new]": 1134, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 318293, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 14, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46215, "no exec duration": 438316000000, "no exec requests": 1601, "pending": 0, "prog exec time": 216, "reproducing": 3, "rpc recv": 12408158092, "rpc sent": 2219681720, "signal": 309873, "smash jobs": 0, "triage jobs": 0, "vm output": 57824910, "vm restarts [base]": 89, "vm restarts [new]": 182 }
2025/08/20 21:00:49 base crash: kernel BUG in txUnlock
2025/08/20 21:01:22 triaged 91.5% of the corpus
2025/08/20 21:01:39 runner 3 connected
2025/08/20 21:02:34 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:02:36 new: boot error: can't ssh into the instance
2025/08/20 21:03:23 runner 0 connected
2025/08/20 21:03:25 runner 3 connected
2025/08/20 21:03:42 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 21:03:42 failed repro for "possible deadlock in ntfs_fiemap", err=%!s()
2025/08/20 21:03:42 "possible deadlock in ntfs_fiemap": saved crash log into 1755723822.crash.log
2025/08/20 21:03:42 "possible deadlock in ntfs_fiemap": saved repro log into 1755723822.repro.log
2025/08/20 21:04:39 runner 6 connected
2025/08/20 21:04:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 6, "corpus": 45467, "corpus [files]": 123, "corpus [symbols]": 48, "cover overflows": 53420, "coverage": 315166, "distributor delayed": 63642, "distributor undelayed": 63642, "distributor violated": 792, "exec candidate": 81150, "exec collide": 666, "exec fuzz": 1355, "exec gen": 65, "exec hints": 31, "exec inject": 0, "exec minimize": 142, "exec retries": 21, "exec seeds": 20, "exec smash": 150, "exec total [base]": 148205, "exec total [new]": 342451, "exec triage": 144125, "executor restarts [base]": 558, "executor restarts [new]": 1176, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 1, "max signal": 318828, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 112, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46406, "no exec duration": 438515000000, "no exec requests": 1605, "pending": 0, "prog exec time": 491, "reproducing": 2, "rpc recv": 12573191292, "rpc sent": 2366801824, "signal": 310071, "smash jobs": 3, "triage jobs": 7, "vm output": 
59997318, "vm restarts [base]": 90, "vm restarts [new]": 185 } 2025/08/20 21:05:01 patched crashed: lost connection to test machine [need repro = false] 2025/08/20 21:05:02 patched crashed: lost connection to test machine [need repro = false] 2025/08/20 21:05:13 repro finished 'KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/20 21:05:13 failed repro for "KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb", err=%!s() 2025/08/20 21:05:13 "KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb": saved crash log into 1755723913.crash.log 2025/08/20 21:05:13 "KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb": saved repro log into 1755723913.repro.log 2025/08/20 21:06:00 runner 1 connected 2025/08/20 21:06:00 runner 0 connected 2025/08/20 21:06:08 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/08/20 21:06:09 runner 7 connected 2025/08/20 21:06:16 base crash: lost connection to test machine 2025/08/20 21:06:46 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/20 21:07:12 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/20 21:07:14 runner 3 connected 2025/08/20 21:08:02 runner 2 connected 2025/08/20 21:08:04 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/20 21:08:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/20 21:08:41 base crash: lost connection to test machine 2025/08/20 21:08:44 base crash: possible deadlock in ocfs2_init_acl 2025/08/20 21:08:53 runner 1 connected 2025/08/20 21:09:13 runner 7 connected 2025/08/20 21:09:30 runner 2 connected 2025/08/20 21:09:52 STAT { "buffer too small": 2, "candidate 
triage jobs": 0, "candidates": 0, "comps overflows": 62, "corpus": 45507, "corpus [files]": 126, "corpus [symbols]": 51, "cover overflows": 58783, "coverage": 315216, "distributor delayed": 63803, "distributor undelayed": 63803, "distributor violated": 792, "exec candidate": 81150, "exec collide": 2875, "exec fuzz": 5561, "exec gen": 304, "exec hints": 620, "exec inject": 0, "exec minimize": 1227, "exec retries": 21, "exec seeds": 140, "exec smash": 1077, "exec total [base]": 154067, "exec total [new]": 352170, "exec triage": 144470, "executor restarts [base]": 599, "executor restarts [new]": 1252, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 3, "max signal": 319092, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 795, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46521, "no exec duration": 450988000000, "no exec requests": 1623, "pending": 0, "prog exec time": 673, "reproducing": 1, "rpc recv": 12882145108, "rpc sent": 2579630944, "signal": 310123, "smash jobs": 5, "triage jobs": 8, "vm output": 62411403, "vm restarts [base]": 94, "vm restarts [new]": 189 } 2025/08/20 21:10:11 new: boot error: can't ssh into the instance 2025/08/20 21:10:25 patched crashed: lost connection to test machine [need repro = false] 2025/08/20 21:10:44 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/20 21:11:23 runner 1 connected 2025/08/20 21:11:51 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/08/20 21:12:41 runner 0 connected 2025/08/20 21:13:49 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 
2025/08/20 21:14:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 21:14:33 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/20 21:14:38 runner 6 connected
2025/08/20 21:14:51 runner 1 connected
2025/08/20 21:14:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 102, "corpus": 45538, "corpus [files]": 126, "corpus [symbols]": 51, "cover overflows": 63365, "coverage": 315279, "distributor delayed": 63992, "distributor undelayed": 63992, "distributor violated": 792, "exec candidate": 81150, "exec collide": 5017, "exec fuzz": 9746, "exec gen": 505, "exec hints": 1225, "exec inject": 0, "exec minimize": 1882, "exec retries": 21, "exec seeds": 232, "exec smash": 1758, "exec total [base]": 159030, "exec total [new]": 361046, "exec triage": 144785, "executor restarts [base]": 639, "executor restarts [new]": 1335, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 5, "max signal": 319349, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1147, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46629, "no exec duration": 461469000000, "no exec requests": 1639, "pending": 0, "prog exec time": 560, "reproducing": 1, "rpc recv": 13025517192, "rpc sent": 2779911408, "signal": 310173, "smash jobs": 4, "triage jobs": 9, "vm output": 64571950, "vm restarts [base]": 94, "vm restarts [new]": 193 }
2025/08/20 21:15:07 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 21:15:22 runner 0 connected
2025/08/20 21:15:44 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 21:15:57 runner 4 connected
2025/08/20 21:16:14 new: boot error: can't ssh into the instance
2025/08/20 21:16:40 runner 7 connected
2025/08/20 21:17:04 runner 5 connected
2025/08/20 21:17:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 21:17:35 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 21:17:36 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48226: connect: connection refused
2025/08/20 21:17:36 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48226: connect: connection refused
2025/08/20 21:17:46 base crash: lost connection to test machine
2025/08/20 21:17:49 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 21:18:08 runner 0 connected
2025/08/20 21:18:24 runner 2 connected
2025/08/20 21:18:35 runner 1 connected
2025/08/20 21:18:39 runner 7 connected
2025/08/20 21:18:50 base: boot error: can't ssh into the instance
2025/08/20 21:18:59 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4909: connect: connection refused
2025/08/20 21:18:59 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4909: connect: connection refused
2025/08/20 21:19:09 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:19:19 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:19:38 patched crashed: INFO: task hung in __iterate_supers [need repro = true]
2025/08/20 21:19:38 scheduled a reproduction of 'INFO: task hung in __iterate_supers'
2025/08/20 21:19:45 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:19:46 runner 3 connected
2025/08/20 21:19:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 113, "corpus": 45560, "corpus [files]": 127, "corpus [symbols]": 51, "cover overflows": 67108, "coverage": 315328, "distributor delayed": 64117, "distributor undelayed": 64117, "distributor violated": 792, "exec candidate": 81150, "exec collide": 6974, "exec fuzz": 13400, "exec gen": 701, "exec hints": 1941, "exec inject": 0, "exec minimize": 2619, "exec retries": 23, "exec seeds": 292, "exec smash": 2230, "exec total [base]": 161964, "exec total [new]": 369099, "exec triage": 145045, "executor restarts [base]": 702, "executor restarts [new]": 1444, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 319548, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1701, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46717, "no exec duration": 465692000000, "no exec requests": 1646, "pending": 1, "prog exec time": 557, "reproducing": 1, "rpc recv": 13348310924, "rpc sent": 2927316144, "signal": 310208, "smash jobs": 7, "triage jobs": 9, "vm output": 67153798, "vm restarts [base]": 97, "vm restarts [new]": 199 }
2025/08/20 21:19:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:21045: connect: connection refused
2025/08/20 21:19:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:21045: connect: connection refused
2025/08/20 21:20:02 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34378: connect: connection refused
2025/08/20 21:20:02 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34378: connect: connection refused
2025/08/20 21:20:05 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4102: connect: connection refused
2025/08/20 21:20:05 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4102: connect: connection refused
2025/08/20 21:20:05 runner 0 connected
2025/08/20 21:20:08 base crash: lost connection to test machine
2025/08/20 21:20:12 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:23102: connect: connection refused
2025/08/20 21:20:12 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:23102: connect: connection refused
2025/08/20 21:20:12 patched crashed: general protection fault in xfrm_alloc_spi [need repro = true]
2025/08/20 21:20:12 scheduled a reproduction of 'general protection fault in xfrm_alloc_spi'
2025/08/20 21:20:12 start reproducing 'general protection fault in xfrm_alloc_spi'
2025/08/20 21:20:12 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:20:15 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:20:22 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:20:22 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60344: connect: connection refused
2025/08/20 21:20:22 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60344: connect: connection refused
2025/08/20 21:20:31 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13651: connect: connection refused
2025/08/20 21:20:31 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13651: connect: connection refused
2025/08/20 21:20:32 base crash: lost connection to test machine
2025/08/20 21:20:33 runner 1 connected
2025/08/20 21:20:41 base crash: lost connection to test machine
2025/08/20 21:20:57 runner 1 connected
2025/08/20 21:20:57 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57624: connect: connection refused
2025/08/20 21:20:57 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57624: connect: connection refused
2025/08/20 21:21:02 runner 3 connected
2025/08/20 21:21:04 runner 7 connected
2025/08/20 21:21:07 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:21:11 runner 2 connected
2025/08/20 21:21:20 runner 2 connected
2025/08/20 21:21:22 base crash: INFO: task hung in __iterate_supers
2025/08/20 21:21:29 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60259: connect: connection refused
2025/08/20 21:21:29 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60259: connect: connection refused
2025/08/20 21:21:30 runner 3 connected
2025/08/20 21:21:39 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:21:53 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:41522: connect: connection refused
2025/08/20 21:21:53 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:41522: connect: connection refused
2025/08/20 21:21:55 runner 1 connected
2025/08/20 21:21:56 patched crashed: INFO: task hung in __iterate_supers [need repro = false]
2025/08/20 21:22:03 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:22:10 runner 0 connected
2025/08/20 21:22:18 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:35262: connect: connection refused
2025/08/20 21:22:18 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:35262: connect: connection refused
2025/08/20 21:22:25 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:22:27 runner 3 connected
2025/08/20 21:22:28 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:22:34 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4679: connect: connection refused
2025/08/20 21:22:34 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4679: connect: connection refused
2025/08/20 21:22:35 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20829: connect: connection refused
2025/08/20 21:22:35 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20829: connect: connection refused
2025/08/20 21:22:44 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39398: connect: connection refused
2025/08/20 21:22:44 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39398: connect: connection refused
2025/08/20 21:22:44 base crash: lost connection to test machine
2025/08/20 21:22:45 base crash: lost connection to test machine
2025/08/20 21:22:45 runner 4 connected
2025/08/20 21:22:52 runner 7 connected
2025/08/20 21:22:54 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:23:17 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50536: connect: connection refused
2025/08/20 21:23:17 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50536: connect: connection refused
2025/08/20 21:23:18 runner 2 connected
2025/08/20 21:23:27 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:23:28 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:35333: connect: connection refused
2025/08/20 21:23:28 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:35333: connect: connection refused
2025/08/20 21:23:33 runner 3 connected
2025/08/20 21:23:34 runner 2 connected
2025/08/20 21:23:38 base crash: lost connection to test machine
2025/08/20 21:23:42 runner 1 connected
2025/08/20 21:23:50 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24847: connect: connection refused
2025/08/20 21:23:50 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24847: connect: connection refused
2025/08/20 21:23:52 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9363: connect: connection refused
2025/08/20 21:23:52 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9363: connect: connection refused
2025/08/20 21:24:00 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:24:02 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:24:05 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19378: connect: connection refused
2025/08/20 21:24:05 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19378: connect: connection refused
2025/08/20 21:24:05 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:26663: connect: connection refused
2025/08/20 21:24:05 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:26663: connect: connection refused
2025/08/20 21:24:08 runner 3 connected
2025/08/20 21:24:15 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:24:15 base crash: lost connection to test machine
2025/08/20 21:24:17 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:7710: connect: connection refused
2025/08/20 21:24:17 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:7710: connect: connection refused
2025/08/20 21:24:27 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:24:27 runner 1 connected
2025/08/20 21:24:31 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:56644: connect: connection refused
2025/08/20 21:24:31 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:56644: connect: connection refused
2025/08/20 21:24:40 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:30085: connect: connection refused
2025/08/20 21:24:40 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:30085: connect: connection refused
2025/08/20 21:24:41 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:24:44 runner 2 connected
2025/08/20 21:24:49 runner 4 connected
2025/08/20 21:24:50 base crash: lost connection to test machine
2025/08/20 21:24:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 124, "corpus": 45570, "corpus [files]": 128, "corpus [symbols]": 52, "cover overflows": 67879, "coverage": 315340, "distributor delayed": 64180, "distributor undelayed": 64179, "distributor violated": 792, "exec candidate": 81150, "exec collide": 7343, "exec fuzz": 14081, "exec gen": 754, "exec hints": 2473, "exec inject": 0, "exec minimize": 2866, "exec retries": 23, "exec seeds": 317, "exec smash": 2395, "exec total [base]": 164701, "exec total [new]": 371237, "exec triage": 145111, "executor restarts [base]": 771, "executor restarts [new]": 1500, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 10, "max signal": 319573, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1858, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46735, "no exec duration": 465692000000, "no exec requests": 1646, "pending": 1, "prog exec time": 0, "reproducing": 2, "rpc recv": 14017856280, "rpc sent": 3029661168, "signal": 310219, "smash jobs": 3, "triage jobs": 4, "vm output": 68599915, "vm restarts [base]": 104, "vm restarts [new]": 213 }
2025/08/20 21:24:52 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33870: connect: connection refused
2025/08/20 21:24:52 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33870: connect: connection refused
2025/08/20 21:24:56 runner 0 connected
2025/08/20 21:24:57 runner 1 connected
2025/08/20 21:25:02 base crash: lost connection to test machine
2025/08/20 21:25:08 runner 7 connected
2025/08/20 21:25:09 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62263: connect: connection refused
2025/08/20 21:25:09 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62263: connect: connection refused
2025/08/20 21:25:19 base crash: lost connection to test machine
2025/08/20 21:25:30 runner 3 connected
2025/08/20 21:25:39 runner 3 connected
2025/08/20 21:25:50 runner 2 connected
2025/08/20 21:26:10 runner 1 connected
2025/08/20 21:26:11 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57645: connect: connection refused
2025/08/20 21:26:11 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57645: connect: connection refused
2025/08/20 21:26:21 base crash: lost connection to test machine
2025/08/20 21:26:21 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:26:23 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:26:24 base crash: kernel BUG in jfs_evict_inode
2025/08/20 21:26:52 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 21:27:10 runner 3 connected
2025/08/20 21:27:10 runner 2 connected
2025/08/20 21:27:14 runner 2 connected
2025/08/20 21:27:42 runner 1 connected
2025/08/20 21:28:01 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:18702: connect: connection refused
2025/08/20 21:28:01 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:18702: connect: connection refused
2025/08/20 21:28:04 base crash: WARNING in xfrm_state_fini
2025/08/20 21:28:11 base crash: lost connection to test machine
2025/08/20 21:29:00 runner 2 connected
2025/08/20 21:29:08 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/20 21:29:43 new: boot error: can't ssh into the instance
2025/08/20 21:29:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 142, "corpus": 45585, "corpus [files]": 129, "corpus [symbols]": 52, "cover overflows": 69646, "coverage": 315357, "distributor delayed": 64237, "distributor undelayed": 64231, "distributor violated": 792, "exec candidate": 81150, "exec collide": 8269, "exec fuzz": 15875, "exec gen": 853, "exec hints": 3531, "exec inject": 0, "exec minimize": 3302, "exec retries": 23, "exec seeds": 360, "exec smash": 2701, "exec total [base]": 167920, "exec total [new]": 375998, "exec triage": 145201, "executor restarts [base]": 830, "executor restarts [new]": 1590, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 8, "max signal": 319632, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2217, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46772, "no exec duration": 466288000000, "no exec requests": 1651, "pending": 1, "prog exec time": 549, "reproducing": 2, "rpc recv": 14446991960, "rpc sent": 3150058968, "signal": 310235, "smash jobs": 5, "triage jobs": 9, "vm output": 73527284, "vm restarts [base]": 112, "vm restarts [new]": 217 }
2025/08/20 21:30:05 runner 3 connected
2025/08/20 21:30:17 new: boot error: can't ssh into the instance
2025/08/20 21:30:33 runner 6 connected
2025/08/20 21:31:22 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4249: connect: connection refused
2025/08/20 21:31:22 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4249: connect: connection refused
2025/08/20 21:31:22 reproducing crash 'INFO: task hung in __iterate_supers': failed to
symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:31:30 repro finished 'general protection fault in xfrm_alloc_spi', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 21:31:30 failed repro for "general protection fault in xfrm_alloc_spi", err=%!s()
2025/08/20 21:31:30 "general protection fault in xfrm_alloc_spi": saved crash log into 1755725490.crash.log
2025/08/20 21:31:30 "general protection fault in xfrm_alloc_spi": saved repro log into 1755725490.repro.log
2025/08/20 21:31:32 base crash: lost connection to test machine
2025/08/20 21:32:12 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 21:32:21 runner 2 connected
2025/08/20 21:33:03 runner 7 connected
2025/08/20 21:33:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 21:34:24 runner 6 connected
2025/08/20 21:34:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 21:34:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 183, "corpus": 45623, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 71972, "coverage": 315408, "distributor delayed": 64343, "distributor undelayed": 64342, "distributor violated": 792, "exec candidate": 81150, "exec collide": 9168, "exec fuzz": 17626, "exec gen": 944, "exec hints": 4842, "exec inject": 0, "exec minimize": 3879, "exec retries": 23, "exec seeds": 469, "exec smash": 3512, "exec total [base]": 173213, "exec total [new]": 381728, "exec triage": 145382, "executor restarts [base]": 866, "executor restarts [new]": 1691, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 319731, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2602, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46837, "no exec duration": 469288000000, "no exec requests": 1654, "pending": 1, "prog exec time": 522, "reproducing": 1, "rpc recv": 14648264360, "rpc sent": 3305835736, "signal": 310280, "smash jobs": 2, "triage jobs": 7, "vm output": 77358053, "vm restarts [base]": 113, "vm restarts [new]": 221 }
2025/08/20 21:35:34 runner 7 connected
2025/08/20 21:35:43 base crash: INFO: task hung in user_get_super
2025/08/20 21:36:09 new: boot error: can't ssh into the instance
2025/08/20 21:36:28 new: boot error: can't ssh into the instance
2025/08/20 21:36:34 runner 0 connected
2025/08/20 21:37:19 runner 4 connected
2025/08/20 21:38:10 base: boot error: can't ssh into the instance
2025/08/20 21:38:25 base crash: KASAN: slab-use-after-free Read in xfrm_state_find
2025/08/20 21:38:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 21:38:59 runner 3 connected
2025/08/20 21:39:13 runner 1 connected
2025/08/20 21:39:41 base crash: possible deadlock in ocfs2_init_acl
2025/08/20 21:39:48 runner 3 connected
2025/08/20 21:39:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 188, "corpus": 45637, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 74971, "coverage": 315429, "distributor delayed": 64437, "distributor undelayed": 64437, "distributor violated": 792, "exec candidate": 81150, "exec collide": 10681, "exec fuzz": 20537, "exec gen": 1106, "exec hints": 6034, "exec inject": 0, "exec minimize": 4593, "exec retries": 24, "exec seeds": 505, "exec smash": 3813, "exec total [base]": 177588, "exec total [new]": 388720, "exec triage": 145540, "executor restarts [base]": 925, "executor restarts [new]": 1807, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 5, "max signal": 320031, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3143, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46890, "no exec duration": 481186000000, "no exec requests": 1675, "pending": 1, "prog exec time": 518, "reproducing": 1, "rpc recv": 14835623352, "rpc sent": 3470643200, "signal": 310301, "smash jobs": 1, "triage jobs": 4, "vm output": 80382278, "vm restarts [base]": 116, "vm restarts [new]": 224 }
2025/08/20 21:40:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 21:40:19 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:40:23 new: boot error: can't ssh into the instance
2025/08/20 21:40:31 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/20 21:40:31 runner 3 connected
2025/08/20 21:40:32 patched crashed: lost connection to test machine [need repro = false]
2025/08/20 21:40:43 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/20 21:40:58 runner 7 connected
2025/08/20 21:41:12 runner 5 connected
2025/08/20 21:41:20 runner 1 connected
2025/08/20 21:41:23 runner 2 connected
2025/08/20 21:41:32 runner 4 connected
2025/08/20 21:43:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 21:43:21 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:43:54 runner 6 connected
2025/08/20 21:44:20 patched crashed: possible deadlock in kernfs_remove [need repro = true]
2025/08/20 21:44:20 scheduled a reproduction of 'possible deadlock in kernfs_remove'
2025/08/20 21:44:20 start reproducing 'possible deadlock in kernfs_remove'
2025/08/20 21:44:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 208, "corpus": 45653, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 78174, "coverage": 315478, "distributor delayed": 64523, "distributor undelayed": 64523, "distributor violated": 792, "exec candidate": 81150, "exec collide": 12321, "exec fuzz": 23694, "exec gen": 1283, "exec hints": 6559, "exec inject": 0, "exec minimize": 5057, "exec retries": 24, "exec seeds": 548, "exec smash": 4203, "exec total [base]": 182431, "exec total [new]": 395294, "exec triage": 145715, "executor restarts [base]": 1001, "executor restarts [new]": 1933, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 4, "max signal": 320240, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3483, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46954, "no exec duration": 482042000000, "no exec requests": 1676, "pending": 1, "prog exec time": 722, "reproducing": 2, "rpc recv": 15123212188, "rpc sent": 3637145752, "signal": 310348, "smash jobs": 1, "triage jobs": 10, "vm output": 84016246, "vm restarts [base]": 118, "vm restarts [new]": 229 }
2025/08/20 21:44:55 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:45:01 patched crashed: general protection fault in lmLogSync [need repro = true]
2025/08/20 21:45:01 scheduled a reproduction of 'general protection fault in lmLogSync'
2025/08/20 21:45:01 start reproducing 'general protection fault in lmLogSync'
2025/08/20 21:45:16 runner 6 connected
2025/08/20 21:45:24 base crash: possible deadlock in ocfs2_xattr_set
2025/08/20 21:45:58 runner 7 connected
2025/08/20 21:46:13 runner 0 connected
2025/08/20 21:46:15 new: boot error: can't ssh into the instance
2025/08/20 21:46:22 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:47:31 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/20 21:47:39 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:48:01 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/20 21:48:06 base crash: lost connection to test machine
2025/08/20 21:48:09 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:48:28 runner 4 connected
2025/08/20 21:48:50 runner 7 connected
2025/08/20 21:48:55 runner 1 connected
2025/08/20 21:48:56 base crash: INFO: task hung in __iterate_supers
2025/08/20 21:49:07 base crash: general protection fault in pcl818_ai_cancel
2025/08/20 21:49:14 patched crashed: WARNING in __udf_add_aext [need repro = true]
2025/08/20 21:49:14 scheduled a reproduction of 'WARNING in __udf_add_aext'
2025/08/20 21:49:14 start reproducing 'WARNING in __udf_add_aext'
2025/08/20 21:49:15 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:49:40 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/20 21:49:45 runner 2 connected
2025/08/20 21:49:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 213, "corpus": 45666, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 80950, "coverage": 315497, "distributor delayed": 64622, "distributor undelayed": 64620, "distributor violated": 792, "exec candidate": 81150, "exec collide": 13940, "exec fuzz": 26597, "exec gen": 1440, "exec hints": 6841, "exec inject": 0, "exec minimize": 5560, "exec retries": 24, "exec seeds": 581, "exec smash": 4450, "exec total [base]": 186377, "exec total [new]": 401187, "exec triage": 145865, "executor restarts [base]": 1055, "executor restarts [new]": 2043, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 1, "max signal": 320362, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3860, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47010, "no exec duration": 485051000000, "no exec requests": 1680, "pending": 1, "prog exec time": 608, "reproducing": 4, "rpc recv": 15334897148, "rpc sent": 3782855360, "signal": 310366, "smash jobs": 0, "triage jobs": 6, "vm output": 86900593, "vm restarts [base]": 121, "vm restarts [new]": 233 }
2025/08/20 21:50:04 runner 3 connected
2025/08/20 21:50:30 runner 5 connected
2025/08/20 21:50:34 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:50:44 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:51:29 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 21:51:50 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/20 21:52:04 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:52:07 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:52:19 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:52:20 runner 1 connected
2025/08/20 21:52:39 runner 0 connected
2025/08/20 21:53:23 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:53:39 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:54:44 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:54:52 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 222, "corpus": 45672, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 83303, "coverage": 315521, "distributor delayed": 64676, "distributor undelayed": 64676, "distributor violated": 793, "exec candidate": 81150, "exec collide": 15124, "exec fuzz": 28929, "exec gen": 1539, "exec hints": 7184, "exec inject": 0, "exec minimize": 5740, "exec retries": 24, "exec seeds": 605, "exec smash": 4595, "exec total [base]": 191412, "exec total [new]": 405576, "exec triage": 145945, "executor restarts [base]": 1140, "executor restarts [new]": 2119, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 3, "max signal": 320426, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3996, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47038, "no exec duration": 485051000000, "no exec requests": 1680, "pending": 1, "prog exec time": 733, "reproducing": 4, "rpc recv": 15507192424, "rpc sent": 3918451744, "signal": 310376, "smash jobs": 2, "triage jobs": 3, "vm output": 89251848, "vm restarts [base]": 124, "vm restarts [new]": 234 }
2025/08/20 21:56:04 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:56:18 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:56:26 patched crashed: kernel BUG in close_ctree [need repro = true]
2025/08/20 21:56:26 scheduled a reproduction of 'kernel BUG in close_ctree'
2025/08/20 21:56:26 start reproducing 'kernel BUG in close_ctree'
2025/08/20 21:57:06 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false]
2025/08/20 21:57:14 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:57:17 runner 7 connected
2025/08/20 21:57:33 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:57:54 runner 6 connected
2025/08/20 21:58:00 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:58:15 new: boot error: can't ssh into the instance
2025/08/20 21:59:06 base crash: lost connection to test machine
2025/08/20 21:59:19 new: boot error: can't ssh into the instance
2025/08/20 21:59:21 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:59:32 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 21:59:52 STAT { "buffer too small": 4, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 237, "corpus": 45685, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 84888, "coverage": 315542, "distributor delayed": 64753, "distributor undelayed": 64753, "distributor violated": 800, "exec candidate": 81150, "exec collide": 15813, "exec fuzz": 30214, "exec gen": 1610, "exec hints": 7623, "exec inject": 0, "exec minimize": 6285, "exec retries": 25, "exec seeds": 635, "exec smash": 4845, "exec total [base]": 196749, "exec total [new]": 409015, "exec triage": 146078, "executor restarts [base]": 1222, "executor restarts [new]": 2180, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 3, "max signal": 320522, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4391, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47086, "no exec duration": 486677000000, "no exec requests": 1683, "pending": 1, "prog exec time": 856, "reproducing": 5, "rpc recv": 15600546736, "rpc sent": 4040124632, "signal": 310397, "smash jobs": 3, "triage jobs": 14, "vm output": 91805222, "vm restarts [base]": 124, "vm restarts [new]": 236 }
2025/08/20 21:59:56 runner 1 connected
2025/08/20 22:00:13 base crash: unregister_netdevice: waiting for DEV to become free
2025/08/20 22:00:35 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:00:39 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:00:51 patched crashed: kernel BUG in may_open [need repro = true]
2025/08/20 22:00:51 scheduled a reproduction of 'kernel BUG in may_open'
2025/08/20 22:00:51 start reproducing 'kernel BUG in may_open'
2025/08/20 22:01:04 runner 2 connected
2025/08/20 22:01:16 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:01:36 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:01:41 runner 7 connected
2025/08/20 22:01:52 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:02:02 new: boot error: can't ssh into the instance
2025/08/20 22:02:09 base crash: unregister_netdevice: waiting for DEV to become free
2025/08/20 22:02:15 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:02:37 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:02:47 base crash: lost connection to test machine
2025/08/20 22:02:52 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:02:58 runner 3 connected
2025/08/20 22:03:06 base crash: kernel BUG in may_open
2025/08/20 22:03:19 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:03:36 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:03:36 runner 0 connected
2025/08/20 22:03:37 base crash: KASAN: slab-use-after-free Read in jfs_syncpt
2025/08/20 22:03:55 runner 1 connected
2025/08/20 22:03:56 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:04:25 runner 2 connected
2025/08/20 22:04:35 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:04:36 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:04:43 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:04:52 STAT { "buffer too small": 4, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 240, "corpus": 45696, "corpus [files]": 130, "corpus [symbols]": 52, "cover overflows": 85483, "coverage": 315587, "distributor delayed": 64800, "distributor undelayed": 64787, "distributor violated": 807, "exec candidate": 81150, "exec collide": 16048, "exec fuzz": 30722, "exec gen": 1630, "exec hints": 7849, "exec inject": 0, "exec minimize": 6673, "exec retries": 25, "exec seeds": 659, "exec smash": 5218, "exec total [base]": 199722, "exec total [new]": 410837, "exec triage": 146124, "executor restarts [base]": 1280, "executor restarts [new]": 2206, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 4, "max signal": 320560, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4671, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47114, "no exec duration": 486677000000, "no exec requests": 1683, "pending": 1, "prog exec time": 527, "reproducing": 6, "rpc recv": 15831219812, "rpc sent": 4099333072, "signal": 310428, "smash jobs": 2, "triage jobs": 20, "vm output": 93696008, "vm restarts [base]": 130, "vm restarts [new]": 237 }
2025/08/20 22:05:31 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:05:53 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/20 22:05:53 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:06:41 runner 6 connected
2025/08/20 22:06:41 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:06:46 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:07:57 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:07:58 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:08:27 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/quota/quota.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:09:03 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:09:12 reproducing crash 'WARNING in __udf_add_aext': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/udf/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:09:17 reproducing crash 'possible deadlock in kernfs_remove': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/dir.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:09:48 status reporting terminated
2025/08/20 22:09:48 repro finished 'possible deadlock in kernfs_remove', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 22:09:48 bug reporting terminated
2025/08/20 22:09:48 repro finished 'kernel BUG in close_ctree', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 22:09:48 repro finished 'WARNING in __udf_add_aext', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 22:09:48 syz-diff (base): kernel context loop terminated
2025/08/20 22:09:49 repro finished 'general protection fault in lmLogSync', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 22:10:04 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/20 22:10:04 repro finished 'kernel BUG in may_open', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 22:16:48 reproducing crash 'INFO: task hung in __iterate_supers': concatenation step failed with context deadline exceeded
2025/08/20 22:16:48 repro finished 'INFO: task hung in __iterate_supers', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/20 22:16:48 syz-diff (new): kernel context loop terminated
2025/08/20 22:16:48 diff fuzzing terminated
2025/08/20 22:16:48 fuzzing is finished
2025/08/20 22:16:48 status at the end:
Title On-Base On-Patched
INFO: task hung in __iterate_supers 2 crashes 3 crashes
INFO: task hung in bdev_open 1 crashes
INFO: task hung in user_get_super 1 crashes
KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings 1 crashes
KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb 1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 2 crashes 1 crashes
KASAN: slab-use-after-free Read in jfs_lazycommit 1 crashes
KASAN: slab-use-after-free Read in jfs_syncpt 1 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi 4 crashes 8 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 2 crashes 2 crashes
WARNING in __udf_add_aext 1 crashes
WARNING in dbAdjTree 2 crashes 3 crashes[reproduced]
WARNING in ext4_xattr_inode_lookup_create 1 crashes 2 crashes
WARNING in xfrm6_tunnel_net_exit 1 crashes 1 crashes
WARNING in xfrm_state_fini 9 crashes 14 crashes
WARNING: suspicious RCU usage in get_callchain_entry 5 crashes 11 crashes
general protection fault in __xfrm_state_insert 1 crashes
general protection fault in lmLogSync 1 crashes
general protection fault in pcl818_ai_cancel 2 crashes 3 crashes
general protection fault in xfrm_alloc_spi 1 crashes
kernel BUG in close_ctree 1 crashes
kernel BUG in jfs_evict_inode 1 crashes
kernel BUG in may_open 1 crashes 1 crashes
kernel BUG in txUnlock 4 crashes 4 crashes
lost connection to test machine 22 crashes 36 crashes
no output from test machine 6 crashes
possible deadlock in attr_data_get_block 1 crashes
possible deadlock in kernfs_remove 1 crashes
possible deadlock in ntfs_fiemap 2 crashes
possible deadlock in ocfs2_acquire_dquot 1 crashes
possible deadlock in ocfs2_init_acl 15 crashes 38 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 13 crashes 41 crashes
possible deadlock in ocfs2_setattr 2 crashes 2 crashes[reproduced]
possible deadlock in ocfs2_try_remove_refcount_tree 16 crashes 23 crashes
possible deadlock in ocfs2_xattr_set 5 crashes 17 crashes
unregister_netdevice: waiting for DEV to become free 5 crashes 3 crashes