2025/08/13 11:49:20 extracted 303683 symbol hashes for base and 303683 for patched
2025/08/13 11:49:20 adding modified_functions to focus areas: ["nvmet_execute_disc_identify"]
2025/08/13 11:49:20 adding directly modified files to focus areas: ["rust/kernel/alloc.rs" "rust/kernel/alloc/allocator.rs" "rust/kernel/alloc/kbox.rs" "rust/kernel/sync/arc.rs"]
2025/08/13 11:49:22 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/13 11:50:20 runner 1 connected
2025/08/13 11:50:20 runner 0 connected
2025/08/13 11:50:20 runner 0 connected
2025/08/13 11:50:20 runner 1 connected
2025/08/13 11:50:20 runner 8 connected
2025/08/13 11:50:20 runner 6 connected
2025/08/13 11:50:20 runner 2 connected
2025/08/13 11:50:21 runner 7 connected
2025/08/13 11:50:21 runner 9 connected
2025/08/13 11:50:21 runner 4 connected
2025/08/13 11:50:21 runner 3 connected
2025/08/13 11:50:27 executor cover filter: 0 PCs
2025/08/13 11:50:27 initializing coverage information...
2025/08/13 11:50:33 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open
/dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory 
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present 
write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] 
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3832/8048
2025/08/13 11:50:33 base: machine check complete
2025/08/13 11:50:33 discovered 7697 source files, 338543 symbols
2025/08/13 11:50:34 coverage filter: nvmet_execute_disc_identify: [nvmet_execute_disc_identify]
2025/08/13 11:50:34 coverage filter: rust/kernel/alloc.rs: []
2025/08/13 11:50:34 coverage filter: rust/kernel/alloc/allocator.rs: []
2025/08/13 11:50:34 coverage filter: rust/kernel/alloc/kbox.rs: []
2025/08/13 11:50:34 coverage filter: rust/kernel/sync/arc.rs: []
2025/08/13 11:50:34 area "symbols": 15 PCs in the cover filter
2025/08/13 11:50:34 area "files": 0 PCs in the cover filter
2025/08/13 11:50:34 area "": 0 PCs in the cover filter
2025/08/13 11:50:34 executor cover filter: 0 PCs
2025/08/13 11:50:37 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open
/sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled 
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory 
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] 
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : 
fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] 
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/13 11:50:37 new: machine check complete 2025/08/13 11:50:39 new: adding 78567 seeds 2025/08/13 11:51:39 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 11:51:47 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/13 11:51:47 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/13 11:52:28 runner 0 connected 2025/08/13 11:52:42 base crash: WARNING in xfrm_state_fini 2025/08/13 11:53:39 runner 1 connected 2025/08/13 11:54:07 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 11:54:23 STAT { "buffer too small": 0, "candidate triage jobs": 31, "candidates": 75287, "comps overflows": 0, "corpus": 3231, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 2247, "coverage": 146386, "distributor delayed": 4705, "distributor undelayed": 4703, "distributor violated": 10, "exec candidate": 3280, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 10304, "exec total [new]": 14874, "exec triage": 10315, "executor restarts": 95, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 147397, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3280, "no exec duration": 38659000000, "no exec requests": 312, "pending": 1, "prog exec time": 186, "reproducing": 0, "rpc recv": 722011324, "rpc sent": 73309432, "signal": 144588, "smash jobs": 0, "triage jobs": 0, "vm output": 1595313, "vm restarts [base]": 5, "vm restarts [new]": 8 } 2025/08/13 11:56:34 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/13 11:56:34 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/13 11:57:02 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 11:57:23 runner 8 connected 2025/08/13 11:57:51 runner 0 connected 2025/08/13 11:59:23 STAT { "buffer too small": 0, "candidate triage jobs": 67, "candidates": 71371, "comps overflows": 0, "corpus": 7077, "corpus [files]": 0, "corpus [symbols]": 
0, "cover overflows": 4869, "coverage": 183644, "distributor delayed": 12246, "distributor undelayed": 12214, "distributor violated": 102, "exec candidate": 7196, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24061, "exec total [new]": 33017, "exec triage": 22594, "executor restarts": 110, "fault jobs": 0, "fuzzer jobs": 67, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 185659, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 7196, "no exec duration": 38663000000, "no exec requests": 313, "pending": 2, "prog exec time": 485, "reproducing": 0, "rpc recv": 1104215476, "rpc sent": 173343440, "signal": 180715, "smash jobs": 0, "triage jobs": 0, "vm output": 3013082, "vm restarts [base]": 6, "vm restarts [new]": 9 } 2025/08/13 11:59:29 new: boot error: can't ssh into the instance 2025/08/13 11:59:29 new: boot error: can't ssh into the instance 2025/08/13 11:59:29 new: boot error: can't ssh into the instance 2025/08/13 11:59:49 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:00:17 runner 5 connected 2025/08/13 12:00:20 runner 3 connected 2025/08/13 12:00:25 runner 2 connected 2025/08/13 12:00:27 patched crashed: KASAN: use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/13 12:00:27 scheduled a reproduction of 'KASAN: use-after-free Read in xfrm_alloc_spi' 2025/08/13 12:00:37 runner 8 connected 2025/08/13 12:00:49 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/13 12:01:04 base crash: possible deadlock in ocfs2_acquire_dquot 2025/08/13 12:01:06 patched crashed: possible deadlock in ocfs2_acquire_dquot [need repro = false] 2025/08/13 12:01:24 runner 4 connected 2025/08/13 12:01:38 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 12:01:39 runner 1 connected 2025/08/13 12:01:53 new: boot error: can't ssh into the instance 2025/08/13 12:01:55 runner 1 connected 2025/08/13 12:02:01 runner 3 connected 2025/08/13 12:02:29 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 12:02:34 runner 2 connected 2025/08/13 12:02:44 runner 9 connected 2025/08/13 12:03:26 runner 0 connected 2025/08/13 12:03:38 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:03:52 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:04:03 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:04:04 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true] 2025/08/13 12:04:04 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/13 12:04:13 new: boot error: can't ssh into the instance 2025/08/13 12:04:18 patched crashed: no output from test machine [need repro = false] 2025/08/13 12:04:23 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 67428, "comps overflows": 0, "corpus": 11011, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 7247, "coverage": 209926, "distributor delayed": 18134, "distributor undelayed": 18118, "distributor violated": 387, "exec candidate": 11139, "exec collide": 0, "exec 
fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 31364, "exec total [new]": 51147, "exec triage": 34796, "executor restarts": 149, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 211889, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 11139, "no exec duration": 46742000000, "no exec requests": 329, "pending": 4, "prog exec time": 92, "reproducing": 0, "rpc recv": 1764500512, "rpc sent": 254367384, "signal": 206536, "smash jobs": 0, "triage jobs": 0, "vm output": 5111556, "vm restarts [base]": 10, "vm restarts [new]": 16 } 2025/08/13 12:04:35 patched crashed: KASAN: slab-use-after-free Write in __xfrm_state_delete [need repro = true] 2025/08/13 12:04:35 scheduled a reproduction of 'KASAN: slab-use-after-free Write in __xfrm_state_delete' 2025/08/13 12:04:35 runner 5 connected 2025/08/13 12:04:48 runner 8 connected 2025/08/13 12:05:00 runner 2 connected 2025/08/13 12:05:02 runner 4 connected 2025/08/13 12:05:10 runner 7 connected 2025/08/13 12:05:32 runner 9 connected 2025/08/13 12:06:36 base crash: WARNING in xfrm_state_fini 2025/08/13 12:07:22 base crash: general protection fault in xfrm_state_find 2025/08/13 12:07:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 12:07:31 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 12:07:34 runner 3 connected 2025/08/13 12:08:01 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:08:19 runner 0 connected 2025/08/13 12:08:24 base crash: WARNING in xfrm_state_fini 2025/08/13 12:08:26 runner 8 connected 2025/08/13 12:08:28 runner 1 connected 2025/08/13 12:08:59 runner 5 connected 2025/08/13 12:09:12 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:09:21 runner 1 connected 2025/08/13 12:09:22 base crash: lost connection to test machine 2025/08/13 12:09:23 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 62597, "comps overflows": 0, "corpus": 15798, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 9879, "coverage": 232400, "distributor delayed": 23255, "distributor undelayed": 23254, "distributor violated": 514, "exec candidate": 15970, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 41367, "exec total [new]": 73531, "exec triage": 49596, "executor restarts": 206, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 234346, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 15970, "no exec duration": 47708000000, "no exec requests": 332, "pending": 5, "prog exec time": 380, "reproducing": 0, "rpc recv": 2497710188, "rpc sent": 375141032, "signal": 228398, "smash jobs": 0, "triage jobs": 0, "vm output": 7919735, "vm restarts [base]": 13, "vm restarts [new]": 25 } 2025/08/13 12:09:23 patched crashed: KASAN: slab-use-after-free Read in 
xfrm_alloc_spi [need repro = false] 2025/08/13 12:09:33 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true] 2025/08/13 12:09:33 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/13 12:10:09 runner 9 connected 2025/08/13 12:10:09 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/13 12:10:09 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 12:10:18 runner 3 connected 2025/08/13 12:10:43 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 12:10:46 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:10:48 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 12:10:58 runner 6 connected 2025/08/13 12:11:32 runner 0 connected 2025/08/13 12:11:36 runner 2 connected 2025/08/13 12:11:37 runner 2 connected 2025/08/13 12:13:14 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 12:13:22 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 12:13:33 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 12:13:39 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/13 12:13:39 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/13 12:13:48 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/13 12:13:48 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/13 12:13:52 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/13 12:13:52 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/13 12:13:58 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/13 12:13:58 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/13 12:13:59 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/13 12:13:59 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/13 12:14:00 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/13 12:14:00 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/13 12:14:03 runner 8 connected 2025/08/13 12:14:12 runner 3 connected 2025/08/13 12:14:23 runner 1 connected 2025/08/13 12:14:23 STAT { "buffer too small": 0, "candidate triage jobs": 64, "candidates": 58789, "comps overflows": 0, "corpus": 19540, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 12058, "coverage": 245902, "distributor delayed": 28102, "distributor undelayed": 28075, "distributor violated": 515, "exec candidate": 19778, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 51119, "exec total [new]": 91740, "exec triage": 61211, "executor restarts": 254, "fault jobs": 0, "fuzzer jobs": 64, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 247957, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 19778, "no exec duration": 47711000000, "no exec requests": 333, "pending": 13, "prog exec time": 0, "reproducing": 0, "rpc recv": 3090002916, "rpc sent": 474924328, "signal": 241839, "smash jobs": 0, "triage jobs": 0, "vm output": 10181899, "vm 
restarts [base]": 18, "vm restarts [new]": 29 } 2025/08/13 12:14:24 new: boot error: can't ssh into the instance 2025/08/13 12:14:29 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/13 12:14:29 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/13 12:14:29 runner 9 connected 2025/08/13 12:14:38 runner 4 connected 2025/08/13 12:14:40 runner 1 connected 2025/08/13 12:14:47 runner 5 connected 2025/08/13 12:14:47 runner 2 connected 2025/08/13 12:14:50 runner 6 connected 2025/08/13 12:14:59 base crash: WARNING in xfrm_state_fini 2025/08/13 12:15:09 base crash: WARNING in dbAdjTree 2025/08/13 12:15:12 runner 0 connected 2025/08/13 12:15:18 runner 8 connected 2025/08/13 12:15:27 base crash: WARNING in dbAdjTree 2025/08/13 12:15:55 runner 2 connected 2025/08/13 12:16:06 runner 3 connected 2025/08/13 12:16:17 runner 1 connected 2025/08/13 12:17:48 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/13 12:17:48 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/13 12:17:57 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/13 12:17:57 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/13 12:18:00 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/13 12:18:00 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/13 12:18:45 runner 8 connected 2025/08/13 12:18:46 runner 6 connected 2025/08/13 12:18:49 runner 5 connected 2025/08/13 12:19:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/13 12:19:15 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/13 12:19:23 STAT { "buffer too small": 0, "candidate triage jobs": 42, "candidates": 55005, "comps overflows": 0, "corpus": 23309, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 14102, "coverage": 258672, "distributor delayed": 32342, "distributor undelayed": 32342, "distributor violated": 517, "exec candidate": 23562, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 61183, "exec total [new]": 109717, "exec triage": 72693, "executor restarts": 333, "fault jobs": 0, "fuzzer jobs": 42, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 261086, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 23562, "no exec duration": 47723000000, "no exec requests": 334, "pending": 18, "prog exec time": 228, "reproducing": 0, "rpc recv": 3887621068, "rpc sent": 593484528, "signal": 254640, "smash jobs": 0, "triage jobs": 0, "vm output": 13244545, "vm restarts [base]": 21, "vm restarts [new]": 40 } 2025/08/13 12:19:29 new: boot error: can't ssh into the instance 2025/08/13 12:19:38 new: boot error: can't ssh into the instance 2025/08/13 12:20:04 runner 0 connected 2025/08/13 12:20:18 runner 3 connected 2025/08/13 12:20:35 runner 7 connected 2025/08/13 12:20:37 base crash: possible deadlock in ocfs2_init_acl 2025/08/13 12:21:25 runner 1 connected 2025/08/13 12:22:08 patched crashed: possible deadlock in attr_data_get_block [need repro = true] 2025/08/13 12:22:08 scheduled a reproduction of 'possible deadlock in attr_data_get_block' 2025/08/13 12:23:05 runner 0 connected 2025/08/13 
12:23:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/13 12:24:12 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:24:16 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 12:24:23 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 49902, "comps overflows": 0, "corpus": 28356, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 17216, "coverage": 271954, "distributor delayed": 36499, "distributor undelayed": 36498, "distributor violated": 517, "exec candidate": 28665, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 73221, "exec total [new]": 136503, "exec triage": 88322, "executor restarts": 389, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 274727, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 28665, "no exec duration": 47786000000, "no exec requests": 337, "pending": 19, "prog exec time": 256, "reproducing": 0, "rpc recv": 4444627196, "rpc sent": 737859472, "signal": 267677, "smash jobs": 0, "triage jobs": 0, "vm output": 16545293, "vm restarts [base]": 22, "vm restarts [new]": 44 } 2025/08/13 12:24:30 runner 2 connected 2025/08/13 12:24:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 12:25:09 runner 5 connected 2025/08/13 12:25:12 runner 3 connected 2025/08/13 12:25:44 runner 8 connected 2025/08/13 12:26:06 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:26:27 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:26:54 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:27:04 runner 5 connected 2025/08/13 12:27:24 runner 0 connected 2025/08/13 12:27:40 base crash: possible deadlock in __netdev_update_features 2025/08/13 12:27:51 runner 4 connected 2025/08/13 12:27:59 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 12:28:37 runner 1 connected 2025/08/13 12:28:53 base crash: WARNING in __ww_mutex_wound 2025/08/13 12:28:56 runner 3 connected 2025/08/13 12:29:23 STAT { "buffer too small": 0, "candidate triage jobs": 55, "candidates": 44882, "comps overflows": 0, "corpus": 33255, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 20543, "coverage": 283701, "distributor delayed": 41085, "distributor undelayed": 41085, "distributor violated": 517, "exec candidate": 33685, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 85834, "exec total [new]": 165457, "exec triage": 103912, "executor restarts": 450, "fault jobs": 0, "fuzzer jobs": 55, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 286671, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 33685, "no exec duration": 48468000000, "no exec requests": 346, "pending": 19, "prog exec 
time": 225, "reproducing": 0, "rpc recv": 5095556548, "rpc sent": 895246040, "signal": 278874, "smash jobs": 0, "triage jobs": 0, "vm output": 19516313, "vm restarts [base]": 24, "vm restarts [new]": 51 } 2025/08/13 12:29:50 runner 0 connected 2025/08/13 12:30:12 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:30:17 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/13 12:30:17 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/13 12:30:25 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:31:11 runner 8 connected 2025/08/13 12:31:14 runner 4 connected 2025/08/13 12:31:22 runner 6 connected 2025/08/13 12:31:28 base crash: WARNING in xfrm_state_fini 2025/08/13 12:31:40 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false] 2025/08/13 12:32:03 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true] 2025/08/13 12:32:04 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode' 2025/08/13 12:32:07 patched crashed: possible deadlock in run_unpack_ex [need repro = true] 2025/08/13 12:32:07 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/08/13 12:32:25 runner 1 connected 2025/08/13 12:32:37 runner 1 connected 2025/08/13 12:32:50 patched crashed: possible deadlock in mark_as_free_ex [need repro = true] 2025/08/13 12:32:50 scheduled a reproduction of 'possible deadlock in mark_as_free_ex' 2025/08/13 12:32:56 base crash: INFO: task hung in v9fs_evict_inode 2025/08/13 12:33:00 runner 5 connected 2025/08/13 12:33:04 runner 9 connected 2025/08/13 12:33:27 base crash: INFO: task hung in v9fs_evict_inode 2025/08/13 12:33:44 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/13 12:33:44 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/13 12:33:47 runner 4 connected 2025/08/13 12:33:54 runner 3 connected 2025/08/13 12:34:22 patched crashed: possible deadlock in mark_as_free_ex [need repro = true] 2025/08/13 12:34:22 scheduled a reproduction of 'possible deadlock in mark_as_free_ex' 2025/08/13 12:34:23 STAT { "buffer too small": 0, "candidate triage jobs": 47, "candidates": 40217, "comps overflows": 0, "corpus": 37794, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 23774, "coverage": 292538, "distributor delayed": 45439, "distributor undelayed": 45439, "distributor violated": 517, "exec candidate": 38350, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96598, "exec total [new]": 195505, "exec triage": 118556, "executor restarts": 514, "fault jobs": 0, "fuzzer jobs": 47, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 295693, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38350, "no exec duration": 48687000000, "no exec requests": 352, "pending": 25, "prog exec time": 345, "reproducing": 0, "rpc recv": 5736275156, "rpc sent": 1044553336, "signal": 287574, "smash jobs": 0, "triage jobs": 0, "vm output": 22041750, "vm restarts [base]": 27, "vm restarts [new]": 58 } 2025/08/13 12:34:26 runner 0 connected 2025/08/13 12:34:34 patched crashed: possible 
deadlock in mark_as_free_ex [need repro = true] 2025/08/13 12:34:34 scheduled a reproduction of 'possible deadlock in mark_as_free_ex' 2025/08/13 12:34:42 runner 7 connected 2025/08/13 12:35:19 runner 2 connected 2025/08/13 12:35:56 base crash: possible deadlock in input_inject_event 2025/08/13 12:36:22 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 12:36:53 runner 2 connected 2025/08/13 12:37:11 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:37:21 runner 0 connected 2025/08/13 12:38:10 runner 5 connected 2025/08/13 12:38:22 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 12:39:16 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/13 12:39:16 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/13 12:39:18 runner 4 connected 2025/08/13 12:39:23 patched crashed: possible deadlock in mark_as_free_ex [need repro = true] 2025/08/13 12:39:23 scheduled a reproduction of 'possible deadlock in mark_as_free_ex' 2025/08/13 12:39:23 STAT { "buffer too small": 0, "candidate triage jobs": 30, "candidates": 37894, "comps overflows": 0, "corpus": 40006, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 27867, "coverage": 297323, "distributor delayed": 47671, "distributor undelayed": 47663, "distributor violated": 517, "exec candidate": 40673, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 106982, "exec total [new]": 223368, "exec triage": 126221, "executor restarts": 554, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 301068, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40673, "no exec duration": 48699000000, "no exec requests": 353, "pending": 28, "prog exec time": 254, "reproducing": 0, "rpc recv": 6134208816, "rpc sent": 1201577416, "signal": 292237, "smash jobs": 0, "triage jobs": 0, "vm output": 24831659, "vm restarts [base]": 30, "vm restarts [new]": 62 } 2025/08/13 12:39:30 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:39:45 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:40:13 runner 8 connected 2025/08/13 12:40:19 runner 6 connected 2025/08/13 12:40:27 runner 0 connected 2025/08/13 12:40:42 runner 9 connected 2025/08/13 12:42:17 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 12:43:14 runner 6 connected 2025/08/13 12:43:19 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 12:43:36 base crash: possible deadlock in ocfs2_init_acl 2025/08/13 12:44:16 runner 5 connected 2025/08/13 12:44:23 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 36113, "comps overflows": 0, "corpus": 41722, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 31885, "coverage": 301635, "distributor delayed": 49436, "distributor undelayed": 49435, "distributor violated": 527, "exec candidate": 42454, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 118545, "exec total [new]": 248157, "exec 
triage": 132014, "executor restarts": 599, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 305545, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42454, "no exec duration": 48807000000, "no exec requests": 355, "pending": 28, "prog exec time": 281, "reproducing": 0, "rpc recv": 6501084460, "rpc sent": 1362501352, "signal": 296356, "smash jobs": 0, "triage jobs": 0, "vm output": 27440564, "vm restarts [base]": 30, "vm restarts [new]": 68 } 2025/08/13 12:44:34 runner 3 connected 2025/08/13 12:44:40 new: boot error: can't ssh into the instance 2025/08/13 12:45:37 runner 3 connected 2025/08/13 12:47:02 base crash: lost connection to test machine 2025/08/13 12:48:02 patched crashed: possible deadlock in attr_data_get_block [need repro = true] 2025/08/13 12:48:02 scheduled a reproduction of 'possible deadlock in attr_data_get_block' 2025/08/13 12:48:06 runner 3 connected 2025/08/13 12:48:44 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/13 12:48:46 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 12:48:58 runner 4 connected 2025/08/13 12:49:23 STAT { "buffer too small": 0, "candidate triage jobs": 13, "candidates": 34750, "comps overflows": 0, "corpus": 42955, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 36710, "coverage": 304541, "distributor delayed": 50559, "distributor undelayed": 50558, "distributor violated": 527, "exec candidate": 43817, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 128632, "exec total [new]": 277352, "exec triage": 136464, "executor restarts": 633, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 308640, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43797, "no exec duration": 48847000000, "no exec requests": 358, "pending": 29, "prog exec time": 275, "reproducing": 0, "rpc recv": 6793072460, "rpc sent": 1520522640, "signal": 299207, "smash jobs": 0, "triage jobs": 0, "vm output": 30063400, "vm restarts [base]": 32, "vm restarts [new]": 70 } 2025/08/13 12:49:41 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 12:49:44 runner 6 connected 2025/08/13 12:50:38 runner 2 connected 2025/08/13 12:51:00 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/13 12:51:00 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 12:51:13 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/13 12:51:13 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 12:51:57 runner 0 connected 2025/08/13 12:52:31 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/13 12:53:13 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 12:53:55 runner 6 connected 2025/08/13 12:54:23 STAT { "buffer too small": 0, "candidate triage jobs": 2, 
"candidates": 24231, "comps overflows": 0, "corpus": 44007, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 41000, "coverage": 306707, "distributor delayed": 51577, "distributor undelayed": 51577, "distributor violated": 527, "exec candidate": 54336, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 139073, "exec total [new]": 304736, "exec triage": 140164, "executor restarts": 688, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 310864, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44934, "no exec duration": 48847000000, "no exec requests": 358, "pending": 31, "prog exec time": 218, "reproducing": 0, "rpc recv": 7039955336, "rpc sent": 1670873080, "signal": 301273, "smash jobs": 0, "triage jobs": 0, "vm output": 33043836, "vm restarts [base]": 32, "vm restarts [new]": 74 } 2025/08/13 12:55:18 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/13 12:55:18 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/13 12:55:43 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 12:56:07 runner 6 connected 2025/08/13 12:56:31 runner 0 connected 2025/08/13 12:57:19 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 12:57:43 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/08/13 12:57:43 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/08/13 12:57:53 triaged 90.0% of the corpus 2025/08/13 12:57:53 starting bug reproductions 2025/08/13 12:57:53 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/13 12:57:53 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/13 12:57:53 start reproducing 'KASAN: use-after-free Read in xfrm_alloc_spi' 2025/08/13 12:57:53 start reproducing 'general protection fault in pcl818_ai_cancel' 2025/08/13 12:57:53 start reproducing 'KASAN: slab-use-after-free Write in __xfrm_state_delete' 2025/08/13 12:57:53 start reproducing 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 12:57:53 start reproducing 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/13 12:57:53 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/13 12:57:53 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/13 12:57:53 start reproducing 'kernel BUG in jfs_evict_inode' 2025/08/13 12:57:53 start reproducing 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/13 12:58:08 base crash: possible deadlock in attr_data_get_block 2025/08/13 
12:58:50 base: boot error: can't ssh into the instance 2025/08/13 12:58:57 runner 1 connected 2025/08/13 12:59:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 151134, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 16, "prog exec time": 0, "reproducing": 7, "rpc recv": 7185946028, "rpc sent": 1775110112, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 34867341, "vm restarts [base]": 33, "vm restarts [new]": 76 } 2025/08/13 12:59:39 runner 2 connected 2025/08/13 13:01:18 new: boot error: can't ssh into the instance 2025/08/13 13:02:37 new: boot error: can't ssh into the instance 2025/08/13 13:04:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 16, "prog exec time": 0, "reproducing": 7, "rpc recv": 7217175608, "rpc sent": 1779234672, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 36298824, "vm restarts [base]": 34, "vm restarts [new]": 76 } 2025/08/13 13:04:56 base crash: no output from test machine 2025/08/13 13:04:58 base crash: no output from test machine 2025/08/13 13:05:00 base crash: no output from test machine 2025/08/13 13:05:07 base crash: no output from test machine 2025/08/13 13:05:39 runner 2 connected 2025/08/13 13:05:44 runner 3 connected 2025/08/13 13:05:48 runner 0 connected 2025/08/13 13:07:59 new: boot error: can't ssh into the instance 2025/08/13 13:07:59 new: boot error: can't ssh into the instance 2025/08/13 13:09:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor 
undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 16, "prog exec time": 0, "reproducing": 7, "rpc recv": 7309827648, "rpc sent": 1779235512, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 39405670, "vm restarts [base]": 37, "vm restarts [new]": 76 } 2025/08/13 13:10:39 base crash: no output from test machine 2025/08/13 13:10:44 base crash: no output from test machine 2025/08/13 13:10:48 base crash: no output from test machine 2025/08/13 13:11:04 repro finished 'KASAN: use-after-free Read in xfrm_alloc_spi', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:11:04 failed repro for "KASAN: use-after-free Read in xfrm_alloc_spi", err=%!s() 2025/08/13 13:11:04 reproduction of "INFO: task hung in v9fs_evict_inode" aborted: it's no longer needed 2025/08/13 13:11:04 start reproducing 'possible deadlock in attr_data_get_block' 2025/08/13 13:11:04 "KASAN: use-after-free Read in xfrm_alloc_spi": saved crash log into 1755090664.crash.log 2025/08/13 13:11:04 "KASAN: use-after-free Read in xfrm_alloc_spi": saved repro log into 1755090664.repro.log 2025/08/13 13:11:06 repro finished 'general protection fault in pcl818_ai_cancel', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:11:06 failed repro for "general protection fault in pcl818_ai_cancel", err=%!s() 2025/08/13 13:11:06 start reproducing 'possible deadlock in run_unpack_ex' 2025/08/13 13:11:06 "general protection fault in pcl818_ai_cancel": saved crash log into 1755090666.crash.log 2025/08/13 13:11:06 "general protection fault in pcl818_ai_cancel": saved repro log into 1755090666.repro.log 2025/08/13 13:11:23 repro finished 'possible deadlock in ocfs2_reserve_suballoc_bits', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:11:23 start reproducing 'possible deadlock in mark_as_free_ex' 2025/08/13 13:11:23 failed repro for "possible deadlock in ocfs2_reserve_suballoc_bits", err=%!s() 2025/08/13 13:11:23 "possible deadlock in ocfs2_reserve_suballoc_bits": saved crash log into 1755090683.crash.log 2025/08/13 13:11:23 "possible deadlock in ocfs2_reserve_suballoc_bits": saved repro log into 1755090683.repro.log 2025/08/13 13:11:25 runner 3 connected 2025/08/13 13:11:27 runner 2 connected 2025/08/13 13:11:36 runner 0 connected 2025/08/13 13:12:35 reproducing crash 'unregister_netdevice: waiting for DEV to become free': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:13:53 reproducing crash 'unregister_netdevice: waiting for DEV to become free': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f 
fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:14:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 12, "prog exec time": 0, "reproducing": 7, "rpc recv": 7402479688, "rpc sent": 1779236352, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 43282822, "vm restarts [base]": 40, "vm restarts [new]": 76 } 2025/08/13 13:14:26 reproducing crash 'unregister_netdevice: waiting for DEV to become free': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:15:13 base: boot error: can't ssh into the instance 2025/08/13 13:15:15 reproducing crash 'unregister_netdevice: waiting for DEV to become free': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:15:28 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:15:48 reproducing crash 'unregister_netdevice: waiting for DEV to become free': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:15:48 repro finished 'unregister_netdevice: waiting for DEV to become free', repro=true crepro=false desc='general protection fault in lmLogSync' hub=false from_dashboard=false 2025/08/13 13:15:48 found repro for "general protection fault in lmLogSync" (orig title: "unregister_netdevice: waiting for DEV to become free", reliability: 1), took 17.90 minutes 2025/08/13 13:15:48 start reproducing 'WARNING in io_ring_exit_work' 2025/08/13 13:15:48 "general protection fault in lmLogSync": saved crash log into 1755090948.crash.log 2025/08/13 13:15:48 "general protection fault in lmLogSync": saved repro log into 1755090948.repro.log 2025/08/13 13:15:55 runner 1 connected 2025/08/13 13:16:25 base crash: no output from test machine 2025/08/13 13:16:27 base crash: no output from test machine 2025/08/13 13:16:59 attempt #0 to run "general protection fault in lmLogSync" on base: crashed with general protection fault in lmLogSync 2025/08/13 
13:16:59 crashes both: general protection fault in lmLogSync / general protection fault in lmLogSync 2025/08/13 13:17:06 runner 3 connected 2025/08/13 13:17:16 runner 2 connected 2025/08/13 13:17:48 runner 0 connected 2025/08/13 13:19:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 7526015744, "rpc sent": 1779237472, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 49100169, "vm restarts [base]": 44, "vm restarts [new]": 76 } 2025/08/13 13:19:59 new: boot error: can't ssh into the instance 2025/08/13 13:20:54 base crash: no output from test machine 2025/08/13 13:21:43 runner 1 connected 2025/08/13 13:22:06 base crash: no output from test machine 2025/08/13 13:22:16 base crash: no output from test machine 2025/08/13 13:22:31 repro finished 'possible deadlock in run_unpack_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:22:31 failed repro for "possible deadlock in run_unpack_ex", err=%!s() 2025/08/13 13:22:31 start reproducing 'possible deadlock in ntfs_fiemap' 2025/08/13 13:22:31 "possible deadlock in run_unpack_ex": saved crash log into 1755091351.crash.log 2025/08/13 13:22:31 "possible deadlock in run_unpack_ex": saved repro log into 1755091351.repro.log 2025/08/13 13:22:47 base crash: no output from test machine 2025/08/13 13:22:55 runner 3 connected 2025/08/13 13:23:07 repro finished 'possible deadlock in attr_data_get_block', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:23:07 reproduction of "possible deadlock in attr_data_get_block" aborted: it's no longer needed 2025/08/13 13:23:07 failed repro for "possible deadlock in attr_data_get_block", err=%!s() 2025/08/13 13:23:07 start reproducing 'general protection fault in pcl818_ai_cancel' 2025/08/13 13:23:07 "possible deadlock in attr_data_get_block": saved crash log into 1755091387.crash.log 2025/08/13 13:23:07 "possible deadlock in attr_data_get_block": saved repro log into 1755091387.repro.log 2025/08/13 13:23:09 repro finished 'possible deadlock in mark_as_free_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:23:09 failed repro for "possible deadlock in mark_as_free_ex", err=%!s() 2025/08/13 13:23:09 start reproducing 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 13:23:09 "possible deadlock in mark_as_free_ex": saved crash log into 1755091389.crash.log 2025/08/13 13:23:09 "possible deadlock in mark_as_free_ex": saved repro log into 1755091389.repro.log 2025/08/13 13:23:13 runner 
2 connected 2025/08/13 13:23:36 runner 0 connected 2025/08/13 13:24:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 7, "prog exec time": 0, "reproducing": 7, "rpc recv": 7649551800, "rpc sent": 1779238592, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 51345831, "vm restarts [base]": 48, "vm restarts [new]": 76 } 2025/08/13 13:26:43 base crash: no output from test machine 2025/08/13 13:27:32 runner 1 connected 2025/08/13 13:27:54 base crash: no output from test machine 2025/08/13 13:28:12 base crash: no output from test machine 2025/08/13 13:28:36 base crash: no output from test machine 2025/08/13 13:28:51 runner 3 connected 2025/08/13 13:29:09 runner 2 connected 2025/08/13 13:29:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 7, "prog exec time": 0, "reproducing": 7, "rpc recv": 7742203848, "rpc sent": 1779239432, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 52959064, "vm restarts [base]": 51, "vm restarts [new]": 76 } 2025/08/13 13:29:33 runner 0 connected 2025/08/13 13:30:43 new: boot error: can't ssh into the instance 2025/08/13 13:32:04 new: boot error: can't ssh into the instance 2025/08/13 13:32:32 base crash: no output from test machine 2025/08/13 13:32:39 new: boot error: can't ssh into the instance 2025/08/13 13:33:05 new: boot error: can't ssh into the instance 2025/08/13 13:33:29 runner 1 connected 2025/08/13 13:33:51 base crash: no output from test machine 2025/08/13 13:34:08 base crash: no output from test machine 2025/08/13 13:34:23 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 
7845, "comps overflows": 0, "corpus": 44429, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43905, "coverage": 307575, "distributor delayed": 52111, "distributor undelayed": 52111, "distributor violated": 527, "exec candidate": 70722, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152472, "exec total [new]": 322643, "exec triage": 141682, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 48847000000, "no exec requests": 358, "pending": 7, "prog exec time": 0, "reproducing": 7, "rpc recv": 7803971872, "rpc sent": 1779239992, "signal": 302076, "smash jobs": 0, "triage jobs": 0, "vm output": 55490277, "vm restarts [base]": 53, "vm restarts [new]": 76 } 2025/08/13 13:34:32 base crash: no output from test machine 2025/08/13 13:34:48 runner 3 connected 2025/08/13 13:35:05 runner 2 connected 2025/08/13 13:35:22 runner 0 connected 2025/08/13 13:36:43 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:36:43 failed repro for "possible deadlock in ntfs_fiemap", err=%!s() 2025/08/13 13:36:43 start reproducing 'possible deadlock in mark_as_free_ex' 2025/08/13 13:36:43 "possible deadlock in ntfs_fiemap": saved crash log into 1755092203.crash.log 2025/08/13 13:36:43 "possible deadlock in ntfs_fiemap": saved repro log into 1755092203.repro.log 2025/08/13 13:36:43 repro finished 'general protection fault in pcl818_ai_cancel', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:36:43 failed repro for "general protection fault in pcl818_ai_cancel", err=%!s() 2025/08/13 13:36:43 "general protection fault in pcl818_ai_cancel": saved crash log into 1755092203.crash.log 2025/08/13 13:36:43 "general protection fault in pcl818_ai_cancel": saved repro log into 1755092203.repro.log 2025/08/13 13:38:03 repro finished 'KASAN: slab-use-after-free Write in __xfrm_state_delete', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:38:03 failed repro for "KASAN: slab-use-after-free Write in __xfrm_state_delete", err=%!s() 2025/08/13 13:38:03 "KASAN: slab-use-after-free Write in __xfrm_state_delete": saved crash log into 1755092283.crash.log 2025/08/13 13:38:03 "KASAN: slab-use-after-free Write in __xfrm_state_delete": saved repro log into 1755092283.repro.log 2025/08/13 13:38:28 base crash: no output from test machine 2025/08/13 13:38:52 runner 0 connected 2025/08/13 13:39:05 runner 1 connected 2025/08/13 13:39:23 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 7689, "comps overflows": 0, "corpus": 44431, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43943, "coverage": 307577, "distributor delayed": 52129, "distributor undelayed": 52122, "distributor violated": 527, "exec candidate": 70878, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 152612, "exec total [new]": 322814, "exec triage": 141692, "executor restarts": 
729, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311860, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45405, "no exec duration": 68800000000, "no exec requests": 413, "pending": 7, "prog exec time": 357, "reproducing": 5, "rpc recv": 7959276716, "rpc sent": 1786073888, "signal": 302078, "smash jobs": 0, "triage jobs": 0, "vm output": 58098038, "vm restarts [base]": 56, "vm restarts [new]": 78 } 2025/08/13 13:39:25 runner 1 connected 2025/08/13 13:41:41 runner 2 connected 2025/08/13 13:44:21 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 13:44:23 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 1071, "comps overflows": 0, "corpus": 44578, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45050, "coverage": 307856, "distributor delayed": 52322, "distributor undelayed": 52311, "distributor violated": 562, "exec candidate": 77496, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 159761, "exec total [new]": 329964, "exec triage": 142218, "executor restarts": 753, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 312176, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45569, "no exec duration": 1233542000000, "no exec requests": 4570, "pending": 7, "prog exec time": 262, "reproducing": 5, "rpc recv": 8044196672, "rpc sent": 1833069408, "signal": 302355, "smash jobs": 0, "triage jobs": 0, "vm output": 60181713, "vm restarts [base]": 57, "vm restarts [new]": 79 } 2025/08/13 13:46:00 new: boot error: can't ssh into the instance 2025/08/13 13:46:49 new: boot error: can't ssh into the instance 2025/08/13 13:46:51 repro finished 'KASAN: slab-use-after-free Read in xfrm_state_find', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:46:51 failed repro for "KASAN: slab-use-after-free Read in xfrm_state_find", err=%!s() 2025/08/13 13:46:51 start reproducing 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/13 13:46:51 "KASAN: slab-use-after-free Read in xfrm_state_find": saved crash log into 1755092811.crash.log 2025/08/13 13:46:51 "KASAN: slab-use-after-free Read in xfrm_state_find": saved repro log into 1755092811.repro.log 2025/08/13 13:49:06 repro finished 'possible deadlock in mark_as_free_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:49:06 failed repro for "possible deadlock in mark_as_free_ex", err=%!s() 2025/08/13 13:49:06 start reproducing 'possible deadlock in mark_as_free_ex' 2025/08/13 13:49:06 "possible deadlock in mark_as_free_ex": saved crash log into 1755092946.crash.log 2025/08/13 13:49:06 "possible deadlock in mark_as_free_ex": saved repro log into 1755092946.repro.log 2025/08/13 13:49:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 12, "corpus": 44630, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45986, "coverage": 308153, 
"distributor delayed": 52383, "distributor undelayed": 52383, "distributor violated": 586, "exec candidate": 78567, "exec collide": 247, "exec fuzz": 519, "exec gen": 24, "exec hints": 118, "exec inject": 0, "exec minimize": 413, "exec retries": 20, "exec seeds": 54, "exec smash": 236, "exec total [base]": 162646, "exec total [new]": 332859, "exec triage": 142430, "executor restarts": 759, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 8, "max signal": 312517, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 245, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45632, "no exec duration": 2049795000000, "no exec requests": 6773, "pending": 5, "prog exec time": 571, "reproducing": 5, "rpc recv": 8069985896, "rpc sent": 1887816368, "signal": 302592, "smash jobs": 18, "triage jobs": 5, "vm output": 62999818, "vm restarts [base]": 57, "vm restarts [new]": 79 } 2025/08/13 13:49:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 13:50:50 runner 0 connected 2025/08/13 13:54:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 30, "corpus": 44654, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 46710, "coverage": 308209, "distributor delayed": 52422, "distributor undelayed": 52422, "distributor violated": 586, "exec candidate": 78567, "exec collide": 531, "exec fuzz": 980, "exec gen": 45, "exec hints": 289, "exec inject": 0, "exec minimize": 914, "exec retries": 20, "exec seeds": 118, "exec smash": 774, "exec total [base]": 164822, "exec total [new]": 335037, "exec triage": 142566, "executor restarts": 769, "fault jobs": 0, "fuzzer jobs": 44, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 14, "max signal": 312650, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 522, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45679, "no exec duration": 2821562000000, "no exec requests": 8596, "pending": 5, "prog exec time": 867, "reproducing": 5, "rpc recv": 8128441332, "rpc sent": 1950579288, "signal": 302643, "smash jobs": 25, "triage jobs": 5, "vm output": 66123610, "vm restarts [base]": 57, "vm restarts [new]": 80 } 2025/08/13 13:54:26 new: boot error: can't ssh into the instance 2025/08/13 13:55:16 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/13 13:55:19 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/13 13:55:26 runner 1 connected 2025/08/13 13:56:09 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:56:13 runner 3 connected 2025/08/13 13:56:16 runner 2 connected 2025/08/13 13:57:26 base crash: INFO: task hung in __iterate_supers 2025/08/13 13:57:26 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:58:08 repro finished 'KASAN: 
slab-use-after-free Read in xfrm_state_find', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 13:58:08 failed repro for "KASAN: slab-use-after-free Read in xfrm_state_find", err=%!s() 2025/08/13 13:58:08 "KASAN: slab-use-after-free Read in xfrm_state_find": saved crash log into 1755093488.crash.log 2025/08/13 13:58:08 "KASAN: slab-use-after-free Read in xfrm_state_find": saved repro log into 1755093488.repro.log 2025/08/13 13:58:22 runner 0 connected 2025/08/13 13:58:47 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 13:58:47 repro finished 'kernel BUG in jfs_evict_inode', repro=true crepro=false desc='WARNING in xfrm_state_fini' hub=false from_dashboard=false 2025/08/13 13:58:47 start reproducing 'kernel BUG in jfs_evict_inode' 2025/08/13 13:58:47 found repro for "WARNING in xfrm_state_fini" (orig title: "kernel BUG in jfs_evict_inode", reliability: 1), took 60.88 minutes 2025/08/13 13:58:47 "WARNING in xfrm_state_fini": saved crash log into 1755093527.crash.log 2025/08/13 13:58:47 "WARNING in xfrm_state_fini": saved repro log into 1755093527.repro.log 2025/08/13 13:58:47 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/08/13 13:58:47 base crash: INFO: rcu detected stall in corrupted 2025/08/13 13:59:05 runner 3 connected 2025/08/13 13:59:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 57, "corpus": 44678, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 47568, "coverage": 308284, "distributor delayed": 52507, "distributor undelayed": 52507, "distributor violated": 586, "exec candidate": 78567, "exec collide": 877, "exec fuzz": 1628, "exec gen": 85, "exec hints": 664, "exec inject": 0, "exec minimize": 1301, "exec retries": 20, "exec seeds": 167, "exec smash": 1379, "exec total [base]": 167433, "exec total [new]": 337647, "exec triage": 142722, "executor restarts": 788, "fault jobs": 0, "fuzzer jobs": 64, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 18, "max signal": 312776, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 707, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45739, "no exec duration": 3109576000000, "no exec requests": 9329, "pending": 4, "prog exec time": 784, "reproducing": 4, "rpc recv": 8311064200, "rpc sent": 2040229192, "signal": 302739, "smash jobs": 24, "triage jobs": 22, "vm output": 69458291, "vm restarts [base]": 59, "vm restarts [new]": 83 } 2025/08/13 13:59:44 runner 3 connected 2025/08/13 14:00:23 repro finished 'possible deadlock in mark_as_free_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 14:00:23 start reproducing 'possible deadlock in mark_as_free_ex' 2025/08/13 14:00:23 failed repro for "possible deadlock in mark_as_free_ex", err=%!s() 2025/08/13 14:00:23 "possible deadlock in mark_as_free_ex": saved crash log into 1755093623.crash.log 2025/08/13 14:00:23 "possible deadlock in mark_as_free_ex": saved repro log into 1755093623.repro.log 2025/08/13 14:00:58 attempt #0 to run "WARNING in xfrm_state_fini" on base: crashed with WARNING in xfrm_state_fini 2025/08/13 14:00:58 crashes both: WARNING in xfrm_state_fini / WARNING in 
xfrm_state_fini 2025/08/13 14:01:34 base crash: kernel BUG in may_open 2025/08/13 14:01:36 patched crashed: kernel BUG in may_open [need repro = false] 2025/08/13 14:01:57 runner 0 connected 2025/08/13 14:02:23 runner 1 connected 2025/08/13 14:02:25 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48749: connect: connection refused 2025/08/13 14:02:25 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48749: connect: connection refused 2025/08/13 14:02:33 runner 3 connected 2025/08/13 14:02:35 base crash: lost connection to test machine 2025/08/13 14:02:55 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48663: connect: connection refused 2025/08/13 14:02:55 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48663: connect: connection refused 2025/08/13 14:03:00 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19621: connect: connection refused 2025/08/13 14:03:00 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19621: connect: connection refused 2025/08/13 14:03:00 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5264: connect: connection refused 2025/08/13 14:03:00 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5264: connect: connection refused 2025/08/13 14:03:04 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20281: connect: connection refused 2025/08/13 14:03:04 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:20281: connect: connection refused 2025/08/13 14:03:05 base crash: lost connection to test machine 2025/08/13 14:03:07 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13771: connect: connection refused 2025/08/13 14:03:07 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13771: connect: connection refused 2025/08/13 14:03:10 base crash: lost connection to test machine 2025/08/13 14:03:10 base crash: lost connection to test machine 2025/08/13 14:03:14 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:03:17 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:03:24 runner 2 connected 2025/08/13 14:03:55 runner 0 connected 2025/08/13 14:03:57 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:44154: connect: connection refused 2025/08/13 14:03:57 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:44154: connect: connection refused 2025/08/13 14:04:06 runner 1 connected 2025/08/13 14:04:06 runner 2 connected 2025/08/13 14:04:07 base crash: lost connection to test machine 2025/08/13 14:04:11 runner 3 connected 2025/08/13 14:04:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 83, "corpus": 44711, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48255, "coverage": 308365, "distributor delayed": 52578, "distributor undelayed": 52574, "distributor violated": 586, "exec candidate": 78567, "exec collide": 1148, "exec fuzz": 2198, "exec gen": 107, "exec hints": 945, "exec inject": 0, "exec minimize": 2091, "exec retries": 22, "exec seeds": 236, "exec smash": 1893, "exec total [base]": 169665, "exec total [new]": 340291, "exec triage": 142846, "executor restarts": 805, "fault jobs": 0, "fuzzer jobs": 72, "fuzzing VMs [base]": 1, "fuzzing VMs 
[new]": 3, "hints jobs": 23, "max signal": 312953, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1098, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45782, "no exec duration": 3167821000000, "no exec requests": 9523, "pending": 3, "prog exec time": 696, "reproducing": 4, "rpc recv": 8619311612, "rpc sent": 2109082296, "signal": 302815, "smash jobs": 38, "triage jobs": 11, "vm output": 72205048, "vm restarts [base]": 65, "vm restarts [new]": 86 } 2025/08/13 14:04:29 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34130: connect: connection refused 2025/08/13 14:04:29 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34130: connect: connection refused 2025/08/13 14:04:34 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16428: connect: connection refused 2025/08/13 14:04:34 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16428: connect: connection refused 2025/08/13 14:04:39 base crash: lost connection to test machine 2025/08/13 14:04:44 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:04:52 new: boot error: can't ssh into the instance 2025/08/13 14:04:56 runner 2 connected 2025/08/13 14:04:59 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63495: connect: connection refused 2025/08/13 14:04:59 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63495: connect: connection refused 2025/08/13 14:05:09 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:05:36 runner 0 connected 2025/08/13 14:05:43 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39550: connect: connection refused 2025/08/13 14:05:43 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39550: connect: connection refused 2025/08/13 14:05:53 base crash: lost connection to test machine 2025/08/13 14:06:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2917: connect: connection refused 2025/08/13 14:06:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2917: connect: connection refused 2025/08/13 14:06:05 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60298: connect: connection refused 2025/08/13 14:06:05 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60298: connect: connection refused 2025/08/13 14:06:05 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45277: connect: connection refused 2025/08/13 14:06:05 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45277: connect: connection refused 2025/08/13 14:06:06 runner 0 connected 2025/08/13 14:06:10 base crash: lost connection to test machine 2025/08/13 14:06:15 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:06:15 base crash: lost connection to test machine 2025/08/13 14:06:31 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64260: connect: connection refused 2025/08/13 14:06:31 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64260: connect: connection refused 2025/08/13 14:06:41 patched crashed: lost connection to test machine [need repro = false] 
2025/08/13 14:06:42 runner 1 connected 2025/08/13 14:07:05 runner 2 connected 2025/08/13 14:07:07 runner 0 connected 2025/08/13 14:07:11 runner 3 connected 2025/08/13 14:07:33 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33750: connect: connection refused 2025/08/13 14:07:33 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33750: connect: connection refused 2025/08/13 14:07:37 runner 0 connected 2025/08/13 14:07:43 base crash: lost connection to test machine 2025/08/13 14:07:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:53288: connect: connection refused 2025/08/13 14:07:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:53288: connect: connection refused 2025/08/13 14:08:08 base crash: lost connection to test machine 2025/08/13 14:08:10 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48478: connect: connection refused 2025/08/13 14:08:10 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48478: connect: connection refused 2025/08/13 14:08:16 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:08:20 base crash: lost connection to test machine 2025/08/13 14:08:38 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47887: connect: connection refused 2025/08/13 14:08:38 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47887: connect: connection refused 2025/08/13 14:08:39 runner 0 connected 2025/08/13 14:08:48 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:08:52 new: boot error: can't ssh into the instance 2025/08/13 14:08:52 new: boot error: can't ssh into the instance 2025/08/13 14:08:58 runner 1 connected 2025/08/13 14:09:01 new: boot error: can't ssh into the instance 2025/08/13 14:09:06 runner 3 connected 2025/08/13 14:09:09 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6540: connect: connection refused 2025/08/13 14:09:09 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6540: connect: connection refused 2025/08/13 14:09:09 runner 2 connected 2025/08/13 14:09:19 base crash: lost connection to test machine 2025/08/13 14:09:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 89, "corpus": 44716, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48404, "coverage": 308430, "distributor delayed": 52596, "distributor undelayed": 52587, "distributor violated": 586, "exec candidate": 78567, "exec collide": 1227, "exec fuzz": 2342, "exec gen": 115, "exec hints": 1019, "exec inject": 0, "exec minimize": 2192, "exec retries": 22, "exec seeds": 255, "exec smash": 2030, "exec total [base]": 170398, "exec total [new]": 340885, "exec triage": 142877, "executor restarts": 827, "fault jobs": 0, "fuzzer jobs": 64, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 23, "max signal": 312970, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1144, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45791, "no exec duration": 3170821000000, "no exec requests": 9526, "pending": 3, "prog exec time": 979, "reproducing": 4, "rpc recv": 8997613976, "rpc sent": 2139703384, "signal": 302831, "smash jobs": 31, "triage jobs": 10, "vm 
output": 74042119, "vm restarts [base]": 73, "vm restarts [new]": 90 } 2025/08/13 14:09:24 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34154: connect: connection refused 2025/08/13 14:09:24 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34154: connect: connection refused 2025/08/13 14:09:34 base crash: lost connection to test machine 2025/08/13 14:09:38 runner 0 connected 2025/08/13 14:09:42 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16376: connect: connection refused 2025/08/13 14:09:42 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16376: connect: connection refused 2025/08/13 14:09:49 runner 1 connected 2025/08/13 14:09:52 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:10:05 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:44831: connect: connection refused 2025/08/13 14:10:05 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:44831: connect: connection refused 2025/08/13 14:10:09 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:26467: connect: connection refused 2025/08/13 14:10:09 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:26467: connect: connection refused 2025/08/13 14:10:15 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:10:17 runner 0 connected 2025/08/13 14:10:19 base crash: lost connection to test machine 2025/08/13 14:10:29 new: boot error: can't ssh into the instance 2025/08/13 14:10:32 runner 1 connected 2025/08/13 14:10:42 runner 3 connected 2025/08/13 14:10:43 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:1550: connect: connection refused 2025/08/13 14:10:43 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:1550: connect: connection refused 2025/08/13 14:10:49 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:32639: connect: connection refused 2025/08/13 14:10:49 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:32639: connect: connection refused 2025/08/13 14:10:53 base crash: lost connection to test machine 2025/08/13 14:10:59 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:11:04 runner 0 connected 2025/08/13 14:11:07 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62392: connect: connection refused 2025/08/13 14:11:07 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62392: connect: connection refused 2025/08/13 14:11:12 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24714: connect: connection refused 2025/08/13 14:11:12 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24714: connect: connection refused 2025/08/13 14:11:15 runner 2 connected 2025/08/13 14:11:17 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:11:22 base crash: lost connection to test machine 2025/08/13 14:11:35 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:59594: connect: connection refused 2025/08/13 14:11:35 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:59594: connect: connection refused 2025/08/13 14:11:42 runner 0 connected 2025/08/13 14:11:45 patched crashed: lost connection to test 
machine [need repro = false] 2025/08/13 14:11:45 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:17072: connect: connection refused 2025/08/13 14:11:45 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:17072: connect: connection refused 2025/08/13 14:11:55 base crash: lost connection to test machine 2025/08/13 14:11:56 runner 1 connected 2025/08/13 14:12:08 runner 3 connected 2025/08/13 14:12:19 runner 1 connected 2025/08/13 14:12:31 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60816: connect: connection refused 2025/08/13 14:12:31 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60816: connect: connection refused 2025/08/13 14:12:41 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:12:42 runner 0 connected 2025/08/13 14:12:52 runner 2 connected 2025/08/13 14:12:54 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28718: connect: connection refused 2025/08/13 14:12:54 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:28718: connect: connection refused 2025/08/13 14:12:59 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33234: connect: connection refused 2025/08/13 14:12:59 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33234: connect: connection refused 2025/08/13 14:13:04 base crash: lost connection to test machine 2025/08/13 14:13:09 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:13:11 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13454: connect: connection refused 2025/08/13 14:13:11 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13454: connect: connection refused 2025/08/13 14:13:16 base: boot error: can't ssh into the instance 2025/08/13 14:13:21 base crash: lost connection to test machine 2025/08/13 14:13:29 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19136: connect: connection refused 2025/08/13 14:13:29 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19136: connect: connection refused 2025/08/13 14:13:31 runner 1 connected 2025/08/13 14:13:38 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63128: connect: connection refused 2025/08/13 14:13:38 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63128: connect: connection refused 2025/08/13 14:13:39 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:13:48 base crash: lost connection to test machine 2025/08/13 14:13:53 runner 1 connected 2025/08/13 14:14:06 runner 3 connected 2025/08/13 14:14:13 runner 3 connected 2025/08/13 14:14:18 runner 0 connected 2025/08/13 14:14:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 94, "corpus": 44722, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48509, "coverage": 308470, "distributor delayed": 52632, "distributor undelayed": 52622, "distributor violated": 586, "exec candidate": 78567, "exec collide": 1291, "exec fuzz": 2454, "exec gen": 119, "exec hints": 1089, "exec inject": 0, "exec minimize": 2356, "exec retries": 22, "exec seeds": 264, "exec smash": 2131, "exec total [base]": 171038, "exec total [new]": 341462, "exec triage": 142915, "executor restarts": 855, "fault jobs": 0, "fuzzer 
jobs": 73, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 25, "max signal": 313000, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1253, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45807, "no exec duration": 3170821000000, "no exec requests": 9526, "pending": 3, "prog exec time": 2159, "reproducing": 4, "rpc recv": 9531491076, "rpc sent": 2183818008, "signal": 302847, "smash jobs": 31, "triage jobs": 17, "vm output": 77016863, "vm restarts [base]": 82, "vm restarts [new]": 99 } 2025/08/13 14:14:27 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62354: connect: connection refused 2025/08/13 14:14:27 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62354: connect: connection refused 2025/08/13 14:14:36 runner 0 connected 2025/08/13 14:14:37 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:14:39 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:15261: connect: connection refused 2025/08/13 14:14:39 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:15261: connect: connection refused 2025/08/13 14:14:45 runner 2 connected 2025/08/13 14:14:48 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:14683: connect: connection refused 2025/08/13 14:14:48 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:14683: connect: connection refused 2025/08/13 14:14:49 base crash: lost connection to test machine 2025/08/13 14:14:50 new: boot error: can't ssh into the instance 2025/08/13 14:14:51 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33939: connect: connection refused 2025/08/13 14:14:51 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33939: connect: connection refused 2025/08/13 14:14:58 base crash: lost connection to test machine 2025/08/13 14:15:01 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:15:05 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:40051: connect: connection refused 2025/08/13 14:15:05 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:40051: connect: connection refused 2025/08/13 14:15:07 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45494: connect: connection refused 2025/08/13 14:15:07 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45494: connect: connection refused 2025/08/13 14:15:15 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:15:17 base crash: lost connection to test machine 2025/08/13 14:15:32 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:38936: connect: connection refused 2025/08/13 14:15:32 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:38936: connect: connection refused 2025/08/13 14:15:33 runner 1 connected 2025/08/13 14:15:42 base crash: lost connection to test machine 2025/08/13 14:15:46 runner 1 connected 2025/08/13 14:15:47 runner 2 connected 2025/08/13 14:15:55 runner 3 connected 2025/08/13 14:15:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64709: connect: connection refused 2025/08/13 14:15:58 VM-1 failed reading regs: qemu hmp 
command 'info registers': dial tcp 127.0.0.1:64709: connect: connection refused 2025/08/13 14:16:04 runner 0 connected 2025/08/13 14:16:08 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:16:14 runner 0 connected 2025/08/13 14:16:20 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42711: connect: connection refused 2025/08/13 14:16:20 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42711: connect: connection refused 2025/08/13 14:16:29 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:26878: connect: connection refused 2025/08/13 14:16:29 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:26878: connect: connection refused 2025/08/13 14:16:30 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:16:39 base crash: lost connection to test machine 2025/08/13 14:16:40 runner 2 connected 2025/08/13 14:17:05 runner 1 connected 2025/08/13 14:17:27 runner 2 connected 2025/08/13 14:17:31 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:46910: connect: connection refused 2025/08/13 14:17:31 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:46910: connect: connection refused 2025/08/13 14:17:36 runner 3 connected 2025/08/13 14:17:41 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:18:08 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:1420: connect: connection refused 2025/08/13 14:18:08 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:1420: connect: connection refused 2025/08/13 14:18:09 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48216: connect: connection refused 2025/08/13 14:18:09 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:48216: connect: connection refused 2025/08/13 14:18:10 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31764: connect: connection refused 2025/08/13 14:18:10 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31764: connect: connection refused 2025/08/13 14:18:18 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:18:19 base crash: lost connection to test machine 2025/08/13 14:18:20 base crash: lost connection to test machine 2025/08/13 14:18:29 repro finished 'possible deadlock in mark_as_free_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 14:18:29 failed repro for "possible deadlock in mark_as_free_ex", err=%!s() 2025/08/13 14:18:29 "possible deadlock in mark_as_free_ex": saved crash log into 1755094709.crash.log 2025/08/13 14:18:29 "possible deadlock in mark_as_free_ex": saved repro log into 1755094709.repro.log 2025/08/13 14:18:34 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9862: connect: connection refused 2025/08/13 14:18:34 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:9862: connect: connection refused 2025/08/13 14:18:38 runner 1 connected 2025/08/13 14:18:44 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:19:03 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50280: connect: connection refused 2025/08/13 14:19:03 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50280: 
connect: connection refused 2025/08/13 14:19:13 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:19:15 runner 0 connected 2025/08/13 14:19:15 runner 1 connected 2025/08/13 14:19:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 105, "corpus": 44732, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48637, "coverage": 308490, "distributor delayed": 52658, "distributor undelayed": 52652, "distributor violated": 586, "exec candidate": 78567, "exec collide": 1375, "exec fuzz": 2600, "exec gen": 124, "exec hints": 1171, "exec inject": 0, "exec minimize": 2567, "exec retries": 23, "exec seeds": 288, "exec smash": 2260, "exec total [base]": 171934, "exec total [new]": 342179, "exec triage": 142946, "executor restarts": 879, "fault jobs": 0, "fuzzer jobs": 69, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 27, "max signal": 313035, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1414, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45817, "no exec duration": 3365858000000, "no exec requests": 9947, "pending": 3, "prog exec time": 765, "reproducing": 3, "rpc recv": 10005712640, "rpc sent": 2230173144, "signal": 302864, "smash jobs": 33, "triage jobs": 9, "vm output": 79196844, "vm restarts [base]": 89, "vm restarts [new]": 107 } 2025/08/13 14:19:33 runner 2 connected 2025/08/13 14:19:44 runner 5 connected 2025/08/13 14:19:51 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:20:09 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19820: connect: connection refused 2025/08/13 14:20:09 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19820: connect: connection refused 2025/08/13 14:20:09 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8244: connect: connection refused 2025/08/13 14:20:09 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8244: connect: connection refused 2025/08/13 14:20:09 runner 1 connected 2025/08/13 14:20:19 base crash: lost connection to test machine 2025/08/13 14:20:19 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:20:48 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13290: connect: connection refused 2025/08/13 14:20:48 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13290: connect: connection refused 2025/08/13 14:20:48 runner 0 connected 2025/08/13 14:20:58 base crash: lost connection to test machine 2025/08/13 14:21:09 runner 2 connected 2025/08/13 14:21:12 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33187: connect: connection refused 2025/08/13 14:21:12 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:33187: connect: connection refused 2025/08/13 14:21:17 runner 1 connected 2025/08/13 14:21:22 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:21:43 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64278: connect: connection refused 2025/08/13 14:21:43 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:64278: connect: connection refused 2025/08/13 14:21:47 runner 3 connected 2025/08/13 14:21:48 base crash: kernel BUG 
in may_open 2025/08/13 14:21:50 patched crashed: kernel BUG in may_open [need repro = false] 2025/08/13 14:21:53 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:21:57 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:46982: connect: connection refused 2025/08/13 14:21:57 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:46982: connect: connection refused 2025/08/13 14:22:07 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:22:18 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57019: connect: connection refused 2025/08/13 14:22:18 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:57019: connect: connection refused 2025/08/13 14:22:21 runner 1 connected 2025/08/13 14:22:28 base crash: lost connection to test machine 2025/08/13 14:22:32 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31922: connect: connection refused 2025/08/13 14:22:32 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31922: connect: connection refused 2025/08/13 14:22:39 runner 5 connected 2025/08/13 14:22:42 base crash: lost connection to test machine 2025/08/13 14:22:50 runner 2 connected 2025/08/13 14:22:55 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31834: connect: connection refused 2025/08/13 14:22:55 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31834: connect: connection refused 2025/08/13 14:23:04 runner 0 connected 2025/08/13 14:23:05 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:23:11 new: boot error: can't ssh into the instance 2025/08/13 14:23:25 runner 2 connected 2025/08/13 14:23:40 runner 3 connected 2025/08/13 14:24:02 runner 1 connected 2025/08/13 14:24:07 runner 4 connected 2025/08/13 14:24:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 134, "corpus": 44754, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 49065, "coverage": 308628, "distributor delayed": 52740, "distributor undelayed": 52739, "distributor violated": 588, "exec candidate": 78567, "exec collide": 1565, "exec fuzz": 2953, "exec gen": 144, "exec hints": 1409, "exec inject": 0, "exec minimize": 2936, "exec retries": 24, "exec seeds": 339, "exec smash": 2536, "exec total [base]": 172967, "exec total [new]": 343796, "exec triage": 143047, "executor restarts": 925, "fault jobs": 0, "fuzzer jobs": 92, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 42, "max signal": 313241, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1628, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45857, "no exec duration": 3435447000000, "no exec requests": 10083, "pending": 3, "prog exec time": 877, "reproducing": 3, "rpc recv": 10527621524, "rpc sent": 2307009768, "signal": 302942, "smash jobs": 37, "triage jobs": 13, "vm output": 82061330, "vm restarts [base]": 93, "vm restarts [new]": 118 } 2025/08/13 14:25:07 new: boot error: can't ssh into the instance 2025/08/13 14:25:35 patched crashed: INFO: task hung in bch2_copygc_stop [need repro = true] 2025/08/13 14:25:35 scheduled a reproduction of 'INFO: task hung in bch2_copygc_stop' 2025/08/13 14:25:35 start reproducing 'INFO: task hung in 
bch2_copygc_stop' 2025/08/13 14:25:37 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/13 14:26:05 runner 3 connected 2025/08/13 14:26:32 runner 5 connected 2025/08/13 14:26:34 runner 4 connected 2025/08/13 14:28:26 base: boot error: can't ssh into the instance 2025/08/13 14:28:35 new: boot error: can't ssh into the instance 2025/08/13 14:29:23 runner 0 connected 2025/08/13 14:29:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 206, "corpus": 44793, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 49993, "coverage": 308698, "distributor delayed": 52857, "distributor undelayed": 52857, "distributor violated": 589, "exec candidate": 78567, "exec collide": 1921, "exec fuzz": 3708, "exec gen": 178, "exec hints": 1855, "exec inject": 0, "exec minimize": 3595, "exec retries": 24, "exec seeds": 456, "exec smash": 3118, "exec total [base]": 175157, "exec total [new]": 346934, "exec triage": 143230, "executor restarts": 960, "fault jobs": 0, "fuzzer jobs": 121, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 51, "max signal": 313436, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2091, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45927, "no exec duration": 3435447000000, "no exec requests": 10083, "pending": 3, "prog exec time": 922, "reproducing": 4, "rpc recv": 10684802096, "rpc sent": 2407421536, "signal": 303005, "smash jobs": 64, "triage jobs": 6, "vm output": 85917508, "vm restarts [base]": 94, "vm restarts [new]": 121 } 2025/08/13 14:31:54 base: boot error: can't ssh into the instance 2025/08/13 14:32:06 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/13 14:32:06 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/13 14:32:50 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/13 14:32:50 runner 1 connected 2025/08/13 14:33:02 runner 3 connected 2025/08/13 14:33:47 runner 0 connected 2025/08/13 14:34:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 223, "corpus": 44831, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 50987, "coverage": 308781, "distributor delayed": 52959, "distributor undelayed": 52959, "distributor violated": 589, "exec candidate": 78567, "exec collide": 2403, "exec fuzz": 4554, "exec gen": 227, "exec hints": 2363, "exec inject": 0, "exec minimize": 4344, "exec retries": 24, "exec seeds": 562, "exec smash": 3878, "exec total [base]": 178362, "exec total [new]": 350616, "exec triage": 143412, "executor restarts": 992, "fault jobs": 0, "fuzzer jobs": 146, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 59, "max signal": 313664, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2551, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45996, "no exec duration": 3435447000000, "no exec requests": 10083, "pending": 4, "prog exec time": 696, "reproducing": 4, "rpc recv": 10854698412, "rpc sent": 2547187272, "signal": 303085, "smash jobs": 80, "triage jobs": 7, "vm output": 89973259, "vm restarts [base]": 96, "vm restarts [new]": 122 } 2025/08/13 14:34:35 patched crashed: INFO: task hung in bch2_copygc_stop [need repro 
= true] 2025/08/13 14:34:35 scheduled a reproduction of 'INFO: task hung in bch2_copygc_stop' 2025/08/13 14:34:53 base crash: possible deadlock in ocfs2_init_acl 2025/08/13 14:35:23 reproducing crash 'INFO: task hung in bch2_copygc_stop': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/time/sleep_timeout.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 14:35:24 runner 5 connected 2025/08/13 14:35:40 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/08/13 14:35:50 runner 0 connected 2025/08/13 14:36:15 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/13 14:36:29 runner 2 connected 2025/08/13 14:36:44 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/13 14:36:57 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:37:12 runner 3 connected 2025/08/13 14:37:34 runner 2 connected 2025/08/13 14:37:47 runner 4 connected 2025/08/13 14:39:04 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:39:09 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 14:39:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 258, "corpus": 44863, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 51787, "coverage": 308862, "distributor delayed": 53041, "distributor undelayed": 53039, "distributor violated": 589, "exec candidate": 78567, "exec collide": 2866, "exec fuzz": 5327, "exec gen": 263, "exec hints": 2854, "exec inject": 0, "exec minimize": 5022, "exec retries": 24, "exec seeds": 648, "exec smash": 4576, "exec total [base]": 182268, "exec total [new]": 353961, "exec triage": 143537, "executor restarts": 1039, "fault jobs": 0, "fuzzer jobs": 141, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 65, "max signal": 313818, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2954, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46046, "no exec duration": 3435447000000, "no exec requests": 10083, "pending": 5, "prog exec time": 788, "reproducing": 4, "rpc recv": 11083261804, "rpc sent": 2691371408, "signal": 303159, "smash jobs": 70, "triage jobs": 6, "vm output": 94036096, "vm restarts [base]": 99, "vm restarts [new]": 125 } 2025/08/13 14:39:52 runner 2 connected 2025/08/13 14:39:58 runner 5 connected 2025/08/13 14:40:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/13 14:40:46 base crash: lost connection to test machine 2025/08/13 14:41:03 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 14:41:28 runner 4 connected 2025/08/13 14:41:28 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/13 14:41:36 runner 2 connected 2025/08/13 14:41:53 runner 0 connected 2025/08/13 14:42:18 runner 3 connected 2025/08/13 14:43:52 base crash: possible deadlock in ocfs2_init_acl 2025/08/13 14:44:08 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/08/13 14:44:23 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 284, "corpus": 44892, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 52846, "coverage": 308950, "distributor 
delayed": 53156, "distributor undelayed": 53152, "distributor violated": 589, "exec candidate": 78567, "exec collide": 3367, "exec fuzz": 6294, "exec gen": 313, "exec hints": 3522, "exec inject": 0, "exec minimize": 5697, "exec retries": 24, "exec seeds": 731, "exec smash": 5339, "exec total [base]": 185748, "exec total [new]": 357839, "exec triage": 143704, "executor restarts": 1061, "fault jobs": 0, "fuzzer jobs": 147, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 70, "max signal": 313942, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3336, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46107, "no exec duration": 3435518000000, "no exec requests": 10084, "pending": 5, "prog exec time": 836, "reproducing": 4, "rpc recv": 11318044844, "rpc sent": 2850991880, "signal": 303245, "smash jobs": 67, "triage jobs": 10, "vm output": 97959799, "vm restarts [base]": 102, "vm restarts [new]": 128 } 2025/08/13 14:44:49 runner 2 connected 2025/08/13 14:44:59 patched crashed: lost connection to test machine [need repro = false] 2025/08/13 14:45:05 runner 3 connected 2025/08/13 14:45:09 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/13 14:45:49 runner 5 connected 2025/08/13 14:46:06 runner 1 connected 2025/08/13 14:46:07 reproducing crash 'INFO: task hung in bch2_copygc_stop': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/time/sleep_timeout.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/13 14:46:30 base crash: lost connection to test machine 2025/08/13 14:47:26 runner 0 connected 2025/08/13 14:47:34 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown 2025/08/13 14:47:43 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/13 14:48:24 runner 3 connected 2025/08/13 14:48:32 runner 4 connected 2025/08/13 14:48:44 new: boot error: can't ssh into the instance 2025/08/13 14:49:19 status reporting terminated 2025/08/13 14:49:19 bug reporting terminated 2025/08/13 14:49:19 repro finished 'unregister_netdevice: waiting for DEV to become free', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 14:49:19 syz-diff (base): kernel context loop terminated 2025/08/13 14:51:07 repro finished 'kernel BUG in jfs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 14:52:12 repro finished 'INFO: task hung in bch2_copygc_stop', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 14:54:13 repro finished 'WARNING in io_ring_exit_work', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/13 14:58:50 syz-diff (new): kernel context loop terminated 2025/08/13 14:58:50 diff fuzzing terminated 2025/08/13 14:58:50 fuzzing is finished 2025/08/13 14:58:50 status at the end: Title On-Base On-Patched INFO: rcu detected stall in corrupted 1 crashes 1 crashes INFO: task hung in __iterate_supers 1 crashes INFO: task hung in bch2_copygc_stop 2 crashes INFO: task hung in v9fs_evict_inode 2 crashes 1 crashes INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes KASAN: slab-use-after-free Read in __xfrm_state_lookup 1 crashes 1 crashes KASAN: slab-use-after-free Read in xfrm_alloc_spi 6 crashes 14 crashes KASAN: slab-use-after-free Read in xfrm_state_find 2 crashes KASAN: 
slab-use-after-free Write in __xfrm_state_delete 1 crashes KASAN: use-after-free Read in xfrm_alloc_spi 1 crashes WARNING in __ww_mutex_wound 1 crashes WARNING in dbAdjTree 2 crashes 4 crashes WARNING in io_ring_exit_work 2 crashes WARNING in xfrm6_tunnel_net_exit 3 crashes 5 crashes WARNING in xfrm_state_fini 6 crashes 7 crashes[reproduced] general protection fault in lmLogSync 1 crashes [reproduced] general protection fault in pcl818_ai_cancel 2 crashes general protection fault in xfrm_state_find 1 crashes kernel BUG in jfs_evict_inode 3 crashes kernel BUG in may_open 2 crashes 2 crashes lost connection to test machine 36 crashes 44 crashes no output from test machine 22 crashes 1 crashes possible deadlock in __netdev_update_features 1 crashes possible deadlock in attr_data_get_block 1 crashes 2 crashes possible deadlock in input_inject_event 1 crashes possible deadlock in mark_as_free_ex 4 crashes possible deadlock in ntfs_fiemap 1 crashes possible deadlock in ocfs2_acquire_dquot 1 crashes 1 crashes possible deadlock in ocfs2_init_acl 4 crashes 5 crashes possible deadlock in ocfs2_reserve_suballoc_bits 1 crashes possible deadlock in ocfs2_try_remove_refcount_tree 4 crashes 5 crashes possible deadlock in run_unpack_ex 1 crashes unregister_netdevice: waiting for DEV to become free 4 crashes 6 crashes
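To see how the run progressed over time, the periodic STAT snapshots scattered through the log (corpus size, coverage, exec totals for the base and patched kernels) can be pulled out and tabulated. A minimal Go sketch follows, assuming each snapshot is the flat JSON object of numeric counters printed after the literal 'STAT ' marker, as in the entries above; it is an illustration, not a syzkaller tool, and it simply skips snapshots that were wrapped or truncated when the log was captured.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Reads a syz-diff console log on stdin and prints a few counters
// from every "STAT { ... }" snapshot it can parse.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // snapshot lines are long
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, "STAT {")
		if i < 0 {
			continue
		}
		var stat map[string]float64
		if err := json.Unmarshal([]byte(line[i+len("STAT "):]), &stat); err != nil {
			continue // wrapped, truncated, or merged snapshot: skip it
		}
		ts := strings.Fields(line)[1] // "HH:MM:SS" from the log prefix
		fmt.Printf("%s corpus=%.0f coverage=%.0f exec[base]=%.0f exec[new]=%.0f\n",
			ts, stat["corpus"], stat["coverage"],
			stat["exec total [base]"], stat["exec total [new]"])
	}
}

On an unwrapped copy of this log it would show, for example, coverage rising from 307577 at 13:39:23 to 308950 at 14:44:23.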