2025/08/19 12:56:33 extracted 303749 symbol hashes for base and 303749 for patched
2025/08/19 12:56:33 adding modified_functions to focus areas: ["__mas_set_range"]
2025/08/19 12:56:33 adding directly modified files to focus areas: ["MAINTAINERS" "include/linux/maple_tree.h" "rust/helpers/helpers.c" "rust/helpers/maple_tree.c" "rust/kernel/lib.rs" "rust/kernel/maple_tree.rs"]
2025/08/19 12:56:34 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/19 12:57:32 runner 2 connected
2025/08/19 12:57:32 runner 1 connected
2025/08/19 12:57:32 runner 3 connected
2025/08/19 12:57:32 runner 7 connected
2025/08/19 12:57:32 runner 1 connected
2025/08/19 12:57:32 runner 3 connected
2025/08/19 12:57:32 runner 2 connected
2025/08/19 12:57:32 runner 6 connected
2025/08/19 12:57:32 runner 5 connected
2025/08/19 12:57:33 runner 9 connected
2025/08/19 12:57:33 runner 8 connected
2025/08/19 12:57:39 runner 4 connected
2025/08/19 12:57:40 executor cover filter: 0 PCs
2025/08/19 12:57:40 initializing coverage information...
2025/08/19 12:57:45 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open
/dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap 
: failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) 
failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] 
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/19 12:57:45 base: machine check complete 2025/08/19 12:57:46 discovered 7699 source files, 338618 symbols 2025/08/19 12:57:46 coverage filter: __mas_set_range: [__mas_set_range __mas_set_range __mas_set_range] 2025/08/19 12:57:46 coverage filter: MAINTAINERS: [] 2025/08/19 12:57:46 coverage filter: include/linux/maple_tree.h: [] 2025/08/19 12:57:46 coverage filter: rust/helpers/helpers.c: [] 2025/08/19 12:57:46 coverage filter: rust/helpers/maple_tree.c: [] 2025/08/19 12:57:46 coverage filter: rust/kernel/lib.rs: [] 2025/08/19 12:57:46 coverage filter: rust/kernel/maple_tree.rs: [] 2025/08/19 12:57:46 area "symbols": 42 PCs in the cover filter 2025/08/19 12:57:46 area "files": 0 PCs in the cover filter 2025/08/19 12:57:46 area "": 0 PCs in the cover filter 2025/08/19 12:57:46 executor cover filter: 0 PCs 2025/08/19 12:57:50 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open 
/sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory 
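Each openat$* entry above was disabled because probing the corresponding device or debugfs node failed with "no such file or directory". A minimal sketch of that kind of availability probe (an illustration of the check, not syzkaller's actual implementation; the paths are taken from the log):

/* Minimal sketch of a device-node probe like the ones reported above. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void probe(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        /* e.g. "failed to open /dev/sgx_provision: no such file or directory" */
        printf("%-24s failed to open: %s\n", path, strerror(errno));
        return;
    }
    printf("%-24s available\n", path);
    close(fd);
}

int main(void)
{
    probe("/dev/kvm");
    probe("/dev/sgx_provision");
    probe("/dev/sev");
    return 0;
}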
openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such 
device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] 
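"Transitively disabled" means a call is dropped because every syscall that could create its input resource already failed the machine check. For example, socket$vsock_dgram failed above with "no such device", so bind$vsock_dgram and connect$vsock_dgram can never obtain a sock_vsock_dgram fd. A minimal sketch of that producer/consumer pair (0x28 is AF_VSOCK and 0x2 is SOCK_DGRAM in the logged failure; the port number below is an arbitrary example):

/* Minimal sketch of the sock_vsock_dgram dependency above:
 * socket$vsock_dgram produces the fd that bind$vsock_dgram consumes. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");   /* ENODEV ("no such device") disables the consumers too */
        return 1;
    }
    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_ANY;
    addr.svm_port = 1234;       /* arbitrary example port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)))
        perror("bind");
    close(fd);
    return 0;
}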
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] 
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
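The KVM entries in this block form a chain of resources: openat$kvm yields fd_kvm, KVM_CREATE_VM yields fd_kvmvm, KVM_CREATE_VCPU yields fd_kvmcpu, KVM_GET_VCPU_MMAP_SIZE yields vcpu_mmap_size, and mmap$KVM_VCPU yields kvm_run_ptr. A minimal sketch of that bring-up sequence using the standard Linux KVM API (on this test VM it fails at the first step, since /dev/kvm is absent per the log; error handling is abbreviated):

/* Minimal sketch of the resource chain behind the KVM entries above:
 * openat$kvm -> ioctl$KVM_CREATE_VM -> ioctl$KVM_CREATE_VCPU ->
 * ioctl$KVM_GET_VCPU_MMAP_SIZE -> mmap$KVM_VCPU -> ioctl$KVM_RUN. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);         /* fd_kvm */
    if (kvm < 0) {
        perror("open /dev/kvm");        /* absent on this VM, per the log */
        return 1;
    }
    if (ioctl(kvm, KVM_GET_API_VERSION, 0) != KVM_API_VERSION)
        return 1;
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);                  /* fd_kvmvm */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);               /* fd_kvmcpu */
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);   /* vcpu_mmap_size */
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);        /* kvm_run_ptr */
    if (run == MAP_FAILED)
        return 1;
    /* No memslots or registers are set up, so this exits almost immediately. */
    ioctl(vcpu, KVM_RUN, 0);
    printf("exit_reason=%u\n", run->exit_reason);
    return 0;
}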
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] 
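fd_perf above is produced by perf_event_open(2); the PERF_EVENT_IOC_* ioctls, mmap$perf and read() then operate on that fd. A minimal sketch using a software event, so no PMU hardware is assumed (illustration only):

/* Minimal sketch of the fd_perf chain above: perf_event_open() produces the fd
 * that ioctl$PERF_EVENT_IOC_ENABLE/DISABLE/RESET and read() consume. */
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_SOFTWARE;
    attr.config = PERF_COUNT_SW_TASK_CLOCK;
    attr.disabled = 1;

    /* no glibc wrapper exists, so the raw syscall is used */
    int fd = syscall(SYS_perf_event_open, &attr, 0 /* this thread */,
                     -1 /* any cpu */, -1 /* no group */, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }
    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (volatile int i = 0; i < 1000000; i++)
        ;                               /* something to measure */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    long long count = 0;
    read(fd, &count, sizeof(count));
    printf("task clock: %lld ns\n", count);
    close(fd);
    return 0;
}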
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
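The fd_vfio entries above hang off openat$vfio, i.e. the VFIO container at /dev/vfio/vfio, which this VM does not provide. A minimal sketch of the container-level calls listed (VFIO_GET_API_VERSION, VFIO_CHECK_EXTENSION), assuming the standard linux/vfio.h UAPI; attaching an IOMMU group and then VFIO_SET_IOMMU / VFIO_IOMMU_MAP_DMA would come after this:

/* Minimal sketch of the fd_vfio chain above: openat$vfio produces the
 * container fd that the VFIO_* ioctls consume. */
#include <fcntl.h>
#include <linux/vfio.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);     /* fd_vfio */
    if (container < 0) {
        perror("open /dev/vfio/vfio");  /* absent on this VM, per the log */
        return 1;
    }
    if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) {
        fprintf(stderr, "unexpected VFIO API version\n");
        return 1;
    }
    if (ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU))
        printf("Type1 IOMMU supported\n");
    close(container);
    return 0;
}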
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/19 12:57:50 new: machine check complete 2025/08/19 12:57:51 new: adding 81150 seeds 2025/08/19 12:58:58 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 12:58:58 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 12:59:10 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 12:59:10 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 12:59:19 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 12:59:19 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 12:59:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 12:59:31 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 12:59:42 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 12:59:42 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 13:00:08 runner 4 connected 2025/08/19 13:00:17 runner 1 connected 2025/08/19 13:00:27 runner 9 connected 2025/08/19 13:00:32 runner 5 connected 2025/08/19 13:01:36 STAT { "buffer too small": 0, "candidate triage jobs": 51, "candidates": 77776, "comps overflows": 0, "corpus": 3294, "corpus [files]": 0, "corpus [symbols]": 283, "cover overflows": 2175, "coverage": 153867, "distributor delayed": 4210, "distributor undelayed": 4206, "distributor violated": 44, "exec candidate": 3374, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7992, "exec total [new]": 15203, "exec triage": 10604, "executor restarts": 108, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 155334, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3374, "no exec duration": 41570000000, "no exec requests": 314, "pending": 5, "prog exec time": 192, "reproducing": 0, "rpc recv": 
852261584, "rpc sent": 96393896, "signal": 151524, "smash jobs": 0, "triage jobs": 0, "vm output": 1957086, "vm restarts [base]": 3, "vm restarts [new]": 13 } 2025/08/19 13:01:53 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:01:56 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 13:01:56 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 13:02:50 runner 1 connected 2025/08/19 13:02:53 runner 4 connected 2025/08/19 13:04:09 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:04:14 base crash: WARNING in dbAdjTree 2025/08/19 13:04:19 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/19 13:04:19 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/19 13:04:59 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/19 13:04:59 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/19 13:05:11 runner 1 connected 2025/08/19 13:05:16 runner 1 connected 2025/08/19 13:05:42 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:05:56 runner 6 connected 2025/08/19 13:06:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 13:06:22 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 13:06:31 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 13:06:31 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/19 13:06:36 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 74324, "comps overflows": 0, "corpus": 6713, "corpus [files]": 0, "corpus [symbols]": 459, "cover overflows": 4721, "coverage": 189303, "distributor delayed": 10111, "distributor undelayed": 10102, "distributor violated": 597, "exec candidate": 6826, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 13126, "exec total [new]": 31093, "exec triage": 21488, "executor restarts": 143, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 191097, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 6825, "no exec duration": 42202000000, "no exec requests": 316, "pending": 10, "prog exec time": 246, "reproducing": 0, "rpc recv": 1302551472, "rpc sent": 172848000, "signal": 186033, "smash jobs": 0, "triage jobs": 0, "vm output": 3721186, "vm restarts [base]": 5, "vm restarts [new]": 16 } 2025/08/19 13:06:38 runner 9 connected 2025/08/19 13:06:41 base: boot error: can't ssh into the instance 2025/08/19 13:06:41 new: boot error: can't ssh into the instance 2025/08/19 13:06:48 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:06:59 patched crashed: INFO: task hung in tun_chr_close [need repro = true] 2025/08/19 13:06:59 scheduled a reproduction of 'INFO: task hung in tun_chr_close' 2025/08/19 13:07:08 base crash: WARNING in xfrm_state_fini 2025/08/19 13:07:18 runner 1 connected 2025/08/19 13:07:28 runner 5 connected 2025/08/19 13:07:32 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 13:07:32 scheduled a reproduction of 'possible deadlock 
in ocfs2_xattr_set' 2025/08/19 13:07:33 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:07:38 runner 2 connected 2025/08/19 13:07:38 runner 0 connected 2025/08/19 13:07:38 runner 0 connected 2025/08/19 13:07:57 runner 3 connected 2025/08/19 13:08:23 runner 1 connected 2025/08/19 13:08:29 runner 3 connected 2025/08/19 13:08:48 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 13:08:48 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 13:09:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 13:09:00 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 13:09:04 new: boot error: can't ssh into the instance 2025/08/19 13:09:28 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:09:38 base crash: WARNING in xfrm_state_fini 2025/08/19 13:09:45 runner 0 connected 2025/08/19 13:09:59 runner 5 connected 2025/08/19 13:10:01 runner 8 connected 2025/08/19 13:10:24 runner 3 connected 2025/08/19 13:10:35 runner 0 connected 2025/08/19 13:11:08 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 13:11:08 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 13:11:09 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 13:11:09 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 13:11:11 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 13:11:11 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 13:11:22 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 13:11:22 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 13:11:36 STAT { "buffer too small": 0, "candidate triage jobs": 32, "candidates": 70931, "comps overflows": 0, "corpus": 10077, "corpus [files]": 0, "corpus [symbols]": 541, "cover overflows": 6851, "coverage": 211321, "distributor delayed": 14619, "distributor undelayed": 14616, "distributor violated": 693, "exec candidate": 10219, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 17285, "exec total [new]": 45790, "exec triage": 31826, "executor restarts": 219, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 213081, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 10218, "no exec duration": 42202000000, "no exec requests": 316, "pending": 18, "prog exec time": 219, "reproducing": 0, "rpc recv": 2063664320, "rpc sent": 271518144, "signal": 208246, "smash jobs": 0, "triage jobs": 0, "vm output": 6760155, "vm restarts [base]": 10, "vm restarts [new]": 25 } 2025/08/19 13:11:52 base crash: kernel BUG in txUnlock 2025/08/19 13:12:05 runner 2 connected 2025/08/19 13:12:06 runner 5 connected 2025/08/19 13:12:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 13:12:07 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 13:12:08 runner 8 connected 2025/08/19 13:12:21 runner 6 connected 2025/08/19 13:12:50 runner 3 connected 2025/08/19 13:13:04 runner 4 connected 2025/08/19 13:13:24 base crash: lost connection to test machine 2025/08/19 13:14:09 patched crashed: WARNING in 
xfrm_state_fini [need repro = false] 2025/08/19 13:14:15 base: boot error: can't ssh into the instance 2025/08/19 13:14:21 runner 0 connected 2025/08/19 13:14:23 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:14:34 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:15:02 base crash: kernel BUG in txUnlock 2025/08/19 13:15:06 runner 9 connected 2025/08/19 13:15:20 runner 6 connected 2025/08/19 13:15:31 runner 0 connected 2025/08/19 13:16:01 runner 0 connected 2025/08/19 13:16:36 STAT { "buffer too small": 0, "candidate triage jobs": 53, "candidates": 66695, "comps overflows": 0, "corpus": 14261, "corpus [files]": 0, "corpus [symbols]": 607, "cover overflows": 9184, "coverage": 232137, "distributor delayed": 19087, "distributor undelayed": 19087, "distributor violated": 695, "exec candidate": 14455, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 22568, "exec total [new]": 64722, "exec triage": 44732, "executor restarts": 275, "fault jobs": 0, "fuzzer jobs": 53, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 233812, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14454, "no exec duration": 42202000000, "no exec requests": 316, "pending": 19, "prog exec time": 295, "reproducing": 0, "rpc recv": 2787376804, "rpc sent": 392759992, "signal": 228703, "smash jobs": 0, "triage jobs": 0, "vm output": 9913841, "vm restarts [base]": 13, "vm restarts [new]": 33 } 2025/08/19 13:17:05 new: boot error: can't ssh into the instance 2025/08/19 13:17:10 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:19:28 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/08/19 13:19:28 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/08/19 13:19:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:20:24 runner 1 connected 2025/08/19 13:20:29 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/08/19 13:20:29 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/08/19 13:20:34 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 13:20:52 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 13:21:19 runner 3 connected 2025/08/19 13:21:36 STAT { "buffer too small": 0, "candidate triage jobs": 28, "candidates": 61919, "comps overflows": 0, "corpus": 18991, "corpus [files]": 0, "corpus [symbols]": 768, "cover overflows": 12797, "coverage": 249102, "distributor delayed": 24308, "distributor undelayed": 24307, "distributor violated": 699, "exec candidate": 19231, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 28776, "exec total [new]": 88674, "exec triage": 59604, "executor restarts": 331, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 251086, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: 
pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 19230, "no exec duration": 42287000000, "no exec requests": 318, "pending": 21, "prog exec time": 240, "reproducing": 0, "rpc recv": 3212084732, "rpc sent": 502244912, "signal": 245363, "smash jobs": 0, "triage jobs": 0, "vm output": 12304711, "vm restarts [base]": 13, "vm restarts [new]": 35 } 2025/08/19 13:21:45 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/19 13:21:45 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/19 13:21:50 runner 1 connected 2025/08/19 13:22:08 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/19 13:22:57 runner 0 connected 2025/08/19 13:23:50 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 13:23:50 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 13:24:00 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 13:24:00 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 13:24:11 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/19 13:24:11 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/19 13:24:12 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 13:24:12 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 13:24:21 base: boot error: can't ssh into the instance 2025/08/19 13:24:22 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 13:24:22 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 13:24:36 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 13:24:36 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 13:24:39 runner 9 connected 2025/08/19 13:24:49 runner 8 connected 2025/08/19 13:25:00 runner 1 connected 2025/08/19 13:25:00 runner 3 connected 2025/08/19 13:25:03 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 13:25:12 runner 2 connected 2025/08/19 13:25:12 runner 2 connected 2025/08/19 13:25:16 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:25:25 runner 5 connected 2025/08/19 13:26:01 runner 1 connected 2025/08/19 13:26:06 runner 0 connected 2025/08/19 13:26:36 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 58636, "comps overflows": 0, "corpus": 22125, "corpus [files]": 0, "corpus [symbols]": 841, "cover overflows": 15436, "coverage": 258610, "distributor delayed": 29089, "distributor undelayed": 29089, "distributor violated": 710, "exec candidate": 22514, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 33995, "exec total [new]": 106998, "exec triage": 69937, "executor restarts": 374, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 261159, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, 
"new inputs": 22513, "no exec duration": 42308000000, "no exec requests": 320, "pending": 28, "prog exec time": 393, "reproducing": 0, "rpc recv": 3801947596, "rpc sent": 593796288, "signal": 254569, "smash jobs": 0, "triage jobs": 0, "vm output": 13881266, "vm restarts [base]": 18, "vm restarts [new]": 41 } 2025/08/19 13:26:46 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/19 13:26:46 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/19 13:27:11 new: boot error: can't ssh into the instance 2025/08/19 13:27:15 base: boot error: can't ssh into the instance 2025/08/19 13:27:38 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:27:43 runner 9 connected 2025/08/19 13:28:07 runner 7 connected 2025/08/19 13:28:12 runner 3 connected 2025/08/19 13:28:34 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 13:28:34 runner 5 connected 2025/08/19 13:29:29 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:29:31 runner 0 connected 2025/08/19 13:29:34 new: boot error: can't ssh into the instance 2025/08/19 13:29:40 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:30:26 runner 1 connected 2025/08/19 13:30:32 runner 0 connected 2025/08/19 13:30:33 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:30:38 runner 8 connected 2025/08/19 13:30:39 new: boot error: can't ssh into the instance 2025/08/19 13:31:30 runner 2 connected 2025/08/19 13:31:36 STAT { "buffer too small": 0, "candidate triage jobs": 51, "candidates": 55517, "comps overflows": 0, "corpus": 25198, "corpus [files]": 0, "corpus [symbols]": 923, "cover overflows": 17195, "coverage": 268039, "distributor delayed": 33199, "distributor undelayed": 33199, "distributor violated": 715, "exec candidate": 25633, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 43048, "exec total [new]": 122247, "exec triage": 79409, "executor restarts": 425, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 270540, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 25632, "no exec duration": 42308000000, "no exec requests": 320, "pending": 29, "prog exec time": 294, "reproducing": 0, "rpc recv": 4323748940, "rpc sent": 700558064, "signal": 263966, "smash jobs": 0, "triage jobs": 0, "vm output": 16547420, "vm restarts [base]": 21, "vm restarts [new]": 47 } 2025/08/19 13:31:36 runner 4 connected 2025/08/19 13:31:51 new: boot error: can't ssh into the instance 2025/08/19 13:32:34 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:32:48 runner 6 connected 2025/08/19 13:32:56 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 13:32:56 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:33:06 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:33:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:33:33 runner 2 connected 2025/08/19 13:33:54 runner 4 connected 2025/08/19 13:33:55 runner 1 connected 2025/08/19 
13:34:06 runner 8 connected 2025/08/19 13:34:24 patched crashed: WARNING in __ww_mutex_wound [need repro = true] 2025/08/19 13:34:24 scheduled a reproduction of 'WARNING in __ww_mutex_wound' 2025/08/19 13:34:24 runner 1 connected 2025/08/19 13:34:36 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 13:35:06 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:35:06 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 13:35:12 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:35:18 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:35:19 base crash: WARNING in xfrm_state_fini 2025/08/19 13:35:22 runner 9 connected 2025/08/19 13:35:23 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:35:27 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:35:33 runner 4 connected 2025/08/19 13:36:03 runner 6 connected 2025/08/19 13:36:03 runner 3 connected 2025/08/19 13:36:10 runner 2 connected 2025/08/19 13:36:16 runner 1 connected 2025/08/19 13:36:17 runner 5 connected 2025/08/19 13:36:20 runner 7 connected 2025/08/19 13:36:24 runner 0 connected 2025/08/19 13:36:36 STAT { "buffer too small": 0, "candidate triage jobs": 51, "candidates": 52092, "comps overflows": 0, "corpus": 28583, "corpus [files]": 0, "corpus [symbols]": 996, "cover overflows": 19310, "coverage": 277190, "distributor delayed": 37058, "distributor undelayed": 37058, "distributor violated": 750, "exec candidate": 29058, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 51553, "exec total [new]": 139438, "exec triage": 89808, "executor restarts": 507, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 279573, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29057, "no exec duration": 42315000000, "no exec requests": 322, "pending": 30, "prog exec time": 324, "reproducing": 0, "rpc recv": 5154820788, "rpc sent": 830662944, "signal": 273102, "smash jobs": 0, "triage jobs": 0, "vm output": 19828233, "vm restarts [base]": 24, "vm restarts [new]": 60 } 2025/08/19 13:36:44 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:36:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:37:06 base crash: WARNING in xfrm_state_fini 2025/08/19 13:37:28 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 13:37:39 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 13:37:41 runner 1 connected 2025/08/19 13:37:51 runner 2 connected 2025/08/19 13:38:02 runner 3 connected 2025/08/19 13:38:29 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 13:38:37 runner 2 connected 2025/08/19 13:38:49 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 13:39:14 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:39:25 runner 4 connected 2025/08/19 13:39:27 patched crashed: possible deadlock in 
ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:39:39 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:39:46 runner 6 connected 2025/08/19 13:39:50 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:40:11 runner 7 connected 2025/08/19 13:40:11 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:40:19 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:40:24 runner 0 connected 2025/08/19 13:40:32 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/19 13:40:32 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/19 13:40:36 runner 1 connected 2025/08/19 13:40:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:40:46 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 13:40:47 runner 8 connected 2025/08/19 13:40:50 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 13:40:50 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 13:40:56 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:41:02 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:41:08 runner 1 connected 2025/08/19 13:41:13 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:41:16 runner 5 connected 2025/08/19 13:41:29 runner 3 connected 2025/08/19 13:41:34 runner 2 connected 2025/08/19 13:41:36 STAT { "buffer too small": 0, "candidate triage jobs": 26, "candidates": 48399, "comps overflows": 0, "corpus": 32243, "corpus [files]": 0, "corpus [symbols]": 1093, "cover overflows": 22085, "coverage": 285781, "distributor delayed": 41130, "distributor undelayed": 41130, "distributor violated": 750, "exec candidate": 32751, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 57323, "exec total [new]": 160146, "exec triage": 101191, "executor restarts": 558, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 288467, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32750, "no exec duration": 42420000000, "no exec requests": 325, "pending": 32, "prog exec time": 341, "reproducing": 0, "rpc recv": 5831095484, "rpc sent": 944764320, "signal": 281612, "smash jobs": 0, "triage jobs": 0, "vm output": 22624575, "vm restarts [base]": 27, "vm restarts [new]": 71 } 2025/08/19 13:41:43 runner 2 connected 2025/08/19 13:41:47 runner 6 connected 2025/08/19 13:41:48 runner 3 connected 2025/08/19 13:41:52 runner 7 connected 2025/08/19 13:42:02 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 13:42:10 runner 9 connected 2025/08/19 13:42:11 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:42:23 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:42:42 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:42:59 runner 8 connected 2025/08/19 13:43:10 
runner 1 connected 2025/08/19 13:43:29 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:43:40 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:43:40 runner 1 connected 2025/08/19 13:43:50 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:43:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:44:06 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:44:11 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:44:16 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:44:26 runner 7 connected 2025/08/19 13:44:37 runner 4 connected 2025/08/19 13:44:40 runner 3 connected 2025/08/19 13:44:48 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:44:50 runner 1 connected 2025/08/19 13:44:55 runner 3 connected 2025/08/19 13:45:13 runner 9 connected 2025/08/19 13:45:47 runner 0 connected 2025/08/19 13:45:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:46:10 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:46:15 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:46:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:46:22 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:46:36 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 45254, "comps overflows": 0, "corpus": 35321, "corpus [files]": 0, "corpus [symbols]": 1150, "cover overflows": 23902, "coverage": 292447, "distributor delayed": 44879, "distributor undelayed": 44868, "distributor violated": 750, "exec candidate": 35896, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 61514, "exec total [new]": 177114, "exec triage": 110644, "executor restarts": 665, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 295357, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 35895, "no exec duration": 43649000000, "no exec requests": 330, "pending": 32, "prog exec time": 329, "reproducing": 0, "rpc recv": 6629735288, "rpc sent": 1064286736, "signal": 288290, "smash jobs": 0, "triage jobs": 0, "vm output": 26018469, "vm restarts [base]": 31, "vm restarts [new]": 82 } 2025/08/19 13:46:50 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:46:51 runner 4 connected 2025/08/19 13:46:52 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:46:59 runner 5 connected 2025/08/19 13:47:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:47:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:47:11 runner 8 connected 2025/08/19 13:47:14 runner 2 connected 2025/08/19 13:47:34 base: boot error: can't ssh 
into the instance 2025/08/19 13:47:39 runner 0 connected 2025/08/19 13:47:41 runner 1 connected 2025/08/19 13:47:45 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:47:51 runner 9 connected 2025/08/19 13:47:53 runner 3 connected 2025/08/19 13:48:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:48:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:48:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:48:22 runner 0 connected 2025/08/19 13:48:23 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 13:48:58 runner 8 connected 2025/08/19 13:49:01 runner 2 connected 2025/08/19 13:49:06 runner 5 connected 2025/08/19 13:49:48 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:50:34 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 13:50:39 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:50:44 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/19 13:50:44 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 13:50:44 runner 1 connected 2025/08/19 13:51:06 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:51:17 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:51:31 runner 1 connected 2025/08/19 13:51:33 runner 2 connected 2025/08/19 13:51:36 STAT { "buffer too small": 0, "candidate triage jobs": 152, "candidates": 42743, "comps overflows": 0, "corpus": 37684, "corpus [files]": 0, "corpus [symbols]": 1203, "cover overflows": 25725, "coverage": 297832, "distributor delayed": 48918, "distributor undelayed": 48787, "distributor violated": 802, "exec candidate": 38407, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 64853, "exec total [new]": 191988, "exec triage": 118259, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 152, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 300879, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38406, "no exec duration": 44066000000, "no exec requests": 333, "pending": 33, "prog exec time": 183, "reproducing": 0, "rpc recv": 7235393740, "rpc sent": 1164820152, "signal": 293520, "smash jobs": 0, "triage jobs": 0, "vm output": 28273068, "vm restarts [base]": 33, "vm restarts [new]": 95 } 2025/08/19 13:51:37 runner 0 connected 2025/08/19 13:51:38 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 13:51:54 runner 8 connected 2025/08/19 13:52:07 runner 5 connected 2025/08/19 13:52:07 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 13:52:26 runner 1 connected 2025/08/19 13:52:29 base: boot error: can't ssh into the instance 2025/08/19 13:52:49 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/19 13:52:56 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true] 2025/08/19 13:52:56 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/19 13:52:57 patched crashed: 
KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 13:53:19 runner 2 connected 2025/08/19 13:53:23 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 13:53:23 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 13:53:38 runner 1 connected 2025/08/19 13:53:39 patched crashed: KASAN: slab-use-after-free Write in __xfrm_state_delete [need repro = true] 2025/08/19 13:53:39 scheduled a reproduction of 'KASAN: slab-use-after-free Write in __xfrm_state_delete' 2025/08/19 13:53:44 runner 0 connected 2025/08/19 13:53:46 runner 3 connected 2025/08/19 13:53:58 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:54:00 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 13:54:13 runner 1 connected 2025/08/19 13:54:17 new: boot error: can't ssh into the instance 2025/08/19 13:54:19 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:54:29 runner 8 connected 2025/08/19 13:54:32 base crash: WARNING in xfrm_state_fini 2025/08/19 13:54:46 runner 5 connected 2025/08/19 13:54:50 runner 2 connected 2025/08/19 13:55:06 runner 6 connected 2025/08/19 13:55:16 runner 3 connected 2025/08/19 13:55:23 runner 2 connected 2025/08/19 13:56:10 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 13:56:27 new: boot error: can't ssh into the instance 2025/08/19 13:56:28 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:56:36 STAT { "buffer too small": 0, "candidate triage jobs": 27, "candidates": 40553, "comps overflows": 0, "corpus": 39961, "corpus [files]": 0, "corpus [symbols]": 1256, "cover overflows": 27230, "coverage": 302471, "distributor delayed": 52515, "distributor undelayed": 52512, "distributor violated": 900, "exec candidate": 40597, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 71653, "exec total [new]": 205439, "exec triage": 125088, "executor restarts": 784, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 305391, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40596, "no exec duration": 44085000000, "no exec requests": 334, "pending": 36, "prog exec time": 198, "reproducing": 0, "rpc recv": 7946140704, "rpc sent": 1269468248, "signal": 297964, "smash jobs": 0, "triage jobs": 0, "vm output": 30427000, "vm restarts [base]": 38, "vm restarts [new]": 105 } 2025/08/19 13:56:36 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 13:57:02 base crash: WARNING in xfrm_state_fini 2025/08/19 13:57:07 runner 5 connected 2025/08/19 13:57:18 runner 2 connected 2025/08/19 13:57:23 runner 7 connected 2025/08/19 13:57:27 runner 1 connected 2025/08/19 13:57:28 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 13:57:51 base: boot error: can't ssh into the instance 2025/08/19 13:57:51 runner 0 connected 2025/08/19 13:58:21 base crash: WARNING in xfrm_state_fini 2025/08/19 13:58:25 base crash: possible deadlock in 
ocfs2_reserve_suballoc_bits 2025/08/19 13:58:25 runner 2 connected 2025/08/19 13:58:29 new: boot error: can't ssh into the instance 2025/08/19 13:58:41 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 13:58:48 runner 3 connected 2025/08/19 13:59:15 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 13:59:16 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/19 13:59:16 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/19 13:59:19 runner 1 connected 2025/08/19 13:59:23 runner 0 connected 2025/08/19 13:59:25 runner 4 connected 2025/08/19 13:59:31 runner 5 connected 2025/08/19 14:00:12 runner 8 connected 2025/08/19 14:00:13 runner 7 connected 2025/08/19 14:00:36 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 14:00:58 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/19 14:00:58 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/19 14:01:33 runner 2 connected 2025/08/19 14:01:36 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 38608, "comps overflows": 0, "corpus": 41842, "corpus [files]": 0, "corpus [symbols]": 1355, "cover overflows": 30619, "coverage": 306475, "distributor delayed": 54829, "distributor undelayed": 54829, "distributor violated": 905, "exec candidate": 42542, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 78023, "exec total [new]": 226490, "exec triage": 131302, "executor restarts": 839, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 309566, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42541, "no exec duration": 44464000000, "no exec requests": 337, "pending": 38, "prog exec time": 238, "reproducing": 0, "rpc recv": 8524592136, "rpc sent": 1425656192, "signal": 301811, "smash jobs": 0, "triage jobs": 0, "vm output": 33220890, "vm restarts [base]": 44, "vm restarts [new]": 113 } 2025/08/19 14:01:43 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:01:55 runner 5 connected 2025/08/19 14:02:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:02:07 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 14:02:12 new: boot error: can't ssh into the instance 2025/08/19 14:02:35 base crash: possible deadlock in ocfs2_truncate_file 2025/08/19 14:02:40 runner 2 connected 2025/08/19 14:02:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:02:58 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:03:01 runner 4 connected 2025/08/19 14:03:10 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 14:03:10 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 14:03:11 runner 9 connected 2025/08/19 14:03:25 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:03:39 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:03:41 
patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:03:49 runner 1 connected 2025/08/19 14:03:55 runner 5 connected 2025/08/19 14:03:59 runner 7 connected 2025/08/19 14:04:28 runner 4 connected 2025/08/19 14:04:38 runner 9 connected 2025/08/19 14:04:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:04:48 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 14:04:56 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:05:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:05:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 14:05:36 runner 0 connected 2025/08/19 14:05:45 runner 0 connected 2025/08/19 14:05:46 runner 2 connected 2025/08/19 14:05:51 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 14:05:54 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:06:06 runner 1 connected 2025/08/19 14:06:26 runner 5 connected 2025/08/19 14:06:36 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 37840, "comps overflows": 0, "corpus": 42542, "corpus [files]": 0, "corpus [symbols]": 1395, "cover overflows": 34589, "coverage": 308069, "distributor delayed": 55925, "distributor undelayed": 55925, "distributor violated": 913, "exec candidate": 43310, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 83342, "exec total [new]": 246415, "exec triage": 133921, "executor restarts": 889, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 311285, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43309, "no exec duration": 44514000000, "no exec requests": 338, "pending": 39, "prog exec time": 253, "reproducing": 0, "rpc recv": 9076496000, "rpc sent": 1559285008, "signal": 303386, "smash jobs": 0, "triage jobs": 0, "vm output": 35120198, "vm restarts [base]": 45, "vm restarts [new]": 126 } 2025/08/19 14:06:41 base crash: WARNING in xfrm_state_fini 2025/08/19 14:06:44 runner 4 connected 2025/08/19 14:06:48 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 14:06:48 runner 3 connected 2025/08/19 14:06:59 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 14:07:25 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/19 14:07:28 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/19 14:07:29 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/19 14:07:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:07:38 runner 0 connected 2025/08/19 14:07:44 runner 6 connected 2025/08/19 14:07:49 runner 3 connected 2025/08/19 14:08:03 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/19 14:08:15 runner 7 connected 2025/08/19 14:08:19 runner 1 connected 2025/08/19 14:08:20 runner 0 connected 2025/08/19 14:08:28 runner 9 connected 2025/08/19 14:08:30 patched crashed: possible deadlock in ocfs2_init_acl [need repro 
= false] 2025/08/19 14:08:41 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:08:44 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:08:59 runner 4 connected 2025/08/19 14:09:20 runner 5 connected 2025/08/19 14:09:23 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 14:09:31 runner 3 connected 2025/08/19 14:09:41 runner 2 connected 2025/08/19 14:09:55 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:10:20 runner 0 connected 2025/08/19 14:10:27 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 14:10:28 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:10:52 runner 3 connected 2025/08/19 14:11:19 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:11:21 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 14:11:24 runner 3 connected 2025/08/19 14:11:25 runner 6 connected 2025/08/19 14:11:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:11:36 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 36596, "comps overflows": 0, "corpus": 43744, "corpus [files]": 0, "corpus [symbols]": 1430, "cover overflows": 36876, "coverage": 310694, "distributor delayed": 57590, "distributor undelayed": 57589, "distributor violated": 913, "exec candidate": 44554, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 85997, "exec total [new]": 261590, "exec triage": 137760, "executor restarts": 961, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 314066, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44550, "no exec duration": 44514000000, "no exec requests": 338, "pending": 39, "prog exec time": 286, "reproducing": 0, "rpc recv": 9741210740, "rpc sent": 1683160656, "signal": 306208, "smash jobs": 0, "triage jobs": 0, "vm output": 38145475, "vm restarts [base]": 49, "vm restarts [new]": 139 } 2025/08/19 14:12:06 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 14:12:13 base: boot error: can't ssh into the instance 2025/08/19 14:12:16 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true] 2025/08/19 14:12:16 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode' 2025/08/19 14:12:16 runner 9 connected 2025/08/19 14:12:20 runner 0 connected 2025/08/19 14:12:25 runner 7 connected 2025/08/19 14:12:30 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true] 2025/08/19 14:12:30 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode' 2025/08/19 14:12:37 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:12:41 base: boot error: can't ssh into the instance 2025/08/19 14:12:56 runner 0 connected 2025/08/19 14:13:12 runner 1 connected 2025/08/19 14:13:14 runner 5 connected 2025/08/19 14:13:28 runner 4 connected 2025/08/19 14:13:31 new: boot error: can't ssh into the instance 2025/08/19 14:13:33 runner 3 connected 2025/08/19 14:13:38 runner 2 connected 2025/08/19 14:13:56 base crash: WARNING in 
xfrm_state_fini 2025/08/19 14:14:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:14:13 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 14:14:17 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:14:27 runner 8 connected 2025/08/19 14:14:28 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:14:45 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:14:53 runner 3 connected 2025/08/19 14:14:59 runner 0 connected 2025/08/19 14:15:09 runner 2 connected 2025/08/19 14:15:15 runner 6 connected 2025/08/19 14:15:25 runner 1 connected 2025/08/19 14:15:27 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:15:27 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 14:15:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:15:41 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:15:43 runner 5 connected 2025/08/19 14:15:57 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:16:00 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 14:16:24 runner 3 connected 2025/08/19 14:16:25 runner 9 connected 2025/08/19 14:16:25 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 14:16:33 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 14:16:33 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 14:16:36 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 35723, "comps overflows": 0, "corpus": 44567, "corpus [files]": 0, "corpus [symbols]": 1464, "cover overflows": 38941, "coverage": 312244, "distributor delayed": 58811, "distributor undelayed": 58811, "distributor violated": 913, "exec candidate": 45427, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 91452, "exec total [new]": 274847, "exec triage": 140390, "executor restarts": 1035, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 315686, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45397, "no exec duration": 44794000000, "no exec requests": 341, "pending": 42, "prog exec time": 364, "reproducing": 0, "rpc recv": 10391451668, "rpc sent": 1796339888, "signal": 307753, "smash jobs": 0, "triage jobs": 0, "vm output": 40580109, "vm restarts [base]": 55, "vm restarts [new]": 151 } 2025/08/19 14:16:37 runner 7 connected 2025/08/19 14:16:44 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 14:16:44 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 14:16:55 runner 4 connected 2025/08/19 14:16:57 runner 2 connected 2025/08/19 14:17:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:17:16 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:17:23 runner 0 
connected 2025/08/19 14:17:30 runner 5 connected 2025/08/19 14:17:37 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 14:17:42 runner 1 connected 2025/08/19 14:17:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 14:17:57 runner 9 connected 2025/08/19 14:18:11 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:18:13 runner 0 connected 2025/08/19 14:18:14 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:18:17 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 14:18:34 runner 1 connected 2025/08/19 14:18:43 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:18:49 runner 3 connected 2025/08/19 14:19:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:19:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:19:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:19:07 runner 3 connected 2025/08/19 14:19:11 runner 4 connected 2025/08/19 14:19:14 runner 0 connected 2025/08/19 14:19:14 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:19:37 base crash: possible deadlock in ocfs2_setattr 2025/08/19 14:19:41 runner 7 connected 2025/08/19 14:19:49 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 14:19:51 runner 2 connected 2025/08/19 14:19:52 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 14:19:59 runner 9 connected 2025/08/19 14:20:04 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:20:04 runner 6 connected 2025/08/19 14:20:14 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:20:14 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:20:25 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 14:20:34 runner 0 connected 2025/08/19 14:20:38 runner 2 connected 2025/08/19 14:20:48 runner 1 connected 2025/08/19 14:20:53 runner 3 connected 2025/08/19 14:21:09 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:21:11 runner 4 connected 2025/08/19 14:21:11 runner 1 connected 2025/08/19 14:21:11 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 14:21:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:21:22 runner 0 connected 2025/08/19 14:21:36 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 35149, "comps overflows": 0, "corpus": 45073, "corpus [files]": 0, "corpus [symbols]": 1479, "cover overflows": 41037, "coverage": 313176, "distributor delayed": 59728, "distributor undelayed": 59728, "distributor violated": 915, "exec candidate": 46001, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 97532, "exec total [new]": 287074, "exec triage": 142025, "executor restarts": 1111, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 316743, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, 
"modules [new]": 1, "new inputs": 45929, "no exec duration": 44794000000, "no exec requests": 341, "pending": 43, "prog exec time": 308, "reproducing": 0, "rpc recv": 11202827768, "rpc sent": 1913878432, "signal": 308674, "smash jobs": 0, "triage jobs": 0, "vm output": 42794916, "vm restarts [base]": 63, "vm restarts [new]": 167 } 2025/08/19 14:22:09 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 14:22:11 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:22:18 runner 6 connected 2025/08/19 14:22:22 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:22:44 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:22:57 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 14:23:05 runner 4 connected 2025/08/19 14:23:07 runner 0 connected 2025/08/19 14:23:12 runner 1 connected 2025/08/19 14:23:13 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:23:34 runner 2 connected 2025/08/19 14:23:46 runner 6 connected 2025/08/19 14:24:10 runner 3 connected 2025/08/19 14:24:16 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 14:24:46 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/08/19 14:24:46 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/08/19 14:24:58 base crash: WARNING in xfrm_state_fini 2025/08/19 14:25:02 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 14:25:04 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 14:25:13 runner 9 connected 2025/08/19 14:25:31 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/19 14:25:43 runner 2 connected 2025/08/19 14:25:43 new: boot error: can't ssh into the instance 2025/08/19 14:25:54 runner 4 connected 2025/08/19 14:25:54 runner 1 connected 2025/08/19 14:25:59 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:26:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 14:26:20 runner 3 connected 2025/08/19 14:26:32 runner 8 connected 2025/08/19 14:26:36 timed out waiting for coprus triage 2025/08/19 14:26:36 starting bug reproductions 2025/08/19 14:26:36 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible 
deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "unregister_netdevice: waiting for DEV to become free" aborted: it's no longer needed 2025/08/19 14:26:36 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/19 14:26:36 start reproducing 'INFO: task hung in __iterate_supers' 2025/08/19 14:26:36 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/19 14:26:36 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 14:26:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 104619, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 28, "prog exec time": 280, "reproducing": 1, "rpc recv": 11615771868, "rpc sent": 2015480376, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 44488532, "vm restarts [base]": 64, "vm restarts [new]": 179 } 2025/08/19 14:26:36 start reproducing 'WARNING in xfrm6_tunnel_net_exit' 2025/08/19 14:26:36 start reproducing 'kernel BUG in jfs_evict_inode' 2025/08/19 14:26:36 start reproducing 'WARNING in __ww_mutex_wound' 2025/08/19 14:26:36 start reproducing 'INFO: task hung in tun_chr_close' 2025/08/19 14:26:36 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 14:26:36 failed to recv *flatrpc.InfoRequestRawT: EOF 2025/08/19 14:27:14 base crash: possible deadlock in ntfs_fiemap 2025/08/19 14:28:03 runner 0 connected 2025/08/19 14:28:10 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:29:09 new: boot error: can't ssh into the instance 
2025/08/19 14:31:15 new: boot error: can't ssh into the instance 2025/08/19 14:31:17 base: boot error: can't ssh into the instance 2025/08/19 14:31:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 15, "prog exec time": 0, "reproducing": 7, "rpc recv": 11647742572, "rpc sent": 2031644672, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 45397860, "vm restarts [base]": 65, "vm restarts [new]": 179 } 2025/08/19 14:32:07 runner 3 connected 2025/08/19 14:35:03 base crash: no output from test machine 2025/08/19 14:35:04 base crash: no output from test machine 2025/08/19 14:35:08 base: boot error: can't ssh into the instance 2025/08/19 14:35:52 runner 0 connected 2025/08/19 14:35:53 runner 1 connected 2025/08/19 14:35:57 runner 2 connected 2025/08/19 14:36:14 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:36:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 15, "prog exec time": 0, "reproducing": 7, "rpc recv": 11771326820, "rpc sent": 2031645792, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 46014058, "vm restarts [base]": 69, "vm restarts [new]": 179 } 2025/08/19 14:37:06 base crash: no output from test machine 2025/08/19 14:37:55 runner 3 connected 2025/08/19 14:38:04 new: boot error: can't ssh into the instance 2025/08/19 14:39:32 reproducing crash 'WARNING in __ww_mutex_wound': failed to 
symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:40:37 new: boot error: can't ssh into the instance 2025/08/19 14:40:51 base crash: no output from test machine 2025/08/19 14:40:53 base crash: no output from test machine 2025/08/19 14:40:56 base crash: no output from test machine 2025/08/19 14:41:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 15, "prog exec time": 0, "reproducing": 7, "rpc recv": 11802222884, "rpc sent": 2031646072, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 46638926, "vm restarts [base]": 70, "vm restarts [new]": 179 } 2025/08/19 14:41:42 runner 0 connected 2025/08/19 14:41:43 runner 1 connected 2025/08/19 14:41:43 new: boot error: can't ssh into the instance 2025/08/19 14:41:46 runner 2 connected 2025/08/19 14:42:10 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 14:42:10 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/19 14:42:10 start reproducing 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/19 14:42:10 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755614530.crash.log 2025/08/19 14:42:10 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 1755614530.repro.log 2025/08/19 14:42:11 new: boot error: can't ssh into the instance 2025/08/19 14:42:13 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:42:21 new: boot error: can't ssh into the instance 2025/08/19 14:42:43 new: boot error: can't ssh into the instance 2025/08/19 14:42:55 base crash: no output from test machine 2025/08/19 14:43:43 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:43:44 runner 3 connected 2025/08/19 14:44:17 new: boot error: can't ssh into the instance 2025/08/19 14:45:52 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to 
start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:46:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 14, "prog exec time": 0, "reproducing": 7, "rpc recv": 11925807132, "rpc sent": 2031647192, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 47683010, "vm restarts [base]": 74, "vm restarts [new]": 179 } 2025/08/19 14:46:41 base crash: no output from test machine 2025/08/19 14:46:43 base crash: no output from test machine 2025/08/19 14:46:46 base crash: no output from test machine 2025/08/19 14:47:29 runner 0 connected 2025/08/19 14:47:35 runner 2 connected 2025/08/19 14:47:50 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:48:43 base crash: no output from test machine 2025/08/19 14:49:32 runner 3 connected 2025/08/19 14:51:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 14, "prog exec time": 0, "reproducing": 7, "rpc recv": 12018495316, "rpc sent": 2031648032, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 48851915, "vm restarts [base]": 77, "vm restarts [new]": 179 } 2025/08/19 14:52:28 base crash: no output from test machine 2025/08/19 14:52:34 base crash: no output from test machine 2025/08/19 14:53:10 new: boot error: can't ssh into the 
instance 2025/08/19 14:53:18 runner 0 connected 2025/08/19 14:53:23 runner 2 connected 2025/08/19 14:53:34 reproducing crash 'WARNING in __ww_mutex_wound': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 14:53:34 repro finished 'WARNING in __ww_mutex_wound', repro=true crepro=false desc='WARNING in __ww_mutex_wound' hub=false from_dashboard=false 2025/08/19 14:53:34 found repro for "WARNING in __ww_mutex_wound" (orig title: "-SAME-", reliability: 1), took 26.96 minutes 2025/08/19 14:53:34 start reproducing 'KASAN: slab-use-after-free Write in __xfrm_state_delete' 2025/08/19 14:53:34 "WARNING in __ww_mutex_wound": saved crash log into 1755615214.crash.log 2025/08/19 14:53:34 "WARNING in __ww_mutex_wound": saved repro log into 1755615214.repro.log 2025/08/19 14:53:38 new: boot error: can't ssh into the instance 2025/08/19 14:53:49 new: boot error: can't ssh into the instance 2025/08/19 14:54:11 new: boot error: can't ssh into the instance 2025/08/19 14:54:31 base crash: no output from test machine 2025/08/19 14:55:11 attempt #0 to run "WARNING in __ww_mutex_wound" on base: crashed with WARNING in __ww_mutex_wound 2025/08/19 14:55:11 crashes both: WARNING in __ww_mutex_wound / WARNING in __ww_mutex_wound 2025/08/19 14:55:21 runner 3 connected 2025/08/19 14:56:00 runner 0 connected 2025/08/19 14:56:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 13, "prog exec time": 0, "reproducing": 7, "rpc recv": 12142079556, "rpc sent": 2031649152, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 50920698, "vm restarts [base]": 81, "vm restarts [new]": 179 } 2025/08/19 14:56:48 base: boot error: can't ssh into the instance 2025/08/19 14:57:39 runner 1 connected 2025/08/19 14:58:23 base crash: no output from test machine 2025/08/19 14:59:12 runner 2 connected 2025/08/19 15:00:20 base crash: no output from test machine 2025/08/19 15:01:00 base crash: no output from test machine 2025/08/19 15:01:10 runner 3 connected 2025/08/19 15:01:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, 
"exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 13, "prog exec time": 0, "reproducing": 7, "rpc recv": 12234767748, "rpc sent": 2031649992, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 54213259, "vm restarts [base]": 84, "vm restarts [new]": 179 } 2025/08/19 15:01:49 runner 0 connected 2025/08/19 15:02:00 new: boot error: can't ssh into the instance 2025/08/19 15:02:38 base crash: no output from test machine 2025/08/19 15:03:27 runner 1 connected 2025/08/19 15:03:37 new: boot error: can't ssh into the instance 2025/08/19 15:04:12 base crash: no output from test machine 2025/08/19 15:05:03 runner 2 connected 2025/08/19 15:05:40 new: boot error: can't ssh into the instance 2025/08/19 15:06:10 base crash: no output from test machine 2025/08/19 15:06:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 13, "prog exec time": 0, "reproducing": 7, "rpc recv": 12327455932, "rpc sent": 2031650832, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 57003454, "vm restarts [base]": 87, "vm restarts [new]": 179 } 2025/08/19 15:06:48 base crash: no output from test machine 2025/08/19 15:06:51 repro finished 'KASAN: slab-use-after-free Write in __xfrm_state_delete', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 15:06:51 reproduction of "possible deadlock in ntfs_fiemap" aborted: it's no longer needed 2025/08/19 15:06:51 start reproducing 'INFO: task hung in v9fs_evict_inode' 2025/08/19 15:06:51 failed repro for "KASAN: slab-use-after-free Write in __xfrm_state_delete", err=%!s() 2025/08/19 15:06:51 "KASAN: slab-use-after-free Write in __xfrm_state_delete": saved crash log into 1755616011.crash.log 2025/08/19 15:06:51 "KASAN: slab-use-after-free Write in __xfrm_state_delete": saved repro log into 1755616011.repro.log 2025/08/19 15:06:59 runner 3 connected 2025/08/19 15:07:37 runner 0 connected 2025/08/19 15:08:26 base crash: no output from test machine 2025/08/19 15:09:15 runner 1 connected 2025/08/19 15:10:02 base 
crash: no output from test machine 2025/08/19 15:10:50 runner 2 connected 2025/08/19 15:11:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 12451040180, "rpc sent": 2031651952, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 59824664, "vm restarts [base]": 91, "vm restarts [new]": 179 } 2025/08/19 15:11:58 base crash: no output from test machine 2025/08/19 15:12:00 new: boot error: can't ssh into the instance 2025/08/19 15:12:37 base crash: no output from test machine 2025/08/19 15:12:49 runner 3 connected 2025/08/19 15:13:26 runner 0 connected 2025/08/19 15:14:15 base crash: no output from test machine 2025/08/19 15:15:04 runner 1 connected 2025/08/19 15:15:50 base crash: no output from test machine 2025/08/19 15:16:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 12543728364, "rpc sent": 2031652792, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 62792453, "vm restarts [base]": 94, "vm restarts [new]": 179 } 2025/08/19 15:16:39 runner 2 connected 2025/08/19 15:17:49 base crash: no output from test machine 2025/08/19 15:18:20 new: boot error: can't ssh into the instance 2025/08/19 15:18:25 base crash: no output from test machine 2025/08/19 15:18:39 runner 3 connected 2025/08/19 15:19:14 runner 0 connected 2025/08/19 15:19:30 new: boot error: can't ssh into the instance 2025/08/19 15:20:04 base crash: no output from test machine 2025/08/19 15:20:54 runner 1 connected 2025/08/19 15:21:25 new: boot error: can't 
ssh into the instance 2025/08/19 15:21:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 12, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 12667312612, "rpc sent": 2031653912, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 64620523, "vm restarts [base]": 98, "vm restarts [new]": 179 } 2025/08/19 15:21:39 base crash: no output from test machine 2025/08/19 15:22:29 runner 2 connected 2025/08/19 15:23:38 base crash: no output from test machine 2025/08/19 15:24:14 base crash: no output from test machine 2025/08/19 15:24:26 runner 3 connected 2025/08/19 15:24:50 new: boot error: can't ssh into the instance 2025/08/19 15:25:03 runner 0 connected 2025/08/19 15:25:53 base crash: no output from test machine 2025/08/19 15:26:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 12760000796, "rpc sent": 2031654752, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 67465626, "vm restarts [base]": 101, "vm restarts [new]": 179 } 2025/08/19 15:26:44 runner 1 connected 2025/08/19 15:26:44 new: boot error: can't ssh into the instance 2025/08/19 15:27:29 base crash: no output from test machine 2025/08/19 15:29:26 base crash: no output from test machine 2025/08/19 15:30:02 base crash: no output from test machine 2025/08/19 15:30:14 runner 3 connected 2025/08/19 15:30:51 runner 0 connected 2025/08/19 15:31:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover 
overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 14, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 12852688980, "rpc sent": 2031655592, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 71339123, "vm restarts [base]": 104, "vm restarts [new]": 179 } 2025/08/19 15:31:43 base crash: no output from test machine 2025/08/19 15:32:33 runner 1 connected 2025/08/19 15:35:14 base crash: no output from test machine 2025/08/19 15:35:20 repro finished 'WARNING in xfrm6_tunnel_net_exit', repro=true crepro=false desc='lost connection to test machine' hub=false from_dashboard=false 2025/08/19 15:35:20 found repro for "lost connection to test machine" (orig title: "WARNING in xfrm6_tunnel_net_exit", reliability: 1), took 68.73 minutes 2025/08/19 15:35:20 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 15:35:20 "lost connection to test machine": saved crash log into 1755617720.crash.log 2025/08/19 15:35:20 "lost connection to test machine": saved repro log into 1755617720.repro.log 2025/08/19 15:36:03 runner 3 connected 2025/08/19 15:36:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 7, "rpc recv": 12914481108, "rpc sent": 2031656152, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 75080467, "vm restarts [base]": 106, "vm restarts [new]": 179 } 2025/08/19 15:36:39 reproducing crash 'KASAN: slab-use-after-free Read in __xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:37:21 reproducing crash 'KASAN: slab-use-after-free Read in 
__xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:37:33 base crash: no output from test machine 2025/08/19 15:37:34 base: boot error: can't ssh into the instance 2025/08/19 15:38:11 reproducing crash 'KASAN: slab-use-after-free Read in __xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:38:21 runner 1 connected 2025/08/19 15:38:23 runner 2 connected 2025/08/19 15:38:31 attempt #0 to run "lost connection to test machine" on base: crashed with lost connection to test machine 2025/08/19 15:38:31 crashes both: lost connection to test machine / lost connection to test machine 2025/08/19 15:38:59 reproducing crash 'KASAN: slab-use-after-free Read in __xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:39:21 runner 0 connected 2025/08/19 15:39:33 repro finished 'KASAN: slab-use-after-free Read in xfrm_state_find', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 15:39:33 failed repro for "KASAN: slab-use-after-free Read in xfrm_state_find", err=%!s() 2025/08/19 15:39:33 "KASAN: slab-use-after-free Read in xfrm_state_find": saved crash log into 1755617973.crash.log 2025/08/19 15:39:33 "KASAN: slab-use-after-free Read in xfrm_state_find": saved repro log into 1755617973.repro.log 2025/08/19 15:41:03 base crash: no output from test machine 2025/08/19 15:41:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34815, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43087, "coverage": 313671, "distributor delayed": 60268, "distributor undelayed": 60268, "distributor violated": 916, "exec candidate": 46335, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108725, "exec total [new]": 298448, "exec triage": 142945, "executor restarts": 1162, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317336, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 20, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46205, "no exec duration": 44872000000, "no exec requests": 342, "pending": 11, "prog exec time": 0, "reproducing": 6, "rpc recv": 13007169292, "rpc sent": 2031656992, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 77776646, "vm restarts [base]": 109, "vm restarts [new]": 179 } 2025/08/19 15:41:51 runner 3 connected 2025/08/19 15:43:21 base crash: no output from test machine 2025/08/19 15:43:22 base crash: no output from test machine 2025/08/19 15:44:10 runner 1 connected 2025/08/19 15:44:12 runner 2 connected 2025/08/19 15:44:21 base crash: no output from test machine 2025/08/19 15:45:05 runner 0 connected 2025/08/19 15:45:11 runner 0 connected 2025/08/19 15:45:24 runner 1 connected 2025/08/19 
15:46:03 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 15:46:36 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 34806, "comps overflows": 0, "corpus": 45318, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43222, "coverage": 313671, "distributor delayed": 60278, "distributor undelayed": 60271, "distributor violated": 916, "exec candidate": 46344, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 109507, "exec total [new]": 299262, "exec triage": 142953, "executor restarts": 1172, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 317360, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 22, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46210, "no exec duration": 332371000000, "no exec requests": 1040, "pending": 11, "prog exec time": 319, "reproducing": 6, "rpc recv": 13193347756, "rpc sent": 2042077424, "signal": 309174, "smash jobs": 0, "triage jobs": 0, "vm output": 80264871, "vm restarts [base]": 113, "vm restarts [new]": 181 } 2025/08/19 15:46:52 runner 0 connected 2025/08/19 15:46:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 15:46:54 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 15:47:42 runner 2 connected 2025/08/19 15:48:40 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 15:49:30 runner 0 connected 2025/08/19 15:50:02 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:50:04 reproducing crash 'KASAN: slab-use-after-free Read in __xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:51:01 base crash: WARNING in xfrm_state_fini 2025/08/19 15:51:36 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34766, "comps overflows": 0, "corpus": 45351, "corpus [files]": 0, "corpus [symbols]": 1484, "cover overflows": 43547, "coverage": 313765, "distributor delayed": 60286, "distributor undelayed": 60286, "distributor violated": 930, "exec candidate": 46384, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 111293, "exec total [new]": 301059, "exec triage": 143062, "executor restarts": 1183, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 317425, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 24, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46243, "no exec duration": 1057223000000, "no exec requests": 2764, "pending": 11, "prog exec time": 306, "reproducing": 6, "rpc recv": 13292590976, "rpc sent": 2061075168, "signal": 309263, 
"smash jobs": 0, "triage jobs": 0, "vm output": 82369385, "vm restarts [base]": 114, "vm restarts [new]": 183 } 2025/08/19 15:51:39 reproducing crash 'KASAN: slab-use-after-free Read in __xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:51:50 runner 3 connected 2025/08/19 15:51:53 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/19 15:51:53 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/19 15:52:42 runner 0 connected 2025/08/19 15:53:12 reproducing crash 'KASAN: slab-use-after-free Read in __xfrm_state_lookup': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:53:12 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=true crepro=false desc='WARNING in xfrm_state_fini' hub=false from_dashboard=false 2025/08/19 15:53:12 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 15:53:12 found repro for "WARNING in xfrm_state_fini" (orig title: "KASAN: slab-use-after-free Read in __xfrm_state_lookup", reliability: 1), took 17.87 minutes 2025/08/19 15:53:12 "WARNING in xfrm_state_fini": saved crash log into 1755618792.crash.log 2025/08/19 15:53:12 "WARNING in xfrm_state_fini": saved repro log into 1755618792.repro.log 2025/08/19 15:53:19 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:54:55 attempt #0 to run "WARNING in xfrm_state_fini" on base: crashed with WARNING in xfrm_state_fini 2025/08/19 15:54:55 crashes both: WARNING in xfrm_state_fini / WARNING in xfrm_state_fini 2025/08/19 15:55:44 runner 0 connected 2025/08/19 15:56:14 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 15:56:31 status reporting terminated 2025/08/19 15:56:31 bug reporting terminated 2025/08/19 15:56:31 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 15:56:31 repro finished 'kernel BUG in jfs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 15:56:31 repro finished 'INFO: task hung in tun_chr_close', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 15:56:31 repro finished 'INFO: task hung in __iterate_supers', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 15:56:36 syz-diff (base): kernel context loop terminated 2025/08/19 16:01:25 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 16:01:26 repro finished 'INFO: task hung in v9fs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 16:05:17 syz-diff (new): kernel context loop terminated 2025/08/19 16:05:17 diff fuzzing terminated 2025/08/19 16:05:17 fuzzing is finished 
2025/08/19 16:05:17 status at the end:
Title On-Base On-Patched
INFO: task hung in __iterate_supers 2 crashes
INFO: task hung in tun_chr_close 1 crashes
INFO: task hung in v9fs_evict_inode 2 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 3 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi 2 crashes 5 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes
KASAN: slab-use-after-free Write in __xfrm_state_delete 1 crashes
WARNING in __ww_mutex_wound 1 crashes 1 crashes[reproduced]
WARNING in dbAdjTree 1 crashes
WARNING in ext4_xattr_inode_lookup_create 1 crashes 3 crashes
WARNING in xfrm6_tunnel_net_exit 1 crashes
WARNING in xfrm_state_fini 12 crashes 12 crashes[reproduced]
WARNING: suspicious RCU usage in get_callchain_entry 7 crashes
kernel BUG in jfs_evict_inode 4 crashes
kernel BUG in txUnlock 2 crashes 7 crashes
lost connection to test machine 2 crashes 7 crashes[reproduced]
no output from test machine 44 crashes
possible deadlock in ntfs_fiemap 1 crashes 1 crashes
possible deadlock in ocfs2_init_acl 6 crashes 34 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 15 crashes 37 crashes
possible deadlock in ocfs2_setattr 1 crashes
possible deadlock in ocfs2_truncate_file 1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 15 crashes 30 crashes
possible deadlock in ocfs2_xattr_set 7 crashes 17 crashes
unregister_netdevice: waiting for DEV to become free 1 crashes 1 crashes