2025/07/07 05:56:36 adding directly modified files to focus_order: ["fs/ext4/inode.c"]
2025/07/07 05:56:37 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/07/07 05:57:12 broken programs in the corpus: 184, broken seeds: 0
2025/07/07 05:57:28 runner 7 connected
2025/07/07 05:57:34 initializing coverage information...
2025/07/07 05:57:35 runner 6 connected
2025/07/07 05:57:35 runner 1 connected
2025/07/07 05:57:35 runner 8 connected
2025/07/07 05:57:35 runner 0 connected
2025/07/07 05:57:35 runner 9 connected
2025/07/07 05:57:35 runner 5 connected
2025/07/07 05:57:35 runner 4 connected
2025/07/07 05:57:35 runner 0 connected
2025/07/07 05:57:35 runner 2 connected
2025/07/07 05:57:35 runner 3 connected
2025/07/07 05:57:35 runner 1 connected
2025/07/07 05:57:35 runner 3 connected
2025/07/07 05:57:36 runner 2 connected
2025/07/07 05:57:38 discovered 7632 source files, 336887 symbols
2025/07/07 05:57:39 coverage filter: fs/ext4/inode.c: [workdir/fs/ext4/inode.c]
2025/07/07 05:57:39 cover filter size: 0
2025/07/07 05:57:42 cover filter size: 0
2025/07/07 05:57:45 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open 
/sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space 
left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] 
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3810/7986
2025/07/07 05:57:45 new: machine check complete
pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] 
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3810/7986 2025/07/07 05:57:47 base: machine check complete 2025/07/07 05:57:47 new: adding 78593 seeds 2025/07/07 05:58:35 base crash: lost connection to test machine 2025/07/07 05:58:36 base crash: lost connection to test machine 2025/07/07 05:59:23 runner 2 connected 2025/07/07 05:59:25 runner 3 connected 2025/07/07 06:00:33 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:01:23 runner 4 connected 2025/07/07 06:01:39 STAT { "buffer too small": 0, "candidate triage jobs": 58, "candidates": 72547, "corpus": 5904, "corpus [modified]": 195, "coverage": 173329, "distributor delayed": 4734, "distributor undelayed": 4734, "distributor violated": 0, "exec candidate": 6046, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 10000, "exec total [new]": 27565, "exec triage": 19068, "executor restarts": 110, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 176362, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 6046, "no exec duration": 33862000000, "no exec requests": 314, "pending": 0, "prog exec time": 207, "reproducing": 0, "rpc recv": 1068185684, "rpc sent": 127336392, "signal": 169645, "smash jobs": 0, "triage jobs": 0, "vm output": 3014883, "vm restarts [base]": 6, "vm restarts [new]": 11 } 2025/07/07 06:03:41 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:03:48 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:04:01 base crash: lost connection to test machine 2025/07/07 06:04:03 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:04:10 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/07/07 06:04:10 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/07 06:04:11 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/07/07 06:04:11 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/07 06:04:13 patched crashed: kernel BUG in txUnlock [need repro = true] 
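Both machine-check reports above end with a long "transitively disabled" list: each entry names a syscall, the resource it consumes, and, in brackets, the syscalls that could have created that resource. When every creator is itself unavailable (for example, write$trusty_avb needs fd_trusty_avb, which only the already-disabled openat$trusty_avb produces), the dependent call is dropped too, and the effect cascades down chains such as openat$kvm -> ioctl$KVM_CREATE_VM -> ioctl$KVM_CREATE_VCPU -> ioctl$KVM_RUN. The Go sketch below is a minimal illustration of that propagation rule, not syzkaller's actual implementation; the maps are hand-filled from a few entries visible in this log, and other creators of fd_kvmcpu are omitted for brevity.

```go
package main

import "fmt"

func main() {
	// Resource -> syscalls that can create it (mirrors the bracketed lists in
	// the "transitively disabled" report above; other creators omitted).
	creators := map[string][]string{
		"fd_kvm":    {"openat$kvm"},
		"fd_kvmvm":  {"ioctl$KVM_CREATE_VM"},
		"fd_kvmcpu": {"ioctl$KVM_CREATE_VCPU"},
	}
	// Syscall -> resource it consumes.
	needs := map[string]string{
		"ioctl$KVM_CREATE_VM":   "fd_kvm",
		"ioctl$KVM_CREATE_VCPU": "fd_kvmvm",
		"ioctl$KVM_RUN":         "fd_kvmcpu",
	}
	// Syscalls the machine check disabled directly (e.g. /dev/kvm is missing).
	disabled := map[string]bool{"openat$kvm": true}

	// A syscall becomes transitively disabled once every creator of the
	// resource it needs is disabled; iterate to a fixed point so the effect
	// propagates down the whole chain (VM -> VCPU -> RUN).
	for changed := true; changed; {
		changed = false
		for call, res := range needs {
			if disabled[call] {
				continue
			}
			alive := false
			for _, c := range creators[res] {
				if !disabled[c] {
					alive = true
					break
				}
			}
			if !alive {
				disabled[call] = true
				changed = true
				fmt.Printf("transitively disabled %-22s: %s %v\n", call, res, creators[res])
			}
		}
	}
}
```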
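Roughly every five minutes the log also emits a STAT entry (the first is the 06:01:39 line above) whose payload is a single JSON object; counters such as "corpus", "coverage", "exec total [base]" and "exec total [new]" show how the base and patched kernels are progressing. If this log is saved to a file, a throwaway parser along the lines of the sketch below can pull those counters out for plotting; the file name and the handful of fields printed are assumptions made for the example.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Hypothetical path to a saved copy of the log shown above.
	f, err := os.Open("syz-diff.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // STAT lines are long
	for sc.Scan() {
		line := sc.Text()
		// Entries look like: "2025/07/07 06:01:39 STAT { ... }"
		i := strings.Index(line, " STAT ")
		if i < 0 {
			continue
		}
		ts := line[:i] // "2025/07/07 06:01:39"
		var stat map[string]float64
		if err := json.Unmarshal([]byte(line[i+len(" STAT "):]), &stat); err != nil {
			continue // tolerate truncated or otherwise unparsable entries
		}
		fmt.Printf("%s corpus=%v coverage=%v exec[new]=%v exec[base]=%v\n",
			ts, stat["corpus"], stat["coverage"],
			stat["exec total [new]"], stat["exec total [base]"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```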
2025/07/07 06:04:13 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/07 06:04:21 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:04:26 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/07/07 06:04:26 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/07/07 06:04:30 runner 5 connected 2025/07/07 06:04:37 runner 8 connected 2025/07/07 06:04:51 runner 3 connected 2025/07/07 06:04:51 runner 1 connected 2025/07/07 06:04:59 runner 1 connected 2025/07/07 06:05:00 runner 2 connected 2025/07/07 06:05:02 runner 0 connected 2025/07/07 06:05:07 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:05:10 runner 2 connected 2025/07/07 06:05:15 runner 3 connected 2025/07/07 06:05:52 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:05:52 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:05:57 runner 0 connected 2025/07/07 06:06:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:06:03 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:06:39 STAT { "buffer too small": 0, "candidate triage jobs": 54, "candidates": 66620, "corpus": 11726, "corpus [modified]": 338, "coverage": 212291, "distributor delayed": 10731, "distributor undelayed": 10731, "distributor violated": 41, "exec candidate": 11973, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 44, "exec seeds": 0, "exec smash": 0, "exec total [base]": 18094, "exec total [new]": 55715, "exec triage": 37719, "executor restarts": 170, "fault jobs": 0, "fuzzer jobs": 54, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 215600, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 11973, "no exec duration": 34032000000, "no exec requests": 318, "pending": 6, "prog exec time": 369, "reproducing": 0, "rpc recv": 1838358788, "rpc sent": 249343176, "signal": 206819, "smash jobs": 0, "triage jobs": 0, "vm output": 5821841, "vm restarts [base]": 10, "vm restarts [new]": 17 } 2025/07/07 06:06:43 runner 2 connected 2025/07/07 06:06:53 runner 0 connected 2025/07/07 06:07:15 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 06:08:04 runner 3 connected 2025/07/07 06:08:58 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:09:06 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/07/07 06:09:06 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/07/07 06:09:16 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/07 06:09:33 patched crashed: INFO: task hung in sync_bdevs [need repro = true] 2025/07/07 06:09:33 scheduled a reproduction of 'INFO: task hung in sync_bdevs' 2025/07/07 06:09:46 runner 4 connected 2025/07/07 06:09:56 runner 5 connected 2025/07/07 06:10:07 runner 3 connected 2025/07/07 06:10:22 runner 3 connected 2025/07/07 06:11:09 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:11:10 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, 
SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:11:24 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:11:39 STAT { "buffer too small": 0, "candidate triage jobs": 50, "candidates": 60610, "corpus": 17676, "corpus [modified]": 487, "coverage": 238035, "distributor delayed": 16054, "distributor undelayed": 16054, "distributor violated": 41, "exec candidate": 17983, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 86, "exec seeds": 0, "exec smash": 0, "exec total [base]": 28342, "exec total [new]": 85325, "exec triage": 56370, "executor restarts": 219, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 241546, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 17983, "no exec duration": 34455000000, "no exec requests": 325, "pending": 8, "prog exec time": 240, "reproducing": 0, "rpc recv": 2512392376, "rpc sent": 395514392, "signal": 232399, "smash jobs": 0, "triage jobs": 0, "vm output": 8788056, "vm restarts [base]": 12, "vm restarts [new]": 22 } 2025/07/07 06:11:58 runner 2 connected 2025/07/07 06:12:01 patched crashed: possible deadlock in attr_data_get_block [need repro = true] 2025/07/07 06:12:01 scheduled a reproduction of 'possible deadlock in attr_data_get_block' 2025/07/07 06:12:07 runner 1 connected 2025/07/07 06:12:11 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:12:14 runner 0 connected 2025/07/07 06:12:31 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:12:49 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:12:50 runner 8 connected 2025/07/07 06:13:08 runner 3 connected 2025/07/07 06:13:20 runner 7 connected 2025/07/07 06:13:37 runner 2 connected 2025/07/07 06:15:03 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:15:49 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:15:51 runner 3 connected 2025/07/07 06:16:05 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:16:06 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:16:13 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:16:37 runner 8 connected 2025/07/07 06:16:39 STAT { "buffer too small": 0, "candidate triage jobs": 28, "candidates": 55006, "corpus": 23124, "corpus [modified]": 560, "coverage": 255069, "distributor delayed": 22388, "distributor undelayed": 22388, "distributor violated": 41, "exec candidate": 23587, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 215, "exec seeds": 0, "exec smash": 0, "exec total [base]": 41052, "exec total [new]": 117125, "exec triage": 74215, "executor restarts": 261, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 258839, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 
0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 23587, "no exec duration": 37222000000, "no exec requests": 346, "pending": 9, "prog exec time": 222, "reproducing": 0, "rpc recv": 3156261232, "rpc sent": 546896528, "signal": 248806, "smash jobs": 0, "triage jobs": 0, "vm output": 11832912, "vm restarts [base]": 14, "vm restarts [new]": 29 } 2025/07/07 06:16:50 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:16:53 runner 2 connected 2025/07/07 06:16:55 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:16:55 runner 4 connected 2025/07/07 06:17:02 runner 0 connected 2025/07/07 06:17:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:17:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:17:33 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:17:40 runner 7 connected 2025/07/07 06:17:45 runner 2 connected 2025/07/07 06:17:47 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:18:03 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/07 06:18:10 runner 3 connected 2025/07/07 06:18:21 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 06:18:21 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 06:18:21 runner 4 connected 2025/07/07 06:18:22 runner 8 connected 2025/07/07 06:18:37 runner 2 connected 2025/07/07 06:18:50 runner 3 connected 2025/07/07 06:18:54 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:19:11 runner 6 connected 2025/07/07 06:19:15 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:19:38 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:19:42 runner 0 connected 2025/07/07 06:20:03 runner 2 connected 2025/07/07 06:20:25 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:20:26 runner 1 connected 2025/07/07 06:20:27 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/07 06:20:44 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 06:20:44 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 06:21:12 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/07/07 06:21:12 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/07/07 06:21:13 runner 0 connected 2025/07/07 06:21:16 runner 1 connected 2025/07/07 06:21:34 runner 9 connected 2025/07/07 06:21:39 STAT { "buffer too small": 0, "candidate triage jobs": 44, "candidates": 50559, "corpus": 27478, "corpus [modified]": 634, "coverage": 266494, "distributor delayed": 28103, "distributor undelayed": 28103, "distributor violated": 47, "exec candidate": 28034, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 240, "exec seeds": 0, "exec smash": 0, "exec total [base]": 52390, "exec total [new]": 142392, "exec triage": 88033, "executor restarts": 331, "fault jobs": 0, "fuzzer jobs": 44, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 270454, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, 
"minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 28034, "no exec duration": 37750000000, "no exec requests": 350, "pending": 12, "prog exec time": 323, "reproducing": 0, "rpc recv": 4003385052, "rpc sent": 686744352, "signal": 260013, "smash jobs": 0, "triage jobs": 0, "vm output": 14623942, "vm restarts [base]": 19, "vm restarts [new]": 41 } 2025/07/07 06:22:01 runner 5 connected 2025/07/07 06:24:47 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:24:58 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:25:32 patched crashed: possible deadlock in ext4_readpage_inline [need repro = true] 2025/07/07 06:25:32 scheduled a reproduction of 'possible deadlock in ext4_readpage_inline' 2025/07/07 06:25:37 runner 8 connected 2025/07/07 06:25:48 runner 4 connected 2025/07/07 06:26:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:26:01 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:26:12 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:26:12 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:26:22 runner 9 connected 2025/07/07 06:26:39 STAT { "buffer too small": 0, "candidate triage jobs": 44, "candidates": 46086, "corpus": 31885, "corpus [modified]": 733, "coverage": 279600, "distributor delayed": 32066, "distributor undelayed": 32066, "distributor violated": 47, "exec candidate": 32507, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 281, "exec seeds": 0, "exec smash": 0, "exec total [base]": 62937, "exec total [new]": 165156, "exec triage": 101598, "executor restarts": 408, "fault jobs": 0, "fuzzer jobs": 44, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 283300, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32507, "no exec duration": 37750000000, "no exec requests": 350, "pending": 15, "prog exec time": 238, "reproducing": 0, "rpc recv": 4566092548, "rpc sent": 845221280, "signal": 273605, "smash jobs": 0, "triage jobs": 0, "vm output": 18985938, "vm restarts [base]": 19, "vm restarts [new]": 45 } 2025/07/07 06:26:50 runner 3 connected 2025/07/07 06:27:03 runner 6 connected 2025/07/07 06:27:42 patched crashed: possible deadlock in team_del_slave [need repro = true] 2025/07/07 06:27:42 scheduled a reproduction of 'possible deadlock in team_del_slave' 2025/07/07 06:27:59 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:28:02 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:28:04 patched crashed: KASAN: slab-use-after-free Read in l2cap_unregister_user [need repro = true] 2025/07/07 06:28:04 scheduled a reproduction of 'KASAN: slab-use-after-free Read in l2cap_unregister_user' 2025/07/07 06:28:10 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:28:22 base crash: SYZFAIL: ebtable checkpoint: 
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:28:31 runner 2 connected 2025/07/07 06:28:47 runner 8 connected 2025/07/07 06:28:50 runner 5 connected 2025/07/07 06:28:53 runner 7 connected 2025/07/07 06:28:59 runner 1 connected 2025/07/07 06:29:15 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:29:16 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:29:19 runner 0 connected 2025/07/07 06:29:28 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:29:49 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:30:05 runner 3 connected 2025/07/07 06:30:05 runner 9 connected 2025/07/07 06:30:16 runner 4 connected 2025/07/07 06:30:38 runner 0 connected 2025/07/07 06:31:39 STAT { "buffer too small": 0, "candidate triage jobs": 43, "candidates": 42325, "corpus": 35593, "corpus [modified]": 822, "coverage": 288270, "distributor delayed": 36210, "distributor undelayed": 36210, "distributor violated": 50, "exec candidate": 36268, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 438, "exec seeds": 0, "exec smash": 0, "exec total [base]": 71288, "exec total [new]": 186086, "exec triage": 113033, "executor restarts": 484, "fault jobs": 0, "fuzzer jobs": 43, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 292216, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36268, "no exec duration": 38040000000, "no exec requests": 354, "pending": 17, "prog exec time": 274, "reproducing": 0, "rpc recv": 5250145548, "rpc sent": 975353336, "signal": 282516, "smash jobs": 0, "triage jobs": 0, "vm output": 23380655, "vm restarts [base]": 21, "vm restarts [new]": 55 } 2025/07/07 06:32:09 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:32:41 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:32:51 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:32:56 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:32:59 runner 2 connected 2025/07/07 06:33:02 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:33:29 runner 6 connected 2025/07/07 06:33:40 runner 5 connected 2025/07/07 06:33:43 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:33:45 runner 8 connected 2025/07/07 06:33:51 runner 7 connected 2025/07/07 06:34:39 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true] 2025/07/07 06:34:39 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode' 2025/07/07 06:34:39 runner 3 connected 2025/07/07 06:35:05 base crash: possible deadlock in ocfs2_page_mkwrite 2025/07/07 06:35:19 base crash: INFO: task hung in v9fs_evict_inode 2025/07/07 06:35:20 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = 
false] 2025/07/07 06:35:36 runner 1 connected 2025/07/07 06:35:49 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:35:54 runner 2 connected 2025/07/07 06:36:09 runner 3 connected 2025/07/07 06:36:09 runner 0 connected 2025/07/07 06:36:11 patched crashed: possible deadlock in __del_gendisk [need repro = true] 2025/07/07 06:36:11 scheduled a reproduction of 'possible deadlock in __del_gendisk' 2025/07/07 06:36:22 patched crashed: possible deadlock in __del_gendisk [need repro = true] 2025/07/07 06:36:22 scheduled a reproduction of 'possible deadlock in __del_gendisk' 2025/07/07 06:36:22 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:36:32 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:36:39 STAT { "buffer too small": 0, "candidate triage jobs": 27, "candidates": 39036, "corpus": 38850, "corpus [modified]": 910, "coverage": 295466, "distributor delayed": 39566, "distributor undelayed": 39563, "distributor violated": 50, "exec candidate": 39557, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 548, "exec seeds": 0, "exec smash": 0, "exec total [base]": 79651, "exec total [new]": 206021, "exec triage": 123144, "executor restarts": 532, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 299391, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 39557, "no exec duration": 39752000000, "no exec requests": 360, "pending": 20, "prog exec time": 331, "reproducing": 0, "rpc recv": 5847932800, "rpc sent": 1119859008, "signal": 289925, "smash jobs": 0, "triage jobs": 0, "vm output": 27453771, "vm restarts [base]": 24, "vm restarts [new]": 62 } 2025/07/07 06:36:39 runner 9 connected 2025/07/07 06:36:41 patched crashed: possible deadlock in ocfs2_fiemap [need repro = true] 2025/07/07 06:36:41 scheduled a reproduction of 'possible deadlock in ocfs2_fiemap' 2025/07/07 06:37:01 runner 2 connected 2025/07/07 06:37:10 runner 4 connected 2025/07/07 06:37:11 runner 7 connected 2025/07/07 06:37:22 runner 1 connected 2025/07/07 06:37:22 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:37:31 runner 5 connected 2025/07/07 06:37:40 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:37:40 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:38:11 runner 0 connected 2025/07/07 06:38:29 runner 3 connected 2025/07/07 06:39:09 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/07/07 06:39:09 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/07/07 06:39:21 base crash: WARNING in dbAdjTree 2025/07/07 06:39:57 runner 0 connected 2025/07/07 06:40:06 patched crashed: possible deadlock in page_cache_ra_unbounded [need repro = true] 2025/07/07 06:40:06 scheduled a reproduction of 'possible deadlock in page_cache_ra_unbounded' 2025/07/07 06:40:07 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:40:09 runner 2 connected 2025/07/07 06:40:11 patched crashed: kernel BUG in dnotify_free_mark [need repro = true] 2025/07/07 
06:40:11 scheduled a reproduction of 'kernel BUG in dnotify_free_mark' 2025/07/07 06:40:13 patched crashed: kernel BUG in dnotify_free_mark [need repro = true] 2025/07/07 06:40:13 scheduled a reproduction of 'kernel BUG in dnotify_free_mark' 2025/07/07 06:40:14 patched crashed: kernel BUG in dnotify_free_mark [need repro = true] 2025/07/07 06:40:14 scheduled a reproduction of 'kernel BUG in dnotify_free_mark' 2025/07/07 06:40:24 base crash: kernel BUG in dnotify_free_mark 2025/07/07 06:40:55 runner 8 connected 2025/07/07 06:40:56 runner 9 connected 2025/07/07 06:40:59 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:41:02 runner 4 connected 2025/07/07 06:41:02 runner 7 connected 2025/07/07 06:41:02 runner 2 connected 2025/07/07 06:41:10 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:41:12 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:41:13 runner 1 connected 2025/07/07 06:41:28 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:41:39 STAT { "buffer too small": 0, "candidate triage jobs": 24, "candidates": 36821, "corpus": 41003, "corpus [modified]": 998, "coverage": 301022, "distributor delayed": 41930, "distributor undelayed": 41928, "distributor violated": 60, "exec candidate": 41772, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 557, "exec seeds": 0, "exec smash": 0, "exec total [base]": 89452, "exec total [new]": 227622, "exec triage": 130125, "executor restarts": 611, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 304741, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41772, "no exec duration": 41293000000, "no exec requests": 366, "pending": 27, "prog exec time": 369, "reproducing": 0, "rpc recv": 6554235948, "rpc sent": 1311578552, "signal": 295294, "smash jobs": 0, "triage jobs": 0, "vm output": 30762767, "vm restarts [base]": 27, "vm restarts [new]": 75 } 2025/07/07 06:41:48 runner 3 connected 2025/07/07 06:41:54 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:42:01 runner 0 connected 2025/07/07 06:42:02 runner 5 connected 2025/07/07 06:42:18 runner 2 connected 2025/07/07 06:42:43 runner 4 connected 2025/07/07 06:43:06 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = true] 2025/07/07 06:43:06 scheduled a reproduction of 'possible deadlock in blk_mq_update_nr_hw_queues' 2025/07/07 06:43:12 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 06:43:12 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 06:43:24 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:43:24 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:43:55 runner 0 connected 2025/07/07 06:44:00 runner 3 connected 2025/07/07 06:44:06 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:44:13 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:44:13 
runner 1 connected 2025/07/07 06:44:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:44:46 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 06:45:02 runner 5 connected 2025/07/07 06:45:03 runner 7 connected 2025/07/07 06:45:16 runner 9 connected 2025/07/07 06:45:29 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:45:35 runner 0 connected 2025/07/07 06:45:41 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 06:45:41 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 06:46:17 runner 2 connected 2025/07/07 06:46:30 runner 8 connected 2025/07/07 06:46:39 STAT { "buffer too small": 0, "candidate triage jobs": 17, "candidates": 35338, "corpus": 42430, "corpus [modified]": 1038, "coverage": 304207, "distributor delayed": 43443, "distributor undelayed": 43443, "distributor violated": 61, "exec candidate": 43255, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 611, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100087, "exec total [new]": 249318, "exec triage": 134753, "executor restarts": 685, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 307906, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43255, "no exec duration": 41442000000, "no exec requests": 369, "pending": 31, "prog exec time": 249, "reproducing": 0, "rpc recv": 7142616748, "rpc sent": 1474178528, "signal": 298475, "smash jobs": 0, "triage jobs": 0, "vm output": 34094409, "vm restarts [base]": 29, "vm restarts [new]": 87 } 2025/07/07 06:46:53 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:46:56 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/07 06:47:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:47:42 runner 3 connected 2025/07/07 06:47:44 runner 3 connected 2025/07/07 06:47:51 runner 9 connected 2025/07/07 06:47:58 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:48:36 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:48:47 runner 4 connected 2025/07/07 06:48:49 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:48:56 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 06:49:25 runner 9 connected 2025/07/07 06:49:28 patched crashed: possible deadlock in team_del_slave [need repro = true] 2025/07/07 06:49:28 scheduled a reproduction of 'possible deadlock in team_del_slave' 2025/07/07 06:49:37 runner 7 connected 2025/07/07 06:49:45 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:49:52 runner 1 connected 2025/07/07 06:50:05 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:50:25 runner 5 connected 2025/07/07 06:50:35 runner 0 connected 2025/07/07 06:50:54 runner 8 connected 2025/07/07 06:51:27 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/07/07 06:51:27 scheduled a reproduction of 'possible deadlock in 
ocfs2_xattr_set' 2025/07/07 06:51:39 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 34051, "corpus": 43622, "corpus [modified]": 1068, "coverage": 306665, "distributor delayed": 44660, "distributor undelayed": 44660, "distributor violated": 61, "exec candidate": 44542, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 640, "exec seeds": 0, "exec smash": 0, "exec total [base]": 109053, "exec total [new]": 272145, "exec triage": 138748, "executor restarts": 756, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 310454, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44514, "no exec duration": 41477000000, "no exec requests": 370, "pending": 33, "prog exec time": 321, "reproducing": 0, "rpc recv": 7582357228, "rpc sent": 1629113064, "signal": 300920, "smash jobs": 0, "triage jobs": 0, "vm output": 37356858, "vm restarts [base]": 31, "vm restarts [new]": 95 } 2025/07/07 06:51:51 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/07 06:51:54 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 06:52:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:52:16 runner 3 connected 2025/07/07 06:52:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:52:39 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:52:40 runner 0 connected 2025/07/07 06:52:44 runner 7 connected 2025/07/07 06:53:04 runner 8 connected 2025/07/07 06:53:22 runner 4 connected 2025/07/07 06:53:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:53:28 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:53:28 runner 6 connected 2025/07/07 06:53:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/07 06:53:38 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/07 06:53:50 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/07/07 06:54:00 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:54:24 runner 1 connected 2025/07/07 06:54:26 runner 7 connected 2025/07/07 06:54:44 base crash: WARNING in io_ring_exit_work 2025/07/07 06:54:46 runner 2 connected 2025/07/07 06:54:50 runner 0 connected 2025/07/07 06:55:05 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:55:20 patched crashed: WARNING in dbAdjTree [need repro = false] 2025/07/07 06:55:20 patched crashed: WARNING in dbAdjTree [need repro = false] 2025/07/07 06:55:22 patched crashed: WARNING in dbAdjTree [need repro = false] 2025/07/07 06:55:25 base crash: WARNING in dbAdjTree 2025/07/07 06:55:26 base crash: WARNING in dbAdjTree 2025/07/07 06:55:34 runner 3 connected 2025/07/07 06:55:35 patched crashed: WARNING in dbAdjTree [need repro = false] 2025/07/07 06:55:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 06:56:02 runner 2 connected 2025/07/07 06:56:08 runner 5 connected 2025/07/07 
06:56:16 runner 1 connected 2025/07/07 06:56:17 runner 0 connected 2025/07/07 06:56:20 runner 7 connected 2025/07/07 06:56:21 runner 0 connected 2025/07/07 06:56:23 runner 6 connected 2025/07/07 06:56:32 runner 3 connected 2025/07/07 06:56:36 patched crashed: possible deadlock in team_del_slave [need repro = true] 2025/07/07 06:56:36 scheduled a reproduction of 'possible deadlock in team_del_slave' 2025/07/07 06:56:39 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 33369, "corpus": 44213, "corpus [modified]": 1086, "coverage": 307965, "distributor delayed": 45374, "distributor undelayed": 45374, "distributor violated": 63, "exec candidate": 45224, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 714, "exec seeds": 0, "exec smash": 0, "exec total [base]": 115681, "exec total [new]": 290201, "exec triage": 140765, "executor restarts": 826, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 311935, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45139, "no exec duration": 41477000000, "no exec requests": 370, "pending": 36, "prog exec time": 228, "reproducing": 0, "rpc recv": 8217074992, "rpc sent": 1743828832, "signal": 302228, "smash jobs": 0, "triage jobs": 0, "vm output": 40595189, "vm restarts [base]": 36, "vm restarts [new]": 109 } 2025/07/07 06:57:05 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:57:34 runner 4 connected 2025/07/07 06:58:02 runner 8 connected 2025/07/07 06:58:05 base crash: possible deadlock in team_del_slave 2025/07/07 06:58:50 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 06:58:53 runner 3 connected 2025/07/07 06:59:06 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 06:59:06 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 06:59:40 runner 1 connected 2025/07/07 06:59:49 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 06:59:50 base crash: KASAN: slab-use-after-free Read in jfs_lazycommit 2025/07/07 06:59:54 runner 9 connected 2025/07/07 07:00:46 runner 7 connected 2025/07/07 07:00:47 runner 2 connected 2025/07/07 07:01:18 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:01:23 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 07:01:39 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 16821, "corpus": 44782, "corpus [modified]": 1104, "coverage": 309218, "distributor delayed": 45876, "distributor undelayed": 45876, "distributor violated": 63, "exec candidate": 61772, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 765, "exec seeds": 0, "exec smash": 0, "exec total [base]": 126752, "exec total [new]": 314226, "exec triage": 142740, "executor restarts": 911, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 313284, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: 
integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45760, "no exec duration": 41900000000, "no exec requests": 373, "pending": 37, "prog exec time": 250, "reproducing": 0, "rpc recv": 8542861596, "rpc sent": 1881427792, "signal": 303484, "smash jobs": 0, "triage jobs": 0, "vm output": 44314768, "vm restarts [base]": 38, "vm restarts [new]": 114 } 2025/07/07 07:01:45 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 07:01:45 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 07:01:55 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/07 07:01:55 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/07 07:02:00 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/07/07 07:02:00 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/07/07 07:02:07 runner 5 connected 2025/07/07 07:02:12 runner 6 connected 2025/07/07 07:02:33 runner 8 connected 2025/07/07 07:02:45 runner 4 connected 2025/07/07 07:02:49 runner 0 connected 2025/07/07 07:03:47 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:04:09 triaged 93.8% of the corpus 2025/07/07 07:04:09 starting bug reproductions 2025/07/07 07:04:09 starting bug reproductions (max 10 VMs, 7 repros) 2025/07/07 07:04:09 reproduction of "WARNING in io_ring_exit_work" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "possible deadlock in team_del_slave" aborted: it's no longer needed 2025/07/07 07:04:09 reproduction of "INFO: task hung in v9fs_evict_inode" aborted: it's no longer needed 2025/07/07 07:04:09 start reproducing 'kernel BUG in txUnlock' 2025/07/07 07:04:09 start reproducing 'KASAN: slab-use-after-free Read in l2cap_unregister_user' 2025/07/07 07:04:09 start reproducing 'possible deadlock in attr_data_get_block' 2025/07/07 07:04:09 start reproducing 'possible deadlock in ntfs_fiemap' 2025/07/07 07:04:09 start reproducing 'possible deadlock in ext4_readpage_inline' 2025/07/07 07:04:09 start reproducing 'possible deadlock in team_device_event' 2025/07/07 07:04:09 start reproducing 'INFO: task hung in sync_bdevs' 2025/07/07 07:04:24 base crash: lost connection to test machine 2025/07/07 07:04:36 runner 3 connected 2025/07/07 07:05:02 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:05:14 runner 2 connected 2025/07/07 07:05:35 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:05:52 runner 1 connected 2025/07/07 07:05:57 reproducing crash 'kernel BUG in txUnlock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/07/07 07:06:24 runner 0 connected 2025/07/07 07:06:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 4877, "corpus": 44824, "corpus [modified]": 1106, "coverage": 309293, "distributor delayed": 45977, "distributor undelayed": 45977, "distributor violated": 64, "exec candidate": 73716, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 820, "exec seeds": 0, "exec smash": 0, "exec total [base]": 135348, "exec total [new]": 326501, "exec triage": 143001, "executor restarts": 943, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 313410, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45826, "no exec duration": 42062000000, "no exec requests": 375, "pending": 24, "prog exec time": 0, "reproducing": 7, "rpc recv": 8834701832, "rpc sent": 1950152360, "signal": 303556, "smash jobs": 0, "triage jobs": 0, "vm output": 46292935, "vm restarts [base]": 42, "vm restarts [new]": 119 } 2025/07/07 07:08:59 repro finished 'possible deadlock in team_device_event', repro=true crepro=false desc='possible deadlock in team_device_event' hub=false from_dashboard=false 2025/07/07 07:08:59 found repro for "possible deadlock in team_device_event" (orig title: "-SAME-", reliability: 1), took 4.84 minutes 2025/07/07 07:08:59 start reproducing 'possible deadlock in __del_gendisk' 2025/07/07 07:08:59 "possible deadlock in team_device_event": saved crash log into 1751872139.crash.log 2025/07/07 07:08:59 "possible deadlock in team_device_event": saved repro log into 1751872139.repro.log 2025/07/07 07:10:14 attempt #0 to run "possible deadlock in team_device_event" on base: crashed with possible deadlock in team_device_event 2025/07/07 07:10:14 crashes both: possible deadlock in team_device_event / possible deadlock in team_device_event 2025/07/07 07:10:15 reproducing crash 'kernel BUG in txUnlock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 07:11:02 reproducing crash 'kernel BUG in txUnlock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 07:11:02 runner 0 connected 2025/07/07 07:11:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 4877, "corpus": 44824, "corpus [modified]": 1106, "coverage": 309293, "distributor delayed": 45977, "distributor undelayed": 45977, "distributor violated": 64, "exec candidate": 73716, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 820, "exec seeds": 0, "exec smash": 0, "exec total [base]": 135531, "exec total [new]": 326501, "exec triage": 143001, "executor restarts": 943, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 313410, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 12, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, 
"minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45826, "no exec duration": 42062000000, "no exec requests": 375, "pending": 23, "prog exec time": 0, "reproducing": 7, "rpc recv": 8865479656, "rpc sent": 1950403336, "signal": 303556, "smash jobs": 0, "triage jobs": 0, "vm output": 49778294, "vm restarts [base]": 43, "vm restarts [new]": 119 } 2025/07/07 07:11:39 reproducing crash 'kernel BUG in txUnlock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 07:11:39 repro finished 'kernel BUG in txUnlock', repro=true crepro=false desc='kernel BUG in txUnlock' hub=false from_dashboard=false 2025/07/07 07:11:39 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:11:39 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/07/07 07:11:39 found repro for "kernel BUG in txUnlock" (orig title: "-SAME-", reliability: 1), took 7.50 minutes 2025/07/07 07:11:39 start reproducing 'possible deadlock in ocfs2_fiemap' 2025/07/07 07:11:39 "kernel BUG in txUnlock": saved crash log into 1751872299.crash.log 2025/07/07 07:11:39 "kernel BUG in txUnlock": saved repro log into 1751872299.repro.log 2025/07/07 07:11:43 base crash: no output from test machine 2025/07/07 07:11:46 base crash: no output from test machine 2025/07/07 07:11:51 base crash: no output from test machine 2025/07/07 07:12:34 runner 2 connected 2025/07/07 07:12:35 runner 3 connected 2025/07/07 07:12:42 runner 1 connected 2025/07/07 07:12:54 attempt #0 to run "kernel BUG in txUnlock" on base: crashed with kernel BUG in txUnlock 2025/07/07 07:12:54 crashes both: kernel BUG in txUnlock / kernel BUG in txUnlock 2025/07/07 07:13:43 runner 0 connected 2025/07/07 07:16:30 repro finished 'possible deadlock in ext4_readpage_inline', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 07:16:30 failed repro for "possible deadlock in ext4_readpage_inline", err=%!s() 2025/07/07 07:16:30 reproduction of "kernel BUG in dnotify_free_mark" aborted: it's no longer needed 2025/07/07 07:16:30 reproduction of "kernel BUG in dnotify_free_mark" aborted: it's no longer needed 2025/07/07 07:16:30 reproduction of "kernel BUG in dnotify_free_mark" aborted: it's no longer needed 2025/07/07 07:16:30 reproduction of "possible deadlock in blk_mq_update_nr_hw_queues" aborted: it's no longer needed 2025/07/07 07:16:30 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:16:30 reproduction of "possible deadlock in team_del_slave" aborted: it's no longer needed 2025/07/07 07:16:30 start reproducing 'possible deadlock in page_cache_ra_unbounded' 2025/07/07 07:16:30 "possible deadlock in ext4_readpage_inline": saved crash log into 1751872590.crash.log 2025/07/07 07:16:30 "possible deadlock in ext4_readpage_inline": saved repro log into 1751872590.repro.log 2025/07/07 07:16:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 4877, "corpus": 44824, "corpus [modified]": 1106, "coverage": 309293, "distributor delayed": 45977, "distributor undelayed": 45977, "distributor violated": 64, "exec candidate": 73716, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 820, "exec seeds": 0, "exec smash": 0, "exec total [base]": 135531, "exec 
total [new]": 326501, "exec triage": 143001, "executor restarts": 943, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 313410, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45826, "no exec duration": 42062000000, "no exec requests": 375, "pending": 13, "prog exec time": 0, "reproducing": 7, "rpc recv": 8988392064, "rpc sent": 1950404456, "signal": 303556, "smash jobs": 0, "triage jobs": 0, "vm output": 54268922, "vm restarts [base]": 47, "vm restarts [new]": 119 } 2025/07/07 07:17:08 repro finished 'possible deadlock in attr_data_get_block', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 07:17:08 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:17:08 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/07/07 07:17:08 failed repro for "possible deadlock in attr_data_get_block", err=%!s() 2025/07/07 07:17:08 reproduction of "possible deadlock in team_del_slave" aborted: it's no longer needed 2025/07/07 07:17:08 start reproducing 'possible deadlock in ocfs2_xattr_set' 2025/07/07 07:17:08 "possible deadlock in attr_data_get_block": saved crash log into 1751872628.crash.log 2025/07/07 07:17:08 "possible deadlock in attr_data_get_block": saved repro log into 1751872628.repro.log 2025/07/07 07:17:15 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 07:17:15 failed repro for "possible deadlock in ntfs_fiemap", err=%!s() 2025/07/07 07:17:15 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/07 07:17:15 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/07 07:17:15 start reproducing 'unregister_netdevice: waiting for DEV to become free' 2025/07/07 07:17:15 "possible deadlock in ntfs_fiemap": saved crash log into 1751872635.crash.log 2025/07/07 07:17:15 "possible deadlock in ntfs_fiemap": saved repro log into 1751872635.repro.log 2025/07/07 07:17:33 base crash: no output from test machine 2025/07/07 07:17:35 base crash: no output from test machine 2025/07/07 07:17:41 base crash: no output from test machine 2025/07/07 07:18:22 runner 2 connected 2025/07/07 07:18:23 runner 3 connected 2025/07/07 07:18:30 runner 1 connected 2025/07/07 07:18:34 repro finished 'KASAN: slab-use-after-free Read in l2cap_unregister_user', repro=true crepro=false desc='lost connection to test machine' hub=false from_dashboard=false 2025/07/07 07:18:34 found repro for "lost connection to test machine" (orig title: "KASAN: slab-use-after-free Read in 
l2cap_unregister_user", reliability: 1), took 14.41 minutes 2025/07/07 07:18:34 "lost connection to test machine": saved crash log into 1751872714.crash.log 2025/07/07 07:18:34 "lost connection to test machine": saved repro log into 1751872714.repro.log 2025/07/07 07:18:38 runner 1 connected 2025/07/07 07:19:23 runner 0 connected 2025/07/07 07:20:26 attempt #0 to run "lost connection to test machine" on base: crashed with lost connection to test machine 2025/07/07 07:20:26 crashes both: lost connection to test machine / lost connection to test machine 2025/07/07 07:21:16 runner 0 connected 2025/07/07 07:21:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 2001, "corpus": 44847, "corpus [modified]": 1109, "coverage": 309328, "distributor delayed": 46001, "distributor undelayed": 46001, "distributor violated": 71, "exec candidate": 76592, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 822, "exec seeds": 0, "exec smash": 0, "exec total [base]": 138421, "exec total [new]": 329469, "exec triage": 143087, "executor restarts": 956, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 313448, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45852, "no exec duration": 471008000000, "no exec requests": 1792, "pending": 1, "prog exec time": 266, "reproducing": 6, "rpc recv": 9179722584, "rpc sent": 1974380664, "signal": 303598, "smash jobs": 0, "triage jobs": 0, "vm output": 57554124, "vm restarts [base]": 51, "vm restarts [new]": 121 } 2025/07/07 07:22:26 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 07:23:17 runner 2 connected 2025/07/07 07:23:18 repro finished 'possible deadlock in ocfs2_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 07:23:18 failed repro for "possible deadlock in ocfs2_fiemap", err=%!s() 2025/07/07 07:23:18 "possible deadlock in ocfs2_fiemap": saved crash log into 1751872998.crash.log 2025/07/07 07:23:18 "possible deadlock in ocfs2_fiemap": saved repro log into 1751872998.repro.log 2025/07/07 07:23:46 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:24:09 runner 2 connected 2025/07/07 07:24:21 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 07:24:35 runner 3 connected 2025/07/07 07:25:11 runner 0 connected 2025/07/07 07:25:38 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:26:25 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:26:27 runner 1 connected 2025/07/07 07:26:31 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/07 07:26:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44859, "corpus [modified]": 1109, "coverage": 309350, "distributor delayed": 46033, "distributor undelayed": 46032, "distributor violated": 71, "exec candidate": 78593, "exec collide": 332, "exec fuzz": 630, "exec gen": 34, "exec hints": 70, "exec inject": 0, "exec minimize": 126, "exec retries": 842, "exec seeds": 15, "exec smash": 87, "exec total [base]": 141832, "exec total [new]": 332877, "exec triage": 143179, "executor restarts": 988, "fault 
jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 313570, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 78, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45887, "no exec duration": 1297905000000, "no exec requests": 4233, "pending": 1, "prog exec time": 422, "reproducing": 5, "rpc recv": 9343250408, "rpc sent": 2019650160, "signal": 303616, "smash jobs": 1, "triage jobs": 10, "vm output": 60591532, "vm restarts [base]": 53, "vm restarts [new]": 124 } 2025/07/07 07:26:58 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:27:15 runner 0 connected 2025/07/07 07:27:21 runner 1 connected 2025/07/07 07:27:24 repro finished 'possible deadlock in page_cache_ra_unbounded', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 07:27:24 failed repro for "possible deadlock in page_cache_ra_unbounded", err=%!s() 2025/07/07 07:27:24 "possible deadlock in page_cache_ra_unbounded": saved crash log into 1751873244.crash.log 2025/07/07 07:27:24 "possible deadlock in page_cache_ra_unbounded": saved repro log into 1751873244.repro.log 2025/07/07 07:27:45 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown 2025/07/07 07:27:49 runner 1 connected 2025/07/07 07:28:16 repro finished 'possible deadlock in ocfs2_xattr_set', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 07:28:16 failed repro for "possible deadlock in ocfs2_xattr_set", err=%!s() 2025/07/07 07:28:16 "possible deadlock in ocfs2_xattr_set": saved crash log into 1751873296.crash.log 2025/07/07 07:28:16 "possible deadlock in ocfs2_xattr_set": saved repro log into 1751873296.repro.log 2025/07/07 07:28:18 runner 4 connected 2025/07/07 07:28:34 runner 3 connected 2025/07/07 07:30:51 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 07:30:51 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/07 07:31:07 runner 3 connected 2025/07/07 07:31:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44894, "corpus [modified]": 1110, "coverage": 309549, "distributor delayed": 46176, "distributor undelayed": 46175, "distributor violated": 71, "exec candidate": 78593, "exec collide": 1077, "exec fuzz": 2028, "exec gen": 105, "exec hints": 402, "exec inject": 0, "exec minimize": 833, "exec retries": 860, "exec seeds": 104, "exec smash": 813, "exec total [base]": 146111, "exec total [new]": 337204, "exec triage": 143408, "executor restarts": 1055, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 1, "max signal": 313800, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 556, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45962, "no exec duration": 1600169000000, "no exec requests": 5042, "pending": 1, "prog exec time": 632, "reproducing": 3, "rpc recv": 9562554560, "rpc sent": 2124249352, "signal": 303745, "smash jobs": 5, "triage jobs": 4, "vm output": 67693813, "vm restarts [base]": 55, "vm restarts [new]": 128 } 2025/07/07 07:31:39 runner 0 connected 2025/07/07 07:31:40 runner 0 connected 2025/07/07 07:31:46 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, 
SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:31:58 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 07:32:36 runner 2 connected 2025/07/07 07:32:47 runner 1 connected 2025/07/07 07:32:52 runner 5 connected 2025/07/07 07:33:00 base crash: lost connection to test machine 2025/07/07 07:33:02 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:33:15 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8338: connect: connection refused 2025/07/07 07:33:15 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8338: connect: connection refused 2025/07/07 07:33:25 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:33:41 runner 0 connected 2025/07/07 07:33:43 runner 2 connected 2025/07/07 07:34:06 runner 3 connected 2025/07/07 07:34:38 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:35:15 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:35:19 base crash: lost connection to test machine 2025/07/07 07:35:19 runner 3 connected 2025/07/07 07:35:56 runner 2 connected 2025/07/07 07:36:08 runner 0 connected 2025/07/07 07:36:30 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 07:36:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44915, "corpus [modified]": 1113, "coverage": 309583, "distributor delayed": 46331, "distributor undelayed": 46331, "distributor violated": 71, "exec candidate": 78593, "exec collide": 3083, "exec fuzz": 5703, "exec gen": 311, "exec hints": 604, "exec inject": 0, "exec minimize": 1441, "exec retries": 862, "exec seeds": 160, "exec smash": 1255, "exec total [base]": 151773, "exec total [new]": 344656, "exec triage": 143656, "executor restarts": 1204, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 313953, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1024, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46052, "no exec duration": 1612518000000, "no exec requests": 5062, "pending": 1, "prog exec time": 446, "reproducing": 3, "rpc recv": 9945217020, "rpc sent": 2289440256, "signal": 303778, "smash jobs": 4, "triage jobs": 12, "vm output": 75959655, "vm restarts [base]": 59, "vm restarts [new]": 135 } 2025/07/07 07:37:19 runner 3 connected 2025/07/07 07:37:25 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/07 07:37:46 patched crashed: KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry [need repro = true] 2025/07/07 07:37:46 scheduled a reproduction of 'KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry' 2025/07/07 07:37:46 start reproducing 'KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry' 2025/07/07 07:38:15 runner 2 connected 2025/07/07 07:38:37 base crash: lost connection to test machine 2025/07/07 07:38:39 base crash: possible deadlock in ocfs2_xattr_set 2025/07/07 07:38:42 runner 3 connected 2025/07/07 07:39:25 runner 3 connected 2025/07/07 07:39:30 runner 0 connected 2025/07/07 07:40:17 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/07/07 07:41:05 runner 5 connected 2025/07/07 07:41:31 base crash: possible deadlock in 
ocfs2_try_remove_refcount_tree 2025/07/07 07:41:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44939, "corpus [modified]": 1114, "coverage": 309625, "distributor delayed": 46448, "distributor undelayed": 46448, "distributor violated": 71, "exec candidate": 78593, "exec collide": 4362, "exec fuzz": 8188, "exec gen": 434, "exec hints": 1168, "exec inject": 0, "exec minimize": 2138, "exec retries": 862, "exec seeds": 231, "exec smash": 1841, "exec total [base]": 156029, "exec total [new]": 350669, "exec triage": 143865, "executor restarts": 1281, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 2, "max signal": 314058, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1499, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46122, "no exec duration": 1613518000000, "no exec requests": 5063, "pending": 1, "prog exec time": 536, "reproducing": 4, "rpc recv": 10163763412, "rpc sent": 2400750672, "signal": 303819, "smash jobs": 1, "triage jobs": 11, "vm output": 81322325, "vm restarts [base]": 62, "vm restarts [new]": 138 } 2025/07/07 07:42:21 runner 3 connected 2025/07/07 07:42:30 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/07 07:43:03 base crash: kernel BUG in txUnlock 2025/07/07 07:43:19 runner 0 connected 2025/07/07 07:43:53 runner 3 connected 2025/07/07 07:44:04 repro finished 'KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry', repro=true crepro=false desc='KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry' hub=false from_dashboard=false 2025/07/07 07:44:04 found repro for "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry" (orig title: "-SAME-", reliability: 1), took 6.22 minutes 2025/07/07 07:44:04 "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry": saved crash log into 1751874244.crash.log 2025/07/07 07:44:04 "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry": saved repro log into 1751874244.repro.log 2025/07/07 07:44:43 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:44:53 runner 0 connected 2025/07/07 07:45:24 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:45:29 attempt #0 to run "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry" on base: crashed with KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 2025/07/07 07:45:29 crashes both: KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry / KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 2025/07/07 07:45:31 runner 3 connected 2025/07/07 07:46:15 runner 5 connected 2025/07/07 07:46:18 runner 1 connected 2025/07/07 07:46:19 runner 0 connected 2025/07/07 07:46:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44962, "corpus [modified]": 1114, "coverage": 309659, "distributor delayed": 46537, "distributor undelayed": 46537, "distributor violated": 81, "exec candidate": 78593, "exec collide": 5238, "exec fuzz": 9813, "exec gen": 522, "exec hints": 1486, "exec inject": 0, "exec minimize": 2842, "exec retries": 873, "exec seeds": 297, "exec smash": 2267, "exec total [base]": 160482, "exec total [new]": 354933, "exec triage": 144009, "executor restarts": 1354, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 2, "max signal": 314233, "minimize: array": 0, "minimize: buffer": 0, 
"minimize: call": 1979, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46170, "no exec duration": 1618209000000, "no exec requests": 5068, "pending": 1, "prog exec time": 601, "reproducing": 3, "rpc recv": 10445873200, "rpc sent": 2528186232, "signal": 303853, "smash jobs": 2, "triage jobs": 11, "vm output": 85581342, "vm restarts [base]": 66, "vm restarts [new]": 142 } 2025/07/07 07:48:23 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:48:41 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 07:49:02 patched crashed: KASAN: slab-out-of-bounds Read in hfsplus_bnode_read [need repro = true] 2025/07/07 07:49:02 scheduled a reproduction of 'KASAN: slab-out-of-bounds Read in hfsplus_bnode_read' 2025/07/07 07:49:02 start reproducing 'KASAN: slab-out-of-bounds Read in hfsplus_bnode_read' 2025/07/07 07:49:06 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:49:12 runner 5 connected 2025/07/07 07:49:38 runner 2 connected 2025/07/07 07:49:40 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/07/07 07:49:51 runner 3 connected 2025/07/07 07:49:55 runner 3 connected 2025/07/07 07:50:11 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 07:50:13 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/07/07 07:50:29 runner 2 connected 2025/07/07 07:50:40 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/07/07 07:50:54 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 07:51:01 runner 3 connected 2025/07/07 07:51:02 runner 1 connected 2025/07/07 07:51:29 runner 2 connected 2025/07/07 07:51:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44991, "corpus [modified]": 1115, "coverage": 309722, "distributor delayed": 46648, "distributor undelayed": 46648, "distributor violated": 81, "exec candidate": 78593, "exec collide": 6317, "exec fuzz": 11836, "exec gen": 634, "exec hints": 1945, "exec inject": 0, "exec minimize": 3548, "exec retries": 881, "exec seeds": 378, "exec smash": 2822, "exec total [base]": 164008, "exec total [new]": 360135, "exec triage": 144187, "executor restarts": 1459, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 1, "max signal": 314392, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2453, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46232, "no exec duration": 1623287000000, "no exec requests": 5074, "pending": 1, "prog exec time": 459, "reproducing": 4, "rpc recv": 10742839340, "rpc sent": 2657352136, "signal": 303908, "smash jobs": 3, "triage jobs": 4, "vm output": 90699200, "vm restarts [base]": 70, "vm restarts [new]": 146 } 2025/07/07 07:51:43 runner 0 connected 2025/07/07 07:53:27 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 07:53:41 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 07:54:17 runner 2 connected 2025/07/07 07:54:29 runner 4 connected 2025/07/07 07:55:19 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro 
= false] 2025/07/07 07:55:39 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 07:56:08 runner 3 connected 2025/07/07 07:56:29 runner 3 connected 2025/07/07 07:56:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45002, "corpus [modified]": 1118, "coverage": 309751, "distributor delayed": 46746, "distributor undelayed": 46746, "distributor violated": 83, "exec candidate": 78593, "exec collide": 7859, "exec fuzz": 14895, "exec gen": 805, "exec hints": 2408, "exec inject": 0, "exec minimize": 4002, "exec retries": 895, "exec seeds": 413, "exec smash": 3096, "exec total [base]": 171026, "exec total [new]": 366310, "exec triage": 144348, "executor restarts": 1530, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 3, "max signal": 314487, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2780, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46287, "no exec duration": 1628596000000, "no exec requests": 5085, "pending": 1, "prog exec time": 413, "reproducing": 4, "rpc recv": 10927070020, "rpc sent": 2816221000, "signal": 303930, "smash jobs": 3, "triage jobs": 6, "vm output": 96349392, "vm restarts [base]": 72, "vm restarts [new]": 149 } 2025/07/07 07:56:42 repro finished 'KASAN: slab-out-of-bounds Read in hfsplus_bnode_read', repro=true crepro=false desc='KASAN: slab-out-of-bounds Read in hfsplus_bnode_read' hub=false from_dashboard=false 2025/07/07 07:56:42 found repro for "KASAN: slab-out-of-bounds Read in hfsplus_bnode_read" (orig title: "-SAME-", reliability: 1), took 7.51 minutes 2025/07/07 07:56:42 "KASAN: slab-out-of-bounds Read in hfsplus_bnode_read": saved crash log into 1751875002.crash.log 2025/07/07 07:56:42 "KASAN: slab-out-of-bounds Read in hfsplus_bnode_read": saved repro log into 1751875002.repro.log 2025/07/07 07:57:25 runner 0 connected 2025/07/07 07:57:58 attempt #0 to run "KASAN: slab-out-of-bounds Read in hfsplus_bnode_read" on base: crashed with KASAN: slab-out-of-bounds Read in hfsplus_bnode_read 2025/07/07 07:57:58 crashes both: KASAN: slab-out-of-bounds Read in hfsplus_bnode_read / KASAN: slab-out-of-bounds Read in hfsplus_bnode_read 2025/07/07 07:58:05 base crash: lost connection to test machine 2025/07/07 07:58:48 runner 0 connected 2025/07/07 07:58:54 runner 1 connected 2025/07/07 08:00:23 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = true] 2025/07/07 08:00:23 scheduled a reproduction of 'KASAN: out-of-bounds Read in ext4_xattr_set_entry' 2025/07/07 08:00:23 start reproducing 'KASAN: out-of-bounds Read in ext4_xattr_set_entry' 2025/07/07 08:01:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45019, "corpus [modified]": 1118, "coverage": 309792, "distributor delayed": 46863, "distributor undelayed": 46861, "distributor violated": 85, "exec candidate": 78593, "exec collide": 9288, "exec fuzz": 17626, "exec gen": 936, "exec hints": 3116, "exec inject": 0, "exec minimize": 4408, "exec retries": 903, "exec seeds": 472, "exec smash": 3537, "exec total [base]": 176690, "exec total [new]": 372416, "exec triage": 144540, "executor restarts": 1637, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 314857, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3046, "minimize: 
filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46353, "no exec duration": 1628613000000, "no exec requests": 5086, "pending": 1, "prog exec time": 426, "reproducing": 4, "rpc recv": 11053199924, "rpc sent": 2947070152, "signal": 303985, "smash jobs": 1, "triage jobs": 6, "vm output": 101630978, "vm restarts [base]": 74, "vm restarts [new]": 150 } 2025/07/07 08:02:04 patched crashed: kernel BUG in may_open [need repro = true] 2025/07/07 08:02:04 scheduled a reproduction of 'kernel BUG in may_open' 2025/07/07 08:02:04 start reproducing 'kernel BUG in may_open' 2025/07/07 08:03:01 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:03:03 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:03:19 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 08:03:53 runner 3 connected 2025/07/07 08:03:56 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:04:07 runner 4 connected 2025/07/07 08:04:12 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 08:04:22 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:05:01 runner 2 connected 2025/07/07 08:05:15 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:05:46 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:06:35 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:06:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45034, "corpus [modified]": 1119, "coverage": 309818, "distributor delayed": 46914, "distributor undelayed": 46914, "distributor violated": 92, "exec candidate": 78593, "exec collide": 10137, "exec fuzz": 19220, "exec gen": 1006, "exec hints": 4134, "exec inject": 0, "exec minimize": 4842, "exec retries": 927, "exec seeds": 515, "exec smash": 3910, "exec total [base]": 183799, "exec total [new]": 376912, "exec triage": 144633, "executor restarts": 1676, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 5, "max signal": 314931, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3257, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: 
resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46384, "no exec duration": 1629254000000, "no exec requests": 5090, "pending": 1, "prog exec time": 550, "reproducing": 5, "rpc recv": 11166515772, "rpc sent": 3071514328, "signal": 304009, "smash jobs": 1, "triage jobs": 3, "vm output": 104644192, "vm restarts [base]": 75, "vm restarts [new]": 152 } 2025/07/07 08:07:19 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:07:27 base crash: lost connection to test machine 2025/07/07 08:07:54 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 08:08:10 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:08:16 runner 1 connected 2025/07/07 08:08:43 runner 5 connected 2025/07/07 08:09:10 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:09:36 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:09:56 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:10:28 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:10:35 repro finished 'KASAN: out-of-bounds Read in ext4_xattr_set_entry', repro=true crepro=false desc='KASAN: out-of-bounds Read in ext4_xattr_set_entry' hub=false from_dashboard=false 2025/07/07 08:10:35 found repro for "KASAN: out-of-bounds Read in ext4_xattr_set_entry" (orig title: "-SAME-", reliability: 1), took 10.15 minutes 2025/07/07 08:10:35 "KASAN: out-of-bounds Read in ext4_xattr_set_entry": saved crash log into 1751875835.crash.log 2025/07/07 08:10:35 "KASAN: out-of-bounds Read in ext4_xattr_set_entry": saved repro log into 1751875835.repro.log 2025/07/07 08:10:42 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:10:42 runner 0 connected 2025/07/07 08:10:45 runner 5 connected 2025/07/07 08:11:08 patched crashed: INFO: task hung in read_part_sector [need repro = true] 2025/07/07 08:11:08 scheduled a reproduction of 'INFO: task hung in read_part_sector' 2025/07/07 08:11:08 start reproducing 'INFO: task hung in read_part_sector' 2025/07/07 08:11:31 runner 3 connected 2025/07/07 08:11:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45037, "corpus [modified]": 1120, "coverage": 309823, "distributor delayed": 46929, "distributor undelayed": 46929, "distributor violated": 93, "exec candidate": 78593, "exec collide": 10695, "exec fuzz": 20218, "exec gen": 1054, "exec hints": 4648, "exec inject": 0, "exec minimize": 4923, "exec retries": 936, "exec seeds": 524, "exec smash": 3954, "exec total [base]": 
186782, "exec total [new]": 379198, "exec triage": 144661, "executor restarts": 1710, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 4, "max signal": 314953, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3316, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46395, "no exec duration": 2178473000000, "no exec requests": 6499, "pending": 1, "prog exec time": 715, "reproducing": 5, "rpc recv": 11329234048, "rpc sent": 3140268800, "signal": 304014, "smash jobs": 3, "triage jobs": 2, "vm output": 107481970, "vm restarts [base]": 76, "vm restarts [new]": 156 } 2025/07/07 08:11:42 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:11:52 attempt #0 to run "KASAN: out-of-bounds Read in ext4_xattr_set_entry" on base: crashed with KASAN: out-of-bounds Read in ext4_xattr_set_entry 2025/07/07 08:11:52 crashes both: KASAN: out-of-bounds Read in ext4_xattr_set_entry / KASAN: out-of-bounds Read in ext4_xattr_set_entry 2025/07/07 08:11:57 runner 4 connected 2025/07/07 08:12:02 repro finished 'possible deadlock in __del_gendisk', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 08:12:02 failed repro for "possible deadlock in __del_gendisk", err=%!s() 2025/07/07 08:12:02 start reproducing 'possible deadlock in __del_gendisk' 2025/07/07 08:12:02 "possible deadlock in __del_gendisk": saved crash log into 1751875922.crash.log 2025/07/07 08:12:02 "possible deadlock in __del_gendisk": saved repro log into 1751875922.repro.log 2025/07/07 08:12:04 base crash: WARNING in io_ring_exit_work 2025/07/07 08:12:14 base crash: INFO: task hung in lock_metapage 2025/07/07 08:12:22 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:12:23 base crash: INFO: task hung in read_part_sector 2025/07/07 08:12:41 runner 0 connected 2025/07/07 08:12:54 runner 3 connected 2025/07/07 08:13:03 runner 1 connected 2025/07/07 08:13:11 runner 2 connected 2025/07/07 08:13:18 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:15:21 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:15:57 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:16:32 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/07/07 08:16:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45051, "corpus [modified]": 1121, "coverage": 309836, "distributor delayed": 46990, 
"distributor undelayed": 46988, "distributor violated": 99, "exec candidate": 78593, "exec collide": 11646, "exec fuzz": 21998, "exec gen": 1148, "exec hints": 5093, "exec inject": 0, "exec minimize": 5196, "exec retries": 936, "exec seeds": 558, "exec smash": 4305, "exec total [base]": 190821, "exec total [new]": 383235, "exec triage": 144769, "executor restarts": 1777, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 2, "max signal": 315054, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3583, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46435, "no exec duration": 2377718000000, "no exec requests": 7131, "pending": 0, "prog exec time": 459, "reproducing": 5, "rpc recv": 11502653860, "rpc sent": 3230707888, "signal": 304027, "smash jobs": 1, "triage jobs": 3, "vm output": 111201820, "vm restarts [base]": 80, "vm restarts [new]": 157 } 2025/07/07 08:16:46 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/07/07 08:17:07 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:17:10 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 08:17:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 08:17:20 runner 3 connected 2025/07/07 08:17:34 runner 2 connected 2025/07/07 08:17:41 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 08:17:57 runner 1 connected 2025/07/07 08:18:00 runner 5 connected 2025/07/07 08:18:02 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:18:02 repro finished 'kernel BUG in may_open', repro=true crepro=false desc='kernel BUG in may_open' hub=false from_dashboard=false 2025/07/07 08:18:02 found repro for "kernel BUG in may_open" (orig title: "-SAME-", reliability: 1), took 15.75 minutes 2025/07/07 08:18:02 "kernel BUG in may_open": saved crash log into 1751876282.crash.log 2025/07/07 08:18:02 "kernel BUG in may_open": saved repro log into 1751876282.repro.log 2025/07/07 08:18:31 runner 3 connected 2025/07/07 08:19:09 runner 0 connected 2025/07/07 08:19:16 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 08:19:23 attempt #0 to run "kernel BUG in may_open" on base: crashed with kernel BUG in may_open 2025/07/07 08:19:23 crashes both: kernel BUG in may_open / kernel BUG in may_open 2025/07/07 08:19:28 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 08:20:06 runner 5 connected 2025/07/07 08:20:13 runner 0 connected 2025/07/07 08:20:17 runner 2 connected 2025/07/07 08:21:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45064, "corpus [modified]": 1121, "coverage": 309927, "distributor delayed": 47041, "distributor undelayed": 47041, "distributor violated": 99, "exec candidate": 78593, "exec collide": 12496, "exec fuzz": 23629, "exec gen": 1239, "exec hints": 5616, "exec inject": 0, "exec minimize": 5651, "exec retries": 936, "exec seeds": 599, "exec smash": 4618, "exec total [base]": 194068, "exec total [new]": 387250, 
"exec triage": 144876, "executor restarts": 1835, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 2, "max signal": 315181, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3872, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46474, "no exec duration": 2478902000000, "no exec requests": 7416, "pending": 0, "prog exec time": 744, "reproducing": 4, "rpc recv": 11810219664, "rpc sent": 3320842176, "signal": 304090, "smash jobs": 3, "triage jobs": 8, "vm output": 114442838, "vm restarts [base]": 85, "vm restarts [new]": 161 } 2025/07/07 08:22:43 patched crashed: kernel BUG in ocfs2_write_cluster_by_desc [need repro = true] 2025/07/07 08:22:43 scheduled a reproduction of 'kernel BUG in ocfs2_write_cluster_by_desc' 2025/07/07 08:22:43 start reproducing 'kernel BUG in ocfs2_write_cluster_by_desc' 2025/07/07 08:22:58 base crash: kernel BUG in ocfs2_write_cluster_by_desc 2025/07/07 08:23:32 runner 4 connected 2025/07/07 08:23:40 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:23:47 runner 0 connected 2025/07/07 08:24:02 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/07/07 08:24:02 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread' 2025/07/07 08:24:02 start reproducing 'INFO: task hung in bch2_journal_reclaim_thread' 2025/07/07 08:24:59 runner 5 connected 2025/07/07 08:25:26 base crash: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 08:26:17 runner 2 connected 2025/07/07 08:26:23 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:26:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45070, "corpus [modified]": 1122, "coverage": 309940, "distributor delayed": 47068, "distributor undelayed": 47061, "distributor violated": 102, "exec candidate": 78593, "exec collide": 13078, "exec fuzz": 24775, "exec gen": 1293, "exec hints": 5796, "exec inject": 0, "exec minimize": 5965, "exec retries": 952, "exec seeds": 614, "exec smash": 4744, "exec total [base]": 197312, "exec total [new]": 389723, "exec triage": 144911, "executor restarts": 1895, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 1, "max signal": 315206, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4118, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46489, "no exec duration": 2605337000000, "no exec requests": 7736, "pending": 0, "prog exec time": 478, "reproducing": 6, "rpc recv": 11945347692, "rpc sent": 3405629408, "signal": 304102, "smash jobs": 1, "triage jobs": 7, "vm output": 117553939, "vm restarts [base]": 87, "vm restarts [new]": 163 } 2025/07/07 08:27:07 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:28:54 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:29:39 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:31:18 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:31:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45084, "corpus [modified]": 1122, "coverage": 309957, "distributor delayed": 47086, "distributor undelayed": 47086, "distributor violated": 109, "exec candidate": 78593, "exec collide": 13841, "exec fuzz": 26250, "exec gen": 1362, "exec hints": 6245, "exec inject": 0, "exec minimize": 6331, "exec retries": 958, "exec seeds": 660, "exec smash": 5187, "exec total [base]": 201006, "exec total [new]": 393422, "exec triage": 144995, "executor restarts": 1926, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 2, "max signal": 315236, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4347, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46514, "no exec duration": 3328103000000, "no exec requests": 9827, "pending": 0, "prog exec time": 276, "reproducing": 6, "rpc recv": 11960223840, "rpc sent": 3484999952, "signal": 304119, "smash jobs": 0, "triage jobs": 2, "vm output": 121801640, "vm restarts [base]": 87, "vm restarts [new]": 163 } 2025/07/07 08:31:49 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:32:33 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:33:12 repro finished 'INFO: task hung in sync_bdevs', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 08:33:12 failed repro for "INFO: task hung in sync_bdevs", err=%!s() 2025/07/07 08:33:12 "INFO: task hung in sync_bdevs": saved crash log into 1751877192.crash.log 2025/07/07 08:33:12 "INFO: task hung in sync_bdevs": saved repro log into 1751877192.repro.log 2025/07/07 08:33:14 base crash: SYZFAIL: ebtable: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 2025/07/07 08:33:19 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 08:33:21 runner 0 connected 2025/07/07 08:33:34 reproducing crash 'kernel BUG in 
ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:34:03 runner 0 connected 2025/07/07 08:34:08 runner 4 connected 2025/07/07 08:34:19 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6236: connect: connection refused 2025/07/07 08:34:19 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6236: connect: connection refused 2025/07/07 08:34:21 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16848: connect: connection refused 2025/07/07 08:34:21 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:16848: connect: connection refused 2025/07/07 08:34:29 base crash: lost connection to test machine 2025/07/07 08:34:30 reproducing crash 'kernel BUG in ocfs2_write_cluster_by_desc': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/aops.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/07 08:34:30 repro finished 'kernel BUG in ocfs2_write_cluster_by_desc', repro=true crepro=false desc='kernel BUG in ocfs2_write_cluster_by_desc' hub=false from_dashboard=false 2025/07/07 08:34:30 found repro for "kernel BUG in ocfs2_write_cluster_by_desc" (orig title: "-SAME-", reliability: 1), took 11.30 minutes 2025/07/07 08:34:30 "kernel BUG in ocfs2_write_cluster_by_desc": saved crash log into 1751877270.crash.log 2025/07/07 08:34:30 "kernel BUG in ocfs2_write_cluster_by_desc": saved repro log into 1751877270.repro.log 2025/07/07 08:34:31 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:34:50 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 08:35:18 runner 3 connected 2025/07/07 08:35:20 runner 5 connected 2025/07/07 08:35:38 runner 1 connected 2025/07/07 08:35:45 attempt #0 to run "kernel BUG in ocfs2_write_cluster_by_desc" on base: crashed with kernel BUG in ocfs2_write_cluster_by_desc 2025/07/07 08:35:45 crashes both: kernel BUG in ocfs2_write_cluster_by_desc / kernel BUG in ocfs2_write_cluster_by_desc 2025/07/07 08:35:52 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:36:35 runner 0 connected 2025/07/07 08:36:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45088, "corpus [modified]": 1124, "coverage": 309966, "distributor delayed": 47101, "distributor undelayed": 47098, "distributor violated": 109, "exec candidate": 78593, "exec collide": 14726, "exec fuzz": 27836, "exec gen": 1460, "exec hints": 6717, "exec inject": 0, "exec minimize": 6436, "exec retries": 972, "exec seeds": 672, "exec smash": 5281, "exec total [base]": 204301, "exec total [new]": 396721, "exec triage": 145025, "executor restarts": 1977, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 2, "max signal": 315254, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4429, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46527, "no exec duration": 3771867000000, "no exec requests": 11054, "pending": 0, "prog exec time": 386, "reproducing": 4, "rpc recv": 12154031120, "rpc sent": 3561336488, "signal": 304126, "smash jobs": 2, "triage 
jobs": 3, "vm output": 126530707, "vm restarts [base]": 91, "vm restarts [new]": 166 } 2025/07/07 08:36:39 runner 5 connected 2025/07/07 08:38:07 base crash: WARNING in path_noexec 2025/07/07 08:38:08 patched crashed: WARNING in path_noexec [need repro = false] 2025/07/07 08:38:12 base crash: lost connection to test machine 2025/07/07 08:38:18 base crash: WARNING in path_noexec 2025/07/07 08:38:19 runner 1 connected 2025/07/07 08:38:57 runner 4 connected 2025/07/07 08:38:57 runner 0 connected 2025/07/07 08:39:01 runner 2 connected 2025/07/07 08:39:07 runner 1 connected 2025/07/07 08:39:14 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 08:40:05 runner 3 connected 2025/07/07 08:40:36 base crash: kernel BUG in may_open 2025/07/07 08:40:56 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:41:05 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:41:09 patched crashed: INFO: trying to register non-static key in ocfs2_dlm_shutdown [need repro = false] 2025/07/07 08:41:26 runner 3 connected 2025/07/07 08:41:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45098, "corpus [modified]": 1126, "coverage": 309976, "distributor delayed": 47147, "distributor undelayed": 47143, "distributor violated": 109, "exec candidate": 78593, "exec collide": 15859, "exec fuzz": 29983, "exec gen": 1574, "exec hints": 7110, "exec inject": 0, "exec minimize": 6744, "exec retries": 974, "exec seeds": 701, "exec smash": 5471, "exec total [base]": 208091, "exec total [new]": 401114, "exec triage": 145102, "executor restarts": 2063, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 2, "max signal": 315300, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4637, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46556, "no exec duration": 3906509000000, "no exec requests": 11486, "pending": 0, "prog exec time": 449, "reproducing": 4, "rpc recv": 12448196008, "rpc sent": 3689097904, "signal": 304136, "smash jobs": 2, "triage jobs": 4, "vm output": 130420231, "vm restarts [base]": 96, "vm restarts [new]": 169 } 2025/07/07 08:41:45 runner 1 connected 2025/07/07 08:41:54 runner 4 connected 2025/07/07 08:41:58 runner 0 connected 2025/07/07 08:41:59 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:42:07 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown 2025/07/07 08:42:48 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/07 08:42:49 runner 5 connected 2025/07/07 08:42:55 runner 2 connected 2025/07/07 08:43:07 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 08:43:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 08:43:37 runner 0 connected 2025/07/07 08:43:56 runner 3 connected 2025/07/07 08:44:04 runner 1 connected 2025/07/07 08:44:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 08:44:36 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 08:44:54 base crash: possible deadlock in ocfs2_init_acl 2025/07/07 08:45:12 runner 0 connected 2025/07/07 08:45:33 runner 4 connected 2025/07/07 08:45:38 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/07 08:45:44 runner 1 connected 2025/07/07 08:46:28 runner 2 
connected 2025/07/07 08:46:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45111, "corpus [modified]": 1126, "coverage": 309997, "distributor delayed": 47201, "distributor undelayed": 47201, "distributor violated": 109, "exec candidate": 78593, "exec collide": 16578, "exec fuzz": 31294, "exec gen": 1653, "exec hints": 7238, "exec inject": 0, "exec minimize": 6972, "exec retries": 975, "exec seeds": 740, "exec smash": 5748, "exec total [base]": 211330, "exec total [new]": 403989, "exec triage": 145185, "executor restarts": 2140, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 4, "max signal": 315353, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4828, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46588, "no exec duration": 4001146000000, "no exec requests": 11773, "pending": 0, "prog exec time": 713, "reproducing": 4, "rpc recv": 12840509124, "rpc sent": 3797621400, "signal": 304153, "smash jobs": 2, "triage jobs": 8, "vm output": 135001632, "vm restarts [base]": 101, "vm restarts [new]": 176 } 2025/07/07 08:47:47 repro finished 'possible deadlock in __del_gendisk', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 08:47:47 failed repro for "possible deadlock in __del_gendisk", err=%!s() 2025/07/07 08:47:47 "possible deadlock in __del_gendisk": saved crash log into 1751878067.crash.log 2025/07/07 08:47:47 "possible deadlock in __del_gendisk": saved repro log into 1751878067.repro.log 2025/07/07 08:48:36 runner 3 connected 2025/07/07 08:49:41 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:49:51 repro finished 'unregister_netdevice: waiting for DEV to become free', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 08:49:51 failed repro for "unregister_netdevice: waiting for DEV to become free", err=%!s() 2025/07/07 08:49:51 "unregister_netdevice: waiting for DEV to become free": saved crash log into 1751878191.crash.log 2025/07/07 08:49:51 "unregister_netdevice: waiting for DEV to become free": saved repro log into 1751878191.repro.log 2025/07/07 08:50:32 runner 3 connected 2025/07/07 08:50:39 runner 6 connected 2025/07/07 08:51:11 runner 2 connected 2025/07/07 08:51:28 patched crashed: WARNING in path_noexec [need repro = false] 2025/07/07 08:51:39 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45127, "corpus [modified]": 1126, "coverage": 310054, "distributor delayed": 47278, "distributor undelayed": 47278, "distributor violated": 109, "exec candidate": 78593, "exec collide": 17439, "exec fuzz": 32952, "exec gen": 1740, "exec hints": 7401, "exec inject": 0, "exec minimize": 7571, "exec retries": 995, "exec seeds": 779, "exec smash": 6130, "exec total [base]": 215209, "exec total [new]": 407923, "exec triage": 145309, "executor restarts": 2211, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 315523, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5332, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46633, "no exec duration": 4032365000000, "no exec requests": 11865, "pending": 0, "prog exec time": 655, "reproducing": 2, "rpc 
recv": 12995740280, "rpc sent": 3919579312, "signal": 304208, "smash jobs": 4, "triage jobs": 5, "vm output": 139587935, "vm restarts [base]": 101, "vm restarts [new]": 180 } 2025/07/07 08:52:25 base crash: kernel BUG in dnotify_free_mark 2025/07/07 08:52:25 runner 6 connected 2025/07/07 08:52:26 patched crashed: SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) [need repro = false] 2025/07/07 08:52:51 patched crashed: possible deadlock in __del_gendisk [need repro = true] 2025/07/07 08:52:51 scheduled a reproduction of 'possible deadlock in __del_gendisk' 2025/07/07 08:52:51 start reproducing 'possible deadlock in __del_gendisk' 2025/07/07 08:53:15 runner 2 connected 2025/07/07 08:53:23 runner 4 connected 2025/07/07 08:54:13 patched crashed: lost connection to test machine [need repro = false] 2025/07/07 08:54:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/07 08:55:03 runner 2 connected 2025/07/07 08:55:03 base crash: lost connection to test machine 2025/07/07 08:55:19 runner 6 connected 2025/07/07 08:55:33 patched crashed: WARNING in path_noexec [need repro = false] 2025/07/07 08:55:46 base crash: WARNING in path_noexec 2025/07/07 08:56:00 runner 1 connected 2025/07/07 08:56:29 runner 5 connected 2025/07/07 08:56:35 status reporting terminated 2025/07/07 08:56:35 bug reporting terminated 2025/07/07 08:56:35 failed to recv *flatrpc.InfoRequestRawT: read tcp 127.0.0.1:35409->127.0.0.1:58400: use of closed network connection 2025/07/07 08:56:35 syz-diff (base): kernel context loop terminated 2025/07/07 08:56:51 repro finished 'possible deadlock in __del_gendisk', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 08:57:33 repro finished 'INFO: task hung in bch2_journal_reclaim_thread', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 09:00:34 repro finished 'INFO: task hung in read_part_sector', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/07 09:00:34 syz-diff: repro loop terminated 2025/07/07 09:00:34 syz-diff (new): kernel context loop terminated 2025/07/07 09:00:34 diff fuzzing terminated 2025/07/07 09:00:34 fuzzing is finished 2025/07/07 09:00:34 status at the end: Title On-Base On-Patched INFO: task hung in bch2_journal_reclaim_thread 1 crashes INFO: task hung in lock_metapage 1 crashes INFO: task hung in read_part_sector 1 crashes 1 crashes INFO: task hung in sync_bdevs 1 crashes INFO: task hung in v9fs_evict_inode 1 crashes 1 crashes INFO: trying to register non-static key in ocfs2_dlm_shutdown 2 crashes 1 crashes KASAN: out-of-bounds Read in ext4_xattr_set_entry 1 crashes 1 crashes[reproduced] KASAN: slab-out-of-bounds Read in hfsplus_bnode_read 1 crashes 1 crashes[reproduced] KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 1 crashes 1 crashes[reproduced] KASAN: slab-use-after-free Read in jfs_lazycommit 1 crashes KASAN: slab-use-after-free Read in l2cap_unregister_user 1 crashes SYZFAIL: ebtable checkpoint: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 23 crashes 37 crashes SYZFAIL: ebtable: socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) 1 crashes WARNING in dbAdjTree 3 crashes 5 crashes WARNING in io_ring_exit_work 2 crashes 1 crashes WARNING in path_noexec 3 crashes 3 crashes kernel BUG in dnotify_free_mark 2 crashes 3 crashes kernel BUG in may_open 2 crashes 1 crashes[reproduced] kernel BUG in ocfs2_write_cluster_by_desc 2 crashes 1 crashes[reproduced] kernel BUG in txUnlock 2 crashes 3 crashes[reproduced] lost connection to test machine 13 
crashes 36 crashes[reproduced] no output from test machine 6 crashes possible deadlock in __del_gendisk 3 crashes possible deadlock in attr_data_get_block 1 crashes possible deadlock in blk_mq_update_nr_hw_queues 2 crashes 1 crashes possible deadlock in ext4_readpage_inline 1 crashes possible deadlock in ntfs_fiemap 1 crashes possible deadlock in ocfs2_fiemap 1 crashes possible deadlock in ocfs2_init_acl 7 crashes 16 crashes possible deadlock in ocfs2_page_mkwrite 1 crashes possible deadlock in ocfs2_reserve_suballoc_bits 6 crashes 12 crashes possible deadlock in ocfs2_try_remove_refcount_tree 4 crashes 13 crashes possible deadlock in ocfs2_xattr_set 1 crashes 1 crashes possible deadlock in page_cache_ra_unbounded 1 crashes possible deadlock in team_del_slave 1 crashes 3 crashes possible deadlock in team_device_event 1 crashes 7 crashes[reproduced] unregister_netdevice: waiting for DEV to become free 9 crashes 2 crashes
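
The periodic STAT { ... } entries in this run are single-line JSON snapshots, so the run can be summarized without any syzkaller tooling. Below is a minimal sketch in Go, not part of syzkaller: it assumes the log has been saved to a file (the name syz-diff.log is a placeholder) with each STAT entry on one line, as the tool writes it, and it prints only a handful of fields ("corpus", "coverage", "exec total [new]", "reproducing"), whose names are copied verbatim from the snapshots above.

// statsummary.go - a minimal sketch (not part of syzkaller): scan a syz-diff
// log, decode each single-line "STAT { ... }" JSON snapshot, and print a few
// fields per snapshot. File name and field selection are illustrative only.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("syz-diff.log") // assumed/placeholder log path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // STAT lines are a few KB long
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, "STAT {")
		if i < 0 {
			continue // not a stats snapshot
		}
		ts := strings.TrimSpace(line[:i]) // e.g. "2025/07/07 07:36:39"
		var stat map[string]float64       // every STAT value is numeric
		if err := json.Unmarshal([]byte(line[i+len("STAT "):]), &stat); err != nil {
			continue // skip a snapshot that got truncated in the log
		}
		fmt.Printf("%s corpus=%.0f coverage=%.0f execs(new)=%.0f reproducing=%.0f\n",
			ts, stat["corpus"], stat["coverage"], stat["exec total [new]"], stat["reproducing"])
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}

Run against this log, such a sketch would emit one line per snapshot, e.g. for the 07:36:39 entry: corpus=44915 coverage=309583 execs(new)=344656 reproducing=3.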