2025/07/18 13:12:24 adding directly modified files to focus_order: ["fs/ext4/extents.c"]
2025/07/18 13:12:25 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/07/18 13:12:27 no PCs in the areas of focused fuzzing, skipping the zero patched coverage check
2025/07/18 13:13:15 runner 2 connected
2025/07/18 13:13:15 runner 6 connected
2025/07/18 13:13:15 runner 1 connected
2025/07/18 13:13:15 runner 3 connected
2025/07/18 13:13:15 runner 9 connected
2025/07/18 13:13:15 runner 7 connected
2025/07/18 13:13:16 runner 3 connected
2025/07/18 13:13:16 runner 0 connected
2025/07/18 13:13:16 runner 1 connected
2025/07/18 13:13:16 runner 8 connected
2025/07/18 13:13:16 runner 2 connected
2025/07/18 13:13:16 runner 5 connected
2025/07/18 13:13:17 runner 4 connected
2025/07/18 13:13:18 runner 0 connected
2025/07/18 13:13:22 initializing coverage information...
2025/07/18 13:13:22 cover filter size: 0
2025/07/18 13:13:27 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed
to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no 
such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr 
: selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk 
[openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8043 2025/07/18 13:13:27 base: machine check complete 2025/07/18 13:13:28 discovered 7692 source files, 337732 symbols 2025/07/18 13:13:28 coverage filter: fs/ext4/extents.c: [workdir/fs/ext4/extents.c] 2025/07/18 13:13:28 cover filter size: 0 2025/07/18 13:13:31 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no 
such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed 
to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory 
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw 
[syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8043 2025/07/18 13:13:31 new: machine check complete 2025/07/18 13:13:33 new: adding 79104 seeds 2025/07/18 13:16:58 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:16:58 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:17:09 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:17:09 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:17:11 patched crashed: possible deadlock in __del_gendisk [need repro = true] 2025/07/18 13:17:11 scheduled a reproduction of 'possible deadlock in __del_gendisk' 2025/07/18 13:17:18 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:17:23 patched crashed: possible deadlock in __del_gendisk [need repro = true] 2025/07/18 13:17:23 scheduled a reproduction of 'possible deadlock in __del_gendisk' 2025/07/18 13:17:27 STAT { "buffer too small": 0, "candidate triage jobs": 45, "candidates": 72821, "corpus": 6194, "corpus [modified]": 167, "coverage": 179065, "distributor delayed": 4981, "distributor undelayed": 4966, "distributor violated": 1, "exec candidate": 6283, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 13122, "exec total [new]": 27324, "exec triage": 19423, "executor restarts": 122, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 180672, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 6283, "no exec duration": 46804000000, "no exec requests": 352, "pending": 4, "prog exec time": 220, "reproducing": 0, "rpc recv": 1018891836, "rpc sent": 135046208, "signal": 176480, "smash jobs": 0, "triage jobs": 0, "vm output": 3647758, "vm restarts [base]": 4, "vm restarts [new]": 10 } 2025/07/18 13:17:28 base crash: lost connection to test machine 2025/07/18 13:17:33 patched crashed: possible deadlock in __del_gendisk [need repro = true] 2025/07/18 13:17:33 scheduled a reproduction of 'possible deadlock in __del_gendisk' 2025/07/18 13:17:47 runner 2 connected 2025/07/18 13:17:50 runner 4 connected 2025/07/18 13:17:59 runner 1 connected 2025/07/18 
13:18:04 runner 0 connected 2025/07/18 13:18:07 runner 8 connected 2025/07/18 13:18:15 patched crashed: possible deadlock in ocfs2_acquire_dquot [need repro = true] 2025/07/18 13:18:15 scheduled a reproduction of 'possible deadlock in ocfs2_acquire_dquot' 2025/07/18 13:18:16 runner 7 connected 2025/07/18 13:18:17 runner 0 connected 2025/07/18 13:18:42 base crash: possible deadlock in __del_gendisk 2025/07/18 13:18:44 base crash: possible deadlock in __del_gendisk 2025/07/18 13:19:04 runner 6 connected 2025/07/18 13:19:05 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/07/18 13:19:05 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/18 13:19:14 base crash: lost connection to test machine 2025/07/18 13:19:23 runner 3 connected 2025/07/18 13:19:33 runner 2 connected 2025/07/18 13:19:46 runner 1 connected 2025/07/18 13:20:03 runner 0 connected 2025/07/18 13:21:21 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:22:02 runner 7 connected 2025/07/18 13:22:27 STAT { "buffer too small": 0, "candidate triage jobs": 55, "candidates": 65797, "corpus": 13101, "corpus [modified]": 282, "coverage": 221054, "distributor delayed": 12417, "distributor undelayed": 12417, "distributor violated": 25, "exec candidate": 13307, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 26596, "exec total [new]": 60682, "exec triage": 41401, "executor restarts": 194, "fault jobs": 0, "fuzzer jobs": 55, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 223187, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 13307, "no exec duration": 47118000000, "no exec requests": 357, "pending": 7, "prog exec time": 213, "reproducing": 0, "rpc recv": 1959687176, "rpc sent": 301818400, "signal": 217684, "smash jobs": 0, "triage jobs": 0, "vm output": 6386211, "vm restarts [base]": 8, "vm restarts [new]": 19 } 2025/07/18 13:22:45 patched crashed: possible deadlock in team_device_event [need repro = true] 2025/07/18 13:22:45 scheduled a reproduction of 'possible deadlock in team_device_event' 2025/07/18 13:23:07 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/07/18 13:23:07 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/07/18 13:23:17 base crash: possible deadlock in team_device_event 2025/07/18 13:23:26 runner 7 connected 2025/07/18 13:23:36 patched crashed: no output from test machine [need repro = false] 2025/07/18 13:23:56 runner 5 connected 2025/07/18 13:23:59 patched crashed: possible deadlock in team_device_event [need repro = false] 2025/07/18 13:24:06 runner 3 connected 2025/07/18 13:24:18 runner 2 connected 2025/07/18 13:24:34 patched crashed: possible deadlock in team_device_event [need repro = false] 2025/07/18 13:24:41 runner 1 connected 2025/07/18 13:24:43 base crash: possible deadlock in ocfs2_acquire_dquot 2025/07/18 13:25:15 runner 0 connected 2025/07/18 13:25:32 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = true] 2025/07/18 13:25:32 scheduled a reproduction of 'possible deadlock in blk_mq_update_nr_hw_queues' 2025/07/18 13:25:32 runner 1 connected 2025/07/18 13:26:21 runner 3 connected 2025/07/18 13:26:24 base crash: possible deadlock 
in team_del_slave 2025/07/18 13:26:38 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = true] 2025/07/18 13:26:39 scheduled a reproduction of 'possible deadlock in blk_mq_update_nr_hw_queues' 2025/07/18 13:27:05 runner 2 connected 2025/07/18 13:27:27 STAT { "buffer too small": 0, "candidate triage jobs": 51, "candidates": 58423, "corpus": 20342, "corpus [modified]": 375, "coverage": 247666, "distributor delayed": 19696, "distributor undelayed": 19696, "distributor violated": 26, "exec candidate": 20681, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 41565, "exec total [new]": 98526, "exec triage": 64565, "executor restarts": 254, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 250214, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 20681, "no exec duration": 47369000000, "no exec requests": 361, "pending": 11, "prog exec time": 162, "reproducing": 0, "rpc recv": 2764748944, "rpc sent": 483242896, "signal": 243479, "smash jobs": 0, "triage jobs": 0, "vm output": 9257471, "vm restarts [base]": 11, "vm restarts [new]": 25 } 2025/07/18 13:27:27 runner 6 connected 2025/07/18 13:28:05 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/07/18 13:28:05 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/07/18 13:28:19 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:28:54 runner 4 connected 2025/07/18 13:29:08 runner 5 connected 2025/07/18 13:29:20 base crash: possible deadlock in team_device_event 2025/07/18 13:30:03 runner 0 connected 2025/07/18 13:30:50 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 13:30:53 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:31:36 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 13:31:39 runner 3 connected 2025/07/18 13:31:42 runner 9 connected 2025/07/18 13:32:07 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 13:32:18 runner 0 connected 2025/07/18 13:32:27 STAT { "buffer too small": 0, "candidate triage jobs": 49, "candidates": 52159, "corpus": 26541, "corpus [modified]": 475, "coverage": 266581, "distributor delayed": 25151, "distributor undelayed": 25149, "distributor violated": 26, "exec candidate": 26945, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 55804, "exec total [new]": 130690, "exec triage": 83778, "executor restarts": 304, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 269440, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 26945, "no exec duration": 47744000000, "no exec requests": 365, "pending": 12, "prog exec time": 277, "reproducing": 0, "rpc recv": 3474921116, "rpc sent": 652576344, "signal": 262118, "smash jobs": 0, "triage jobs": 0, "vm output": 
13120052, "vm restarts [base]": 13, "vm restarts [new]": 30 } 2025/07/18 13:32:57 runner 1 connected 2025/07/18 13:33:15 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/07/18 13:33:15 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/07/18 13:33:26 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/07/18 13:33:26 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/07/18 13:34:04 runner 3 connected 2025/07/18 13:34:17 runner 4 connected 2025/07/18 13:34:17 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/18 13:34:41 base crash: possible deadlock in ocfs2_init_acl 2025/07/18 13:35:00 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 13:35:02 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:35:02 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:35:04 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:35:04 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:35:05 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:35:05 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:35:06 runner 1 connected 2025/07/18 13:35:06 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:35:06 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:35:11 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 13:35:29 runner 3 connected 2025/07/18 13:35:42 runner 5 connected 2025/07/18 13:35:45 runner 4 connected 2025/07/18 13:35:46 runner 0 connected 2025/07/18 13:35:49 runner 9 connected 2025/07/18 13:35:52 runner 2 connected 2025/07/18 13:35:52 runner 1 connected 2025/07/18 13:36:07 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/18 13:36:07 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/18 13:36:56 runner 6 connected 2025/07/18 13:37:20 patched crashed: possible deadlock in ocfs2_write_begin_nolock [need repro = true] 2025/07/18 13:37:20 scheduled a reproduction of 'possible deadlock in ocfs2_write_begin_nolock' 2025/07/18 13:37:22 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:37:22 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:37:27 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 47014, "corpus": 31631, "corpus [modified]": 537, "coverage": 279484, "distributor delayed": 30218, "distributor undelayed": 30217, "distributor violated": 39, "exec candidate": 32090, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 69626, "exec total [new]": 158052, "exec triage": 99461, "executor restarts": 395, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 282315, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32090, "no exec duration": 48099000000, "no exec requests": 369, "pending": 21, "prog exec time": 252, "reproducing": 0, "rpc recv": 4260436312, "rpc sent": 823227768, 
"signal": 275013, "smash jobs": 0, "triage jobs": 0, "vm output": 17033240, "vm restarts [base]": 16, "vm restarts [new]": 39 } 2025/07/18 13:37:29 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 13:38:02 runner 1 connected 2025/07/18 13:38:11 runner 7 connected 2025/07/18 13:38:11 runner 9 connected 2025/07/18 13:38:31 patched crashed: possible deadlock in ocfs2_reserve_local_alloc_bits [need repro = true] 2025/07/18 13:38:31 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/07/18 13:38:32 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/18 13:39:04 patched crashed: INFO: task hung in corrupted [need repro = true] 2025/07/18 13:39:04 scheduled a reproduction of 'INFO: task hung in corrupted' 2025/07/18 13:39:21 runner 3 connected 2025/07/18 13:39:21 runner 1 connected 2025/07/18 13:39:45 runner 0 connected 2025/07/18 13:40:51 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/07/18 13:40:51 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/07/18 13:40:54 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/18 13:41:00 patched crashed: kernel BUG in dnotify_free_mark [need repro = true] 2025/07/18 13:41:00 scheduled a reproduction of 'kernel BUG in dnotify_free_mark' 2025/07/18 13:41:00 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/18 13:41:01 patched crashed: kernel BUG in dnotify_free_mark [need repro = true] 2025/07/18 13:41:01 scheduled a reproduction of 'kernel BUG in dnotify_free_mark' 2025/07/18 13:41:02 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/18 13:41:03 base crash: possible deadlock in ocfs2_xattr_set 2025/07/18 13:41:32 runner 5 connected 2025/07/18 13:41:35 runner 7 connected 2025/07/18 13:41:41 runner 1 connected 2025/07/18 13:41:42 runner 6 connected 2025/07/18 13:41:50 runner 2 connected 2025/07/18 13:41:50 runner 9 connected 2025/07/18 13:41:50 runner 0 connected 2025/07/18 13:42:27 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 41590, "corpus": 36964, "corpus [modified]": 594, "coverage": 290937, "distributor delayed": 35455, "distributor undelayed": 35455, "distributor violated": 43, "exec candidate": 37514, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 83810, "exec total [new]": 189697, "exec triage": 116052, "executor restarts": 481, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 293839, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 37514, "no exec duration": 48112000000, "no exec requests": 370, "pending": 26, "prog exec time": 227, "reproducing": 0, "rpc recv": 5070252060, "rpc sent": 1011894712, "signal": 286337, "smash jobs": 0, "triage jobs": 0, "vm output": 20936558, "vm restarts [base]": 19, "vm restarts [new]": 49 } 2025/07/18 13:43:15 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:43:53 base crash: lost connection to test machine 2025/07/18 13:43:57 runner 0 connected 2025/07/18 13:44:21 patched crashed: kernel BUG in txUnlock [need repro = true] 
2025/07/18 13:44:21 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/18 13:44:22 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/07/18 13:44:22 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/18 13:44:23 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/07/18 13:44:23 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/07/18 13:44:41 runner 1 connected 2025/07/18 13:45:10 runner 9 connected 2025/07/18 13:45:11 runner 5 connected 2025/07/18 13:45:11 runner 6 connected 2025/07/18 13:45:21 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/07/18 13:45:21 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/07/18 13:46:10 runner 0 connected 2025/07/18 13:46:12 base crash: kernel BUG in txUnlock 2025/07/18 13:47:00 runner 0 connected 2025/07/18 13:47:27 STAT { "buffer too small": 0, "candidate triage jobs": 21, "candidates": 37828, "corpus": 40626, "corpus [modified]": 646, "coverage": 299214, "distributor delayed": 38771, "distributor undelayed": 38771, "distributor violated": 43, "exec candidate": 41276, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 98878, "exec total [new]": 224552, "exec triage": 128097, "executor restarts": 540, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 302257, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41276, "no exec duration": 48193000000, "no exec requests": 378, "pending": 30, "prog exec time": 225, "reproducing": 0, "rpc recv": 5595553452, "rpc sent": 1251271872, "signal": 294322, "smash jobs": 0, "triage jobs": 0, "vm output": 24667493, "vm restarts [base]": 21, "vm restarts [new]": 54 } 2025/07/18 13:47:33 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/18 13:48:15 runner 3 connected 2025/07/18 13:48:20 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/07/18 13:49:01 runner 6 connected 2025/07/18 13:49:13 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:49:54 runner 3 connected 2025/07/18 13:50:17 base crash: possible deadlock in attr_data_get_block 2025/07/18 13:51:06 runner 0 connected 2025/07/18 13:51:39 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = false] 2025/07/18 13:51:51 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/07/18 13:51:51 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/07/18 13:52:01 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:52:27 STAT { "buffer too small": 0, "candidate triage jobs": 24, "candidates": 35646, "corpus": 42678, "corpus [modified]": 700, "coverage": 303908, "distributor delayed": 40631, "distributor undelayed": 40631, "distributor violated": 43, "exec candidate": 43458, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 113815, "exec total [new]": 258079, "exec triage": 135077, "executor restarts": 609, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 307269, 
"minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43453, "no exec duration": 48664000000, "no exec requests": 382, "pending": 31, "prog exec time": 285, "reproducing": 0, "rpc recv": 5935648760, "rpc sent": 1446057104, "signal": 298960, "smash jobs": 0, "triage jobs": 0, "vm output": 28517140, "vm restarts [base]": 23, "vm restarts [new]": 56 } 2025/07/18 13:52:27 runner 8 connected 2025/07/18 13:52:40 runner 7 connected 2025/07/18 13:52:43 runner 6 connected 2025/07/18 13:53:06 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown 2025/07/18 13:53:08 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/07/18 13:53:08 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/07/18 13:53:48 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 13:53:49 runner 7 connected 2025/07/18 13:53:54 runner 2 connected 2025/07/18 13:53:58 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 13:54:26 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/07/18 13:54:26 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/18 13:54:28 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/07/18 13:54:36 runner 0 connected 2025/07/18 13:54:39 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/07/18 13:54:47 runner 5 connected 2025/07/18 13:55:07 runner 8 connected 2025/07/18 13:55:09 runner 3 connected 2025/07/18 13:55:23 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13151: connect: connection refused 2025/07/18 13:55:23 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13151: connect: connection refused 2025/07/18 13:55:23 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:30219: connect: connection refused 2025/07/18 13:55:23 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:30219: connect: connection refused 2025/07/18 13:55:25 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6634: connect: connection refused 2025/07/18 13:55:25 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:6634: connect: connection refused 2025/07/18 13:55:27 runner 2 connected 2025/07/18 13:55:33 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:55:33 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:55:35 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 13:55:57 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:58755: connect: connection refused 2025/07/18 13:55:57 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:58755: connect: connection refused 2025/07/18 13:56:04 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50266: connect: connection refused 2025/07/18 13:56:04 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50266: connect: connection refused 2025/07/18 13:56:07 base crash: lost connection to test machine 2025/07/18 13:56:14 runner 0 connected 2025/07/18 13:56:14 base crash: lost connection to test machine 
2025/07/18 13:56:21 runner 1 connected 2025/07/18 13:56:24 runner 5 connected 2025/07/18 13:56:56 runner 3 connected 2025/07/18 13:57:04 runner 2 connected 2025/07/18 13:57:06 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/07/18 13:57:27 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 34214, "corpus": 44039, "corpus [modified]": 743, "coverage": 306608, "distributor delayed": 41928, "distributor undelayed": 41928, "distributor violated": 43, "exec candidate": 44890, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 23, "exec seeds": 0, "exec smash": 0, "exec total [base]": 127220, "exec total [new]": 284442, "exec triage": 139485, "executor restarts": 716, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 310002, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44863, "no exec duration": 48677000000, "no exec requests": 384, "pending": 33, "prog exec time": 258, "reproducing": 0, "rpc recv": 6555118920, "rpc sent": 1638961920, "signal": 301662, "smash jobs": 0, "triage jobs": 0, "vm output": 32388442, "vm restarts [base]": 26, "vm restarts [new]": 68 } 2025/07/18 13:57:48 runner 0 connected 2025/07/18 13:58:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/18 13:58:41 base crash: possible deadlock in input_inject_event 2025/07/18 13:58:55 runner 2 connected 2025/07/18 13:59:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/18 13:59:30 runner 1 connected 2025/07/18 13:59:43 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/07/18 13:59:43 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/07/18 14:00:12 runner 5 connected 2025/07/18 14:00:25 runner 0 connected 2025/07/18 14:00:39 base crash: WARNING in dbAdjTree 2025/07/18 14:01:27 runner 1 connected 2025/07/18 14:02:27 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 8879, "corpus": 44693, "corpus [modified]": 763, "coverage": 307871, "distributor delayed": 42647, "distributor undelayed": 42646, "distributor violated": 43, "exec candidate": 70225, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 27, "exec seeds": 0, "exec smash": 0, "exec total [base]": 141494, "exec total [new]": 322962, "exec triage": 142119, "executor restarts": 786, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 311599, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45629, "no exec duration": 48990000000, "no exec requests": 391, "pending": 34, "prog exec time": 264, "reproducing": 0, "rpc recv": 6835274020, "rpc sent": 1831255568, "signal": 302904, "smash jobs": 0, "triage jobs": 0, "vm output": 35666888, "vm restarts [base]": 29, "vm restarts [new]": 71 } 2025/07/18 14:02:57 triaged 93.1% of the corpus 2025/07/18 14:02:57 starting bug reproductions 2025/07/18 14:02:57 starting bug reproductions (max 10 VMs, 7 repros) 2025/07/18 14:02:57 reproduction of "possible 
deadlock in __del_gendisk" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in __del_gendisk" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in __del_gendisk" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in ocfs2_acquire_dquot" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in team_device_event" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in blk_mq_update_nr_hw_queues" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in blk_mq_update_nr_hw_queues" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/07/18 14:02:57 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/07/18 14:02:57 start reproducing 'INFO: task hung in corrupted' 2025/07/18 14:02:57 start reproducing 'kernel BUG in jfs_evict_inode' 2025/07/18 14:02:57 start reproducing 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/18 14:02:57 start reproducing 'possible deadlock in ntfs_fiemap' 2025/07/18 14:02:57 start reproducing 'WARNING in io_ring_exit_work' 2025/07/18 14:02:57 start reproducing 'possible deadlock in ocfs2_write_begin_nolock' 2025/07/18 14:02:57 start reproducing 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/07/18 14:04:42 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:07:27 STAT { "buffer too small": 0, "candidate triage jobs": 12, "candidates": 5420, "corpus": 44736, "corpus [modified]": 765, "coverage": 307948, "distributor delayed": 42712, "distributor undelayed": 42712, "distributor violated": 43, "exec candidate": 73684, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 27, "exec seeds": 0, "exec smash": 0, "exec total [base]": 147071, "exec total [new]": 326632, "exec triage": 142328, "executor restarts": 793, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311726, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45691, "no exec duration": 49211000000, "no exec requests": 394, "pending": 15, "prog exec time": 0, "reproducing": 7, "rpc recv": 6844771088, "rpc sent": 1859056744, "signal": 302981, "smash jobs": 0, "triage jobs": 0, "vm output": 38696158, "vm restarts [base]": 29, "vm restarts [new]": 71 } 2025/07/18 14:09:34 base crash: no output from test machine 2025/07/18 14:09:34 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:09:36 base crash: no output from test machine 
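Note: every "failed to symbolize report" line above has the same root cause: the reporter invokes scripts/get_maintainer.pl by relative path, and that script does not exist under the directory the command is started from (presumably meant to be the configured kernel source tree), so fork/exec fails before the script can run; the reproductions themselves still proceed, as the "found repro" lines show. A minimal pre-flight check in Go (an illustrative sketch under that working-directory assumption, not syzkaller code) that runs the exact command line from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const script = "scripts/get_maintainer.pl" // relative path, exactly as in the log
	if _, err := os.Stat(script); err != nil {
		// this is the condition behind "fork/exec ...: no such file or directory"
		fmt.Println("pre-flight check failed:", err)
		return
	}
	// same command line the reporter attempted for the jfs crash
	out, err := exec.Command(script, "--git-min-percent=15", "-f", "fs/jfs/inode.c").CombinedOutput()
	fmt.Printf("exit err=%v\n%s", err, out)
}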
2025/07/18 14:09:38 base crash: no output from test machine 2025/07/18 14:09:47 base crash: no output from test machine 2025/07/18 14:10:15 runner 2 connected 2025/07/18 14:10:21 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:10:24 runner 3 connected 2025/07/18 14:10:26 runner 0 connected 2025/07/18 14:10:35 runner 1 connected 2025/07/18 14:11:12 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:11:12 repro finished 'kernel BUG in jfs_evict_inode', repro=true crepro=false desc='kernel BUG in jfs_evict_inode' hub=false from_dashboard=false 2025/07/18 14:11:12 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/07/18 14:11:12 found repro for "kernel BUG in jfs_evict_inode" (orig title: "-SAME-", reliability: 1), took 8.25 minutes 2025/07/18 14:11:12 start reproducing 'kernel BUG in dnotify_free_mark' 2025/07/18 14:11:12 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/07/18 14:11:12 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/07/18 14:11:12 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/07/18 14:11:12 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/07/18 14:11:12 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/07/18 14:11:12 "kernel BUG in jfs_evict_inode": saved crash log into 1752847872.crash.log 2025/07/18 14:11:12 "kernel BUG in jfs_evict_inode": saved repro log into 1752847872.repro.log 2025/07/18 14:11:50 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:12:22 attempt #0 to run "kernel BUG in jfs_evict_inode" on base: crashed with kernel BUG in jfs_evict_inode 2025/07/18 14:12:22 crashes both: kernel BUG in jfs_evict_inode / kernel BUG in jfs_evict_inode 2025/07/18 14:12:27 STAT { "buffer too small": 0, "candidate triage jobs": 12, "candidates": 5420, "corpus": 44736, "corpus [modified]": 765, "coverage": 307948, "distributor delayed": 42712, "distributor undelayed": 42712, "distributor violated": 43, "exec candidate": 73684, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 27, "exec seeds": 0, "exec smash": 0, "exec total [base]": 147071, "exec total [new]": 326632, "exec triage": 142328, "executor restarts": 793, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311726, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45691, "no exec duration": 49211000000, "no exec requests": 394, "pending": 8, "prog exec time": 0, "reproducing": 7, "rpc recv": 6967972808, "rpc sent": 1859057864, "signal": 302981, "smash jobs": 0, "triage jobs": 0, "vm 
output": 42844734, "vm restarts [base]": 33, "vm restarts [new]": 71 } 2025/07/18 14:13:04 runner 0 connected 2025/07/18 14:14:52 repro finished 'possible deadlock in ocfs2_try_remove_refcount_tree', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 14:14:52 failed repro for "possible deadlock in ocfs2_try_remove_refcount_tree", err=%!s() 2025/07/18 14:14:52 start reproducing 'kernel BUG in jfs_evict_inode' 2025/07/18 14:14:52 "possible deadlock in ocfs2_try_remove_refcount_tree": saved crash log into 1752848092.crash.log 2025/07/18 14:14:52 "possible deadlock in ocfs2_try_remove_refcount_tree": saved repro log into 1752848092.repro.log 2025/07/18 14:15:14 repro finished 'possible deadlock in ocfs2_reserve_local_alloc_bits', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 14:15:14 failed repro for "possible deadlock in ocfs2_reserve_local_alloc_bits", err=%!s() 2025/07/18 14:15:14 start reproducing 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/07/18 14:15:14 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved crash log into 1752848114.crash.log 2025/07/18 14:15:14 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved repro log into 1752848114.repro.log 2025/07/18 14:15:15 base crash: no output from test machine 2025/07/18 14:15:24 base crash: no output from test machine 2025/07/18 14:15:35 base crash: no output from test machine 2025/07/18 14:16:02 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:16:03 runner 2 connected 2025/07/18 14:16:05 runner 3 connected 2025/07/18 14:16:17 runner 1 connected 2025/07/18 14:16:33 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:17:04 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:17:27 STAT { "buffer too small": 0, "candidate triage jobs": 12, "candidates": 5420, "corpus": 44736, "corpus [modified]": 765, "coverage": 307948, "distributor delayed": 42712, "distributor undelayed": 42712, "distributor violated": 43, "exec candidate": 73684, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 27, "exec seeds": 0, "exec smash": 0, "exec total [base]": 147071, "exec total [new]": 326632, "exec triage": 142328, "executor restarts": 793, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311726, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45691, "no exec duration": 49211000000, "no exec requests": 394, "pending": 7, "prog exec time": 0, "reproducing": 7, "rpc recv": 7091174528, "rpc sent": 1859058984, "signal": 302981, "smash jobs": 0, "triage jobs": 0, "vm output": 46679460, "vm 
restarts [base]": 37, "vm restarts [new]": 71 } 2025/07/18 14:17:33 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:17:33 repro finished 'kernel BUG in dnotify_free_mark', repro=true crepro=false desc='kernel BUG in dnotify_free_mark' hub=false from_dashboard=false 2025/07/18 14:17:33 start reproducing 'kernel BUG in dnotify_free_mark' 2025/07/18 14:17:33 found repro for "kernel BUG in dnotify_free_mark" (orig title: "-SAME-", reliability: 1), took 6.21 minutes 2025/07/18 14:17:33 "kernel BUG in dnotify_free_mark": saved crash log into 1752848253.crash.log 2025/07/18 14:17:33 "kernel BUG in dnotify_free_mark": saved repro log into 1752848253.repro.log 2025/07/18 14:18:15 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:18:45 attempt #0 to run "kernel BUG in dnotify_free_mark" on base: crashed with kernel BUG in dnotify_free_mark 2025/07/18 14:18:45 crashes both: kernel BUG in dnotify_free_mark / kernel BUG in dnotify_free_mark 2025/07/18 14:19:26 runner 0 connected 2025/07/18 14:21:03 base crash: no output from test machine 2025/07/18 14:21:05 base crash: no output from test machine 2025/07/18 14:21:13 repro finished 'possible deadlock in ocfs2_write_begin_nolock', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 14:21:13 failed repro for "possible deadlock in ocfs2_write_begin_nolock", err=%!s() 2025/07/18 14:21:13 "possible deadlock in ocfs2_write_begin_nolock": saved crash log into 1752848473.crash.log 2025/07/18 14:21:13 "possible deadlock in ocfs2_write_begin_nolock": saved repro log into 1752848473.repro.log 2025/07/18 14:21:16 base crash: no output from test machine 2025/07/18 14:21:51 runner 2 connected 2025/07/18 14:21:54 runner 3 connected 2025/07/18 14:21:55 runner 1 connected 2025/07/18 14:21:58 runner 1 connected 2025/07/18 14:22:19 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:22:26 runner 0 connected 2025/07/18 14:22:27 STAT { "buffer too small": 0, "candidate triage jobs": 14, "candidates": 5184, "corpus": 44742, "corpus [modified]": 765, "coverage": 307961, "distributor delayed": 42726, "distributor undelayed": 42712, "distributor violated": 43, "exec candidate": 73920, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 27, "exec seeds": 0, "exec smash": 0, "exec total [base]": 147257, "exec total [new]": 326883, "exec triage": 142340, "executor restarts": 796, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 311735, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 22, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45699, "no exec duration": 83403000000, "no exec requests": 511, 
"pending": 6, "prog exec time": 196, "reproducing": 6, "rpc recv": 7245888736, "rpc sent": 1863113496, "signal": 302994, "smash jobs": 0, "triage jobs": 0, "vm output": 50115792, "vm restarts [base]": 41, "vm restarts [new]": 73 } 2025/07/18 14:22:48 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:23:25 reproducing crash 'kernel BUG in dnotify_free_mark': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/notify/dnotify/dnotify.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:23:25 repro finished 'kernel BUG in dnotify_free_mark', repro=true crepro=false desc='kernel BUG in dnotify_free_mark' hub=false from_dashboard=false 2025/07/18 14:23:25 found repro for "kernel BUG in dnotify_free_mark" (orig title: "-SAME-", reliability: 1), took 5.73 minutes 2025/07/18 14:23:25 "kernel BUG in dnotify_free_mark": saved crash log into 1752848605.crash.log 2025/07/18 14:23:25 "kernel BUG in dnotify_free_mark": saved repro log into 1752848605.repro.log 2025/07/18 14:24:44 attempt #0 to run "kernel BUG in dnotify_free_mark" on base: crashed with kernel BUG in dnotify_free_mark 2025/07/18 14:24:44 crashes both: kernel BUG in dnotify_free_mark / kernel BUG in dnotify_free_mark 2025/07/18 14:25:32 runner 0 connected 2025/07/18 14:27:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44881, "corpus [modified]": 766, "coverage": 308206, "distributor delayed": 42858, "distributor undelayed": 42858, "distributor violated": 78, "exec candidate": 79104, "exec collide": 309, "exec fuzz": 529, "exec gen": 30, "exec hints": 64, "exec inject": 0, "exec minimize": 106, "exec retries": 27, "exec seeds": 18, "exec smash": 107, "exec total [base]": 154105, "exec total [new]": 333738, "exec triage": 142846, "executor restarts": 825, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 2, "max signal": 311992, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 135, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45847, "no exec duration": 960143000000, "no exec requests": 3438, "pending": 6, "prog exec time": 326, "reproducing": 5, "rpc recv": 7329207200, "rpc sent": 1935688520, "signal": 303239, "smash jobs": 1, "triage jobs": 2, "vm output": 54234918, "vm restarts [base]": 42, "vm restarts [new]": 73 } 2025/07/18 14:28:33 reproducing crash 'kernel BUG in jfs_evict_inode': reproducer is too unreliable: 0.10 2025/07/18 14:28:33 repro finished 'kernel BUG in jfs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 14:28:33 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2025/07/18 14:28:33 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2025/07/18 14:28:33 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2025/07/18 14:28:33 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2025/07/18 14:28:33 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2025/07/18 14:28:33 reproduction of "kernel BUG in 
jfs_evict_inode" aborted: it's no longer needed 2025/07/18 14:28:33 failed repro for "kernel BUG in jfs_evict_inode", err=%!s() 2025/07/18 14:28:33 "kernel BUG in jfs_evict_inode": saved crash log into 1752848913.crash.log 2025/07/18 14:28:33 "kernel BUG in jfs_evict_inode": saved repro log into 1752848913.repro.log 2025/07/18 14:29:07 runner 2 connected 2025/07/18 14:29:09 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:29:34 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:29:54 runner 3 connected 2025/07/18 14:29:58 runner 0 connected 2025/07/18 14:30:16 runner 1 connected 2025/07/18 14:30:43 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:30:46 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:31:32 runner 2 connected 2025/07/18 14:31:35 runner 1 connected 2025/07/18 14:31:49 patched crashed: kernel BUG in may_open [need repro = true] 2025/07/18 14:31:49 scheduled a reproduction of 'kernel BUG in may_open' 2025/07/18 14:31:49 start reproducing 'kernel BUG in may_open' 2025/07/18 14:32:15 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:32:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44904, "corpus [modified]": 768, "coverage": 308301, "distributor delayed": 42954, "distributor undelayed": 42941, "distributor violated": 80, "exec candidate": 79104, "exec collide": 1079, "exec fuzz": 1862, "exec gen": 106, "exec hints": 446, "exec inject": 0, "exec minimize": 543, "exec retries": 29, "exec seeds": 91, "exec smash": 594, "exec total [base]": 157801, "exec total [new]": 337431, "exec triage": 142971, "executor restarts": 882, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 6, "max signal": 312157, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 490, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45904, "no exec duration": 1952232000000, "no exec requests": 6105, "pending": 0, "prog exec time": 477, "reproducing": 5, "rpc recv": 7538933872, "rpc sent": 2034043088, "signal": 303334, "smash jobs": 7, "triage jobs": 15, "vm output": 58877749, "vm restarts [base]": 42, "vm restarts [new]": 79 } 2025/07/18 14:32:30 runner 3 connected 2025/07/18 14:33:20 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:34:04 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:34:32 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:35:16 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:35:19 base crash: lost connection to test machine 2025/07/18 14:35:20 base crash: kernel BUG in jfs_evict_inode 2025/07/18 14:35:42 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:36:07 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:36:07 runner 1 connected 2025/07/18 14:36:09 runner 2 connected 2025/07/18 14:36:21 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/07/18 14:37:03 runner 0 connected 2025/07/18 14:37:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44945, "corpus [modified]": 769, "coverage": 308358, "distributor delayed": 43071, "distributor undelayed": 43071, "distributor violated": 86, "exec candidate": 79104, "exec collide": 1901, "exec fuzz": 3415, "exec gen": 187, "exec hints": 1730, "exec inject": 0, "exec minimize": 1600, "exec retries": 29, "exec seeds": 223, "exec smash": 1631, "exec total [base]": 163308, "exec total [new]": 343635, "exec triage": 143207, "executor restarts": 947, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 16, "max signal": 312370, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1214, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45984, "no exec duration": 2509046000000, "no exec requests": 8038, "pending": 0, "prog exec time": 228, "reproducing": 5, "rpc recv": 7710505080, "rpc sent": 2174583832, "signal": 303390, "smash jobs": 3, "triage jobs": 2, "vm output": 63453564, "vm restarts [base]": 45, "vm restarts [new]": 80 } 2025/07/18 14:37:32 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:37:57 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:38:22 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:38:22 repro finished 'kernel BUG in may_open', repro=true crepro=false desc='kernel BUG in may_open' hub=false from_dashboard=false 2025/07/18 14:38:22 found repro for "kernel BUG in may_open" (orig title: "-SAME-", reliability: 1), took 6.53 minutes 2025/07/18 14:38:22 "kernel BUG in may_open": saved crash log into 1752849502.crash.log 2025/07/18 14:38:22 "kernel BUG in may_open": saved repro log into 1752849502.repro.log 2025/07/18 14:38:45 runner 0 connected 2025/07/18 14:39:35 attempt #0 to run "kernel BUG in may_open" 
on base: crashed with kernel BUG in may_open 2025/07/18 14:39:35 crashes both: kernel BUG in may_open / kernel BUG in may_open 2025/07/18 14:40:24 runner 0 connected 2025/07/18 14:41:09 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 14:41:52 runner 2 connected 2025/07/18 14:42:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 44966, "corpus [modified]": 769, "coverage": 308389, "distributor delayed": 43211, "distributor undelayed": 43211, "distributor violated": 86, "exec candidate": 79104, "exec collide": 4170, "exec fuzz": 7642, "exec gen": 419, "exec hints": 5044, "exec inject": 0, "exec minimize": 2165, "exec retries": 29, "exec seeds": 298, "exec smash": 2188, "exec total [base]": 173850, "exec total [new]": 355139, "exec triage": 143468, "executor restarts": 992, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 7, "max signal": 312545, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1547, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46072, "no exec duration": 2528711000000, "no exec requests": 8071, "pending": 0, "prog exec time": 277, "reproducing": 4, "rpc recv": 7841790952, "rpc sent": 2401111664, "signal": 303421, "smash jobs": 2, "triage jobs": 3, "vm output": 67054713, "vm restarts [base]": 47, "vm restarts [new]": 81 } 2025/07/18 14:45:11 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:36172: connect: connection refused 2025/07/18 14:45:11 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:36172: connect: connection refused 2025/07/18 14:45:21 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:45:22 base crash: lost connection to test machine 2025/07/18 14:46:02 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = false] 2025/07/18 14:46:02 runner 0 connected 2025/07/18 14:46:04 runner 3 connected 2025/07/18 14:46:44 runner 1 connected 2025/07/18 14:47:26 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 14:47:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45001, "corpus [modified]": 769, "coverage": 308479, "distributor delayed": 43376, "distributor undelayed": 43370, "distributor violated": 97, "exec candidate": 79104, "exec collide": 5778, "exec fuzz": 10581, "exec gen": 567, "exec hints": 7174, "exec inject": 0, "exec minimize": 3015, "exec retries": 29, "exec seeds": 403, "exec smash": 3094, "exec total [base]": 184383, "exec total [new]": 364071, "exec triage": 143717, "executor restarts": 1042, "fault jobs": 0, "fuzzer jobs": 23, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 7, "max signal": 312757, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2000, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46163, "no exec duration": 2536202000000, "no exec requests": 8085, "pending": 0, "prog exec time": 338, "reproducing": 4, "rpc recv": 7975088964, "rpc sent": 2567520744, "signal": 303490, "smash jobs": 3, "triage jobs": 13, "vm output": 70666124, "vm restarts [base]": 48, "vm restarts [new]": 83 } 2025/07/18 14:48:15 runner 3 connected 2025/07/18 14:48:41 base crash: 
possible deadlock in __del_gendisk 2025/07/18 14:48:42 base crash: possible deadlock in __del_gendisk 2025/07/18 14:48:43 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 14:48:43 failed repro for "possible deadlock in ntfs_fiemap", err=%!s() 2025/07/18 14:48:43 "possible deadlock in ntfs_fiemap": saved crash log into 1752850123.crash.log 2025/07/18 14:48:43 "possible deadlock in ntfs_fiemap": saved repro log into 1752850123.repro.log 2025/07/18 14:48:44 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:48:44 runner 5 connected 2025/07/18 14:49:01 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 14:49:23 runner 1 connected 2025/07/18 14:49:26 runner 1 connected 2025/07/18 14:49:27 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 14:49:27 failed repro for "INFO: task hung in corrupted", err=%!s() 2025/07/18 14:49:27 "INFO: task hung in corrupted": saved crash log into 1752850167.crash.log 2025/07/18 14:49:27 "INFO: task hung in corrupted": saved repro log into 1752850167.repro.log 2025/07/18 14:49:29 runner 0 connected 2025/07/18 14:49:32 runner 6 connected 2025/07/18 14:49:42 runner 0 connected 2025/07/18 14:50:02 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/07/18 14:50:15 runner 4 connected 2025/07/18 14:50:28 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:40579: connect: connection refused 2025/07/18 14:50:28 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:40579: connect: connection refused 2025/07/18 14:50:38 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:50:44 runner 6 connected 2025/07/18 14:50:46 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:50:54 base crash: lost connection to test machine 2025/07/18 14:50:59 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = false] 2025/07/18 14:51:20 runner 3 connected 2025/07/18 14:51:36 runner 0 connected 2025/07/18 14:51:41 runner 1 connected 2025/07/18 14:51:42 runner 2 connected 2025/07/18 14:52:01 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 14:52:02 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/18 14:52:02 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/18 14:52:16 base crash: possible deadlock in team_device_event 2025/07/18 14:52:21 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/07/18 14:52:27 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "corpus": 45035, "corpus [modified]": 769, "coverage": 308530, "distributor delayed": 43508, "distributor undelayed": 43508, "distributor violated": 103, "exec candidate": 79104, "exec collide": 7701, "exec fuzz": 14155, "exec gen": 763, "exec hints": 8696, "exec inject": 0, "exec minimize": 4005, "exec retries": 31, "exec seeds": 469, "exec smash": 3865, "exec total [base]": 191152, "exec total [new]": 373350, "exec triage": 143945, "executor restarts": 1125, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 6, "max signal": 312877, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2536, "minimize: filename": 0, "minimize: integer": 0, 
"minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46242, "no exec duration": 2690031000000, "no exec requests": 8603, "pending": 0, "prog exec time": 321, "reproducing": 2, "rpc recv": 8413690844, "rpc sent": 2750477656, "signal": 303540, "smash jobs": 5, "triage jobs": 7, "vm output": 73834387, "vm restarts [base]": 51, "vm restarts [new]": 93 } 2025/07/18 14:52:44 runner 3 connected 2025/07/18 14:52:50 runner 2 connected 2025/07/18 14:52:51 runner 1 connected 2025/07/18 14:53:03 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = false] 2025/07/18 14:53:06 runner 0 connected 2025/07/18 14:53:09 runner 0 connected 2025/07/18 14:53:10 base crash: possible deadlock in ocfs2_init_acl 2025/07/18 14:53:18 patched crashed: WARNING in comedi_unlocked_ioctl [need repro = true] 2025/07/18 14:53:18 scheduled a reproduction of 'WARNING in comedi_unlocked_ioctl' 2025/07/18 14:53:18 start reproducing 'WARNING in comedi_unlocked_ioctl' 2025/07/18 14:53:43 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:53:51 runner 4 connected 2025/07/18 14:53:59 runner 2 connected 2025/07/18 14:54:07 runner 2 connected 2025/07/18 14:54:25 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:54:31 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:55:10 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:55:13 runner 5 connected 2025/07/18 14:55:32 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/18 14:55:57 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:56:14 runner 3 connected 2025/07/18 14:56:24 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:57:14 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:57:16 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/18 14:57:27 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "corpus": 45058, "corpus [modified]": 769, "coverage": 308567, "distributor delayed": 43653, "distributor undelayed": 43653, "distributor violated": 104, "exec candidate": 79104, "exec collide": 10720, "exec fuzz": 20054, "exec gen": 1063, "exec hints": 9439, "exec inject": 0, "exec minimize": 4849, "exec retries": 
32, "exec seeds": 522, "exec smash": 4448, "exec total [base]": 199241, "exec total [new]": 385048, "exec triage": 144202, "executor restarts": 1221, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 313042, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3064, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46326, "no exec duration": 2698197000000, "no exec requests": 8624, "pending": 0, "prog exec time": 341, "reproducing": 3, "rpc recv": 8767918356, "rpc sent": 2999136248, "signal": 303576, "smash jobs": 4, "triage jobs": 3, "vm output": 77202184, "vm restarts [base]": 56, "vm restarts [new]": 98 } 2025/07/18 14:57:43 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:58:04 runner 1 connected 2025/07/18 14:58:24 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:58:45 base crash: possible deadlock in team_del_slave 2025/07/18 14:58:55 base crash: lost connection to test machine 2025/07/18 14:58:58 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 14:58:59 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 14:59:28 runner 3 connected 2025/07/18 14:59:43 runner 1 connected 2025/07/18 14:59:48 runner 5 connected 2025/07/18 15:00:05 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:00:06 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = false] 2025/07/18 15:00:30 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:00:48 runner 3 connected 2025/07/18 15:00:55 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:01:44 runner 1 connected 2025/07/18 15:01:58 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62964: connect: connection refused 2025/07/18 15:01:58 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:62964: connect: connection refused 2025/07/18 15:02:08 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:54532: connect: connection refused 2025/07/18 15:02:08 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:54532: connect: connection refused 2025/07/18 15:02:08 base crash: lost connection to test machine 2025/07/18 15:02:18 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 
15:02:23 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:17283: connect: connection refused 2025/07/18 15:02:23 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:17283: connect: connection refused 2025/07/18 15:02:27 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "corpus": 45083, "corpus [modified]": 769, "coverage": 308600, "distributor delayed": 43806, "distributor undelayed": 43803, "distributor violated": 104, "exec candidate": 79104, "exec collide": 13232, "exec fuzz": 24947, "exec gen": 1315, "exec hints": 10669, "exec inject": 0, "exec minimize": 5388, "exec retries": 35, "exec seeds": 602, "exec smash": 5039, "exec total [base]": 206072, "exec total [new]": 395417, "exec triage": 144473, "executor restarts": 1322, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 313469, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3367, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46420, "no exec duration": 2712996000000, "no exec requests": 8648, "pending": 0, "prog exec time": 898, "reproducing": 3, "rpc recv": 8993698344, "rpc sent": 3177884560, "signal": 303605, "smash jobs": 7, "triage jobs": 11, "vm output": 80750341, "vm restarts [base]": 59, "vm restarts [new]": 101 } 2025/07/18 15:02:33 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50146: connect: connection refused 2025/07/18 15:02:33 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:50146: connect: connection refused 2025/07/18 15:02:33 base crash: lost connection to test machine 2025/07/18 15:02:43 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:02:44 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60567: connect: connection refused 2025/07/18 15:02:44 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:60567: connect: connection refused 2025/07/18 15:02:54 base crash: lost connection to test machine 2025/07/18 15:02:58 runner 2 connected 2025/07/18 15:02:59 runner 6 connected 2025/07/18 15:03:15 runner 1 connected 2025/07/18 15:03:24 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:03:24 runner 1 connected 2025/07/18 15:03:43 runner 3 connected 2025/07/18 15:04:05 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:04:42 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:05:29 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 
2025/07/18 15:05:54 reproducing crash 'WARNING in comedi_unlocked_ioctl': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/page_alloc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:05:54 repro finished 'WARNING in comedi_unlocked_ioctl', repro=true crepro=false desc='WARNING in comedi_unlocked_ioctl' hub=false from_dashboard=false 2025/07/18 15:05:54 found repro for "WARNING in comedi_unlocked_ioctl" (orig title: "-SAME-", reliability: 1), took 12.60 minutes 2025/07/18 15:05:54 "WARNING in comedi_unlocked_ioctl": saved crash log into 1752851154.crash.log 2025/07/18 15:05:54 "WARNING in comedi_unlocked_ioctl": saved repro log into 1752851154.repro.log 2025/07/18 15:06:10 patched crashed: possible deadlock in blk_mq_update_nr_hw_queues [need repro = false] 2025/07/18 15:06:59 runner 2 connected 2025/07/18 15:07:06 attempt #0 to run "WARNING in comedi_unlocked_ioctl" on base: crashed with WARNING in comedi_unlocked_ioctl 2025/07/18 15:07:06 crashes both: WARNING in comedi_unlocked_ioctl / WARNING in comedi_unlocked_ioctl 2025/07/18 15:07:16 patched crashed: WARNING in __rate_control_send_low [need repro = true] 2025/07/18 15:07:16 scheduled a reproduction of 'WARNING in __rate_control_send_low' 2025/07/18 15:07:16 start reproducing 'WARNING in __rate_control_send_low' 2025/07/18 15:07:27 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "corpus": 45102, "corpus [modified]": 770, "coverage": 308635, "distributor delayed": 43951, "distributor undelayed": 43951, "distributor violated": 109, "exec candidate": 79104, "exec collide": 15918, "exec fuzz": 30056, "exec gen": 1572, "exec hints": 11818, "exec inject": 0, "exec minimize": 6133, "exec retries": 38, "exec seeds": 660, "exec smash": 5529, "exec total [base]": 212437, "exec total [new]": 406206, "exec triage": 144764, "executor restarts": 1416, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 6, "max signal": 313919, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3826, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46511, "no exec duration": 2737622000000, "no exec requests": 8685, "pending": 0, "prog exec time": 353, "reproducing": 3, "rpc recv": 9229811012, "rpc sent": 3380475864, "signal": 303635, "smash jobs": 1, "triage jobs": 5, "vm output": 84625245, "vm restarts [base]": 62, "vm restarts [new]": 104 } 2025/07/18 15:07:55 runner 0 connected 2025/07/18 15:08:05 runner 6 connected 2025/07/18 15:08:44 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 15:09:34 runner 3 connected 2025/07/18 15:10:19 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:10:51 patched crashed: WARNING in cm109_urb_irq_callback/usb_submit_urb [need repro = true] 2025/07/18 15:10:51 scheduled a reproduction of 'WARNING in cm109_urb_irq_callback/usb_submit_urb' 2025/07/18 15:10:51 start reproducing 'WARNING in cm109_urb_irq_callback/usb_submit_urb' 2025/07/18 15:11:01 runner 6 connected 2025/07/18 15:11:03 patched crashed: WARNING in cm109_urb_irq_callback/usb_submit_urb [need repro = true] 2025/07/18 15:11:03 scheduled a reproduction of 'WARNING in cm109_urb_irq_callback/usb_submit_urb' 2025/07/18 15:11:10 base crash: lost connection to test machine 
2025/07/18 15:11:51 runner 3 connected 2025/07/18 15:11:59 runner 0 connected 2025/07/18 15:12:06 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:12:27 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "corpus": 45126, "corpus [modified]": 771, "coverage": 308666, "distributor delayed": 44081, "distributor undelayed": 44081, "distributor violated": 113, "exec candidate": 79104, "exec collide": 18267, "exec fuzz": 34490, "exec gen": 1805, "exec hints": 13723, "exec inject": 0, "exec minimize": 6896, "exec retries": 38, "exec seeds": 736, "exec smash": 6129, "exec total [base]": 221288, "exec total [new]": 416819, "exec triage": 145013, "executor restarts": 1499, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 7, "max signal": 314083, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4255, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46596, "no exec duration": 2748423000000, "no exec requests": 8703, "pending": 1, "prog exec time": 339, "reproducing": 4, "rpc recv": 9466900176, "rpc sent": 3590458944, "signal": 303664, "smash jobs": 4, "triage jobs": 4, "vm output": 88322780, "vm restarts [base]": 64, "vm restarts [new]": 108 } 2025/07/18 15:12:33 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:12:59 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:16:58 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:17:27 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "corpus": 45151, "corpus [modified]": 773, "coverage": 308729, "distributor delayed": 44185, "distributor undelayed": 44184, "distributor violated": 113, "exec candidate": 79104, "exec collide": 21006, "exec fuzz": 39458, "exec gen": 2079, "exec hints": 16153, "exec inject": 0, "exec minimize": 7648, "exec retries": 38, "exec seeds": 808, "exec smash": 6802, "exec total [base]": 232879, "exec total [new]": 428929, "exec triage": 145213, "executor restarts": 1532, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 6, "max signal": 314234, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4630, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46665, "no exec duration": 2770123000000, "no exec requests": 8735, "pending": 1, "prog exec time": 238, 
"reproducing": 4, "rpc recv": 9509428884, "rpc sent": 3815815232, "signal": 303712, "smash jobs": 1, "triage jobs": 5, "vm output": 91533569, "vm restarts [base]": 64, "vm restarts [new]": 108 } 2025/07/18 15:17:39 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:18:10 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:18:15 repro finished 'WARNING in __rate_control_send_low', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 15:18:15 failed repro for "WARNING in __rate_control_send_low", err=%!s() 2025/07/18 15:18:15 "WARNING in __rate_control_send_low": saved crash log into 1752851895.crash.log 2025/07/18 15:18:15 "WARNING in __rate_control_send_low": saved repro log into 1752851895.repro.log 2025/07/18 15:18:20 runner 0 connected 2025/07/18 15:18:50 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 15:18:51 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:18:51 repro finished 'WARNING in cm109_urb_irq_callback/usb_submit_urb', repro=true crepro=false desc='WARNING in cm109_urb_irq_callback/usb_submit_urb' hub=false from_dashboard=false 2025/07/18 15:18:51 found repro for "WARNING in cm109_urb_irq_callback/usb_submit_urb" (orig title: "-SAME-", reliability: 1), took 7.93 minutes 2025/07/18 15:18:51 start reproducing 'WARNING in cm109_urb_irq_callback/usb_submit_urb' 2025/07/18 15:18:51 "WARNING in cm109_urb_irq_callback/usb_submit_urb": saved crash log into 1752851931.crash.log 2025/07/18 15:18:51 "WARNING in cm109_urb_irq_callback/usb_submit_urb": saved repro log into 1752851931.repro.log 2025/07/18 15:19:05 runner 1 connected 2025/07/18 15:19:06 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:19:38 runner 3 connected 2025/07/18 15:19:55 runner 5 connected 2025/07/18 15:20:06 attempt #0 to run "WARNING in cm109_urb_irq_callback/usb_submit_urb" on base: crashed with WARNING in cm109_urb_irq_callback/usb_submit_urb 2025/07/18 15:20:06 crashes both: WARNING in cm109_urb_irq_callback/usb_submit_urb / WARNING in cm109_urb_irq_callback/usb_submit_urb 2025/07/18 15:20:29 base crash: lost connection to test machine 2025/07/18 15:20:34 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 15:20:55 runner 0 connected 2025/07/18 15:21:19 runner 1 connected 2025/07/18 15:21:22 runner 4 connected 2025/07/18 15:21:26 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:22:08 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:22:27 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "corpus": 45170, "corpus [modified]": 773, "coverage": 308756, "distributor delayed": 44310, "distributor undelayed": 44310, "distributor violated": 113, "exec candidate": 79104, "exec collide": 23794, "exec fuzz": 44722, "exec gen": 2360, "exec hints": 17242, "exec inject": 0, "exec minimize": 8120, "exec retries": 69, "exec seeds": 854, "exec smash": 7225, "exec total [base]": 240125, "exec total [new]": 439567, "exec triage": 145452, "executor restarts": 1653, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 314490, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4931, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46746, "no exec duration": 2787641000000, "no exec requests": 8767, "pending": 0, "prog exec time": 375, "reproducing": 3, "rpc recv": 9762994612, "rpc sent": 4027190488, "signal": 303736, "smash jobs": 6, "triage jobs": 5, "vm output": 95547704, "vm restarts [base]": 67, "vm restarts [new]": 112 } 2025/07/18 15:23:20 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:23:23 patched crashed: possible deadlock in __del_gendisk [need repro = false] 2025/07/18 15:24:12 runner 3 connected 2025/07/18 15:24:47 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:25:35 runner 3 connected 2025/07/18 15:26:08 base crash: kernel BUG in may_open 2025/07/18 15:26:32 base crash: kernel BUG in txAbort 2025/07/18 15:26:56 runner 2 connected 2025/07/18 15:27:20 runner 1 connected 2025/07/18 15:27:26 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:27:27 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "corpus": 45195, "corpus [modified]": 776, "coverage": 308807, "distributor delayed": 44411, "distributor undelayed": 44411, "distributor violated": 113, "exec candidate": 79104, "exec collide": 26519, "exec fuzz": 49821, "exec gen": 2641, "exec hints": 18662, "exec inject": 0, "exec minimize": 8857, "exec retries": 69, "exec seeds": 923, "exec smash": 7938, "exec total [base]": 246503, "exec total [new]": 450836, "exec triage": 145678, "executor restarts": 1742, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 314666, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5357, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46819, "no exec duration": 2801884000000, "no exec requests": 8789, "pending": 0, "prog exec time": 322, "reproducing": 3, "rpc recv": 9928602208, "rpc sent": 4207980032, "signal": 303779, "smash jobs": 0, 
"triage jobs": 5, "vm output": 100049507, "vm restarts [base]": 69, "vm restarts [new]": 114 } 2025/07/18 15:28:28 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:28:58 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:29:16 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/07/18 15:29:16 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/07/18 15:29:16 start reproducing 'INFO: task hung in __iterate_supers' 2025/07/18 15:29:18 base crash: lost connection to test machine 2025/07/18 15:29:34 reproducing crash 'WARNING in cm109_urb_irq_callback/usb_submit_urb': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/usb/core/urb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:29:34 repro finished 'WARNING in cm109_urb_irq_callback/usb_submit_urb', repro=true crepro=false desc='WARNING in cm109_urb_irq_callback/usb_submit_urb' hub=false from_dashboard=false 2025/07/18 15:29:34 found repro for "WARNING in cm109_urb_irq_callback/usb_submit_urb" (orig title: "-SAME-", reliability: 1), took 10.67 minutes 2025/07/18 15:29:34 "WARNING in cm109_urb_irq_callback/usb_submit_urb": saved crash log into 1752852574.crash.log 2025/07/18 15:29:34 "WARNING in cm109_urb_irq_callback/usb_submit_urb": saved repro log into 1752852574.repro.log 2025/07/18 15:29:58 runner 0 connected 2025/07/18 15:29:58 runner 1 connected 2025/07/18 15:30:06 runner 5 connected 2025/07/18 15:30:11 patched crashed: possible deadlock in team_del_slave [need repro = false] 2025/07/18 15:30:39 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/07/18 15:30:59 runner 6 connected 2025/07/18 15:31:20 runner 2 connected 2025/07/18 15:31:24 attempt #0 to run "WARNING in cm109_urb_irq_callback/usb_submit_urb" on base: crashed with WARNING in cm109_urb_irq_callback/usb_submit_urb 2025/07/18 15:31:24 crashes both: WARNING in cm109_urb_irq_callback/usb_submit_urb / WARNING in cm109_urb_irq_callback/usb_submit_urb 2025/07/18 15:32:13 runner 0 connected 2025/07/18 15:32:27 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "corpus": 45221, "corpus [modified]": 778, "coverage": 308844, "distributor delayed": 44525, "distributor undelayed": 44525, "distributor violated": 113, "exec candidate": 79104, "exec collide": 29128, "exec fuzz": 54931, "exec gen": 2934, "exec hints": 19746, "exec inject": 0, "exec minimize": 9463, "exec retries": 71, "exec seeds": 998, "exec smash": 8520, "exec total [base]": 254424, "exec total [new]": 461428, "exec triage": 145905, "executor restarts": 1829, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 314818, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5685, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46895, "no exec duration": 
2816315000000, "no exec requests": 8820, "pending": 0, "prog exec time": 331, "reproducing": 3, "rpc recv": 10152192436, "rpc sent": 4408637968, "signal": 303811, "smash jobs": 2, "triage jobs": 4, "vm output": 103894686, "vm restarts [base]": 71, "vm restarts [new]": 118 } 2025/07/18 15:35:50 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:36:40 runner 5 connected 2025/07/18 15:36:46 base crash: kernel BUG in txAbort 2025/07/18 15:37:09 patched crashed: possible deadlock in team_del_slave [need repro = false] 2025/07/18 15:37:10 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 15:37:27 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "corpus": 45243, "corpus [modified]": 779, "coverage": 308870, "distributor delayed": 44622, "distributor undelayed": 44620, "distributor violated": 113, "exec candidate": 79104, "exec collide": 32203, "exec fuzz": 61085, "exec gen": 3230, "exec hints": 21454, "exec inject": 0, "exec minimize": 10009, "exec retries": 73, "exec seeds": 1058, "exec smash": 9073, "exec total [base]": 263733, "exec total [new]": 474026, "exec triage": 146111, "executor restarts": 1904, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 7, "max signal": 314961, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5979, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46965, "no exec duration": 2827350000000, "no exec requests": 8843, "pending": 0, "prog exec time": 314, "reproducing": 3, "rpc recv": 10222219180, "rpc sent": 4642014488, "signal": 303836, "smash jobs": 2, "triage jobs": 4, "vm output": 108038532, "vm restarts [base]": 71, "vm restarts [new]": 119 } 2025/07/18 15:37:30 base crash: KASAN: out-of-bounds Read in ext4_xattr_set_entry 2025/07/18 15:37:34 runner 2 connected 2025/07/18 15:37:58 runner 0 connected 2025/07/18 15:37:59 runner 1 connected 2025/07/18 15:38:11 runner 1 connected 2025/07/18 15:38:31 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:38:40 patched crashed: KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry [need repro = true] 2025/07/18 15:38:40 scheduled a reproduction of 'KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry' 2025/07/18 15:38:40 start reproducing 'KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry' 2025/07/18 15:38:42 base crash: lost connection to test machine 2025/07/18 15:38:55 base crash: lost connection to test machine 2025/07/18 15:38:56 base crash: KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 2025/07/18 15:39:29 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:39:30 runner 1 connected 2025/07/18 15:39:37 runner 0 connected 2025/07/18 15:39:44 runner 2 connected 2025/07/18 15:40:07 base crash: lost connection to test machine 2025/07/18 15:40:11 runner 6 connected 2025/07/18 15:40:24 patched crashed: possible deadlock in team_del_slave [need repro = false] 2025/07/18 15:40:49 runner 0 connected 2025/07/18 15:41:14 runner 5 connected 2025/07/18 15:41:18 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:41:54 base crash: lost connection to test machine 2025/07/18 15:42:07 runner 3 connected 2025/07/18 15:42:08 patched crashed: kernel BUG in ocfs2_set_new_buffer_uptodate [need repro = true] 2025/07/18 15:42:08 
scheduled a reproduction of 'kernel BUG in ocfs2_set_new_buffer_uptodate' 2025/07/18 15:42:08 start reproducing 'kernel BUG in ocfs2_set_new_buffer_uptodate' 2025/07/18 15:42:08 failed to recv *flatrpc.InfoRequestRawT: EOF 2025/07/18 15:42:27 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "corpus": 45252, "corpus [modified]": 779, "coverage": 308881, "distributor delayed": 44686, "distributor undelayed": 44681, "distributor violated": 115, "exec candidate": 79104, "exec collide": 33752, "exec fuzz": 64137, "exec gen": 3366, "exec hints": 22706, "exec inject": 0, "exec minimize": 10188, "exec retries": 74, "exec seeds": 1078, "exec smash": 9251, "exec total [base]": 270280, "exec total [new]": 480485, "exec triage": 146208, "executor restarts": 1962, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 6, "max signal": 315021, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6087, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46998, "no exec duration": 2833497000000, "no exec requests": 8860, "pending": 0, "prog exec time": 537, "reproducing": 5, "rpc recv": 10547591908, "rpc sent": 4773038792, "signal": 303846, "smash jobs": 1, "triage jobs": 5, "vm output": 112422807, "vm restarts [base]": 77, "vm restarts [new]": 124 } 2025/07/18 15:42:42 runner 2 connected 2025/07/18 15:42:53 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:42:57 runner 5 connected 2025/07/18 15:43:40 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:44:21 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:45:02 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:45:34 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:46:15 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:46:49 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such 
file or directory 2025/07/18 15:47:06 base crash: lost connection to test machine 2025/07/18 15:47:06 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:47:27 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "corpus": 45258, "corpus [modified]": 779, "coverage": 308887, "distributor delayed": 44723, "distributor undelayed": 44720, "distributor violated": 118, "exec candidate": 79104, "exec collide": 35145, "exec fuzz": 66842, "exec gen": 3499, "exec hints": 24209, "exec inject": 0, "exec minimize": 10382, "exec retries": 74, "exec seeds": 1096, "exec smash": 9399, "exec total [base]": 278681, "exec total [new]": 486650, "exec triage": 146281, "executor restarts": 2006, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 5, "max signal": 315061, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6226, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47023, "no exec duration": 3421994000000, "no exec requests": 11074, "pending": 0, "prog exec time": 240, "reproducing": 5, "rpc recv": 10625240356, "rpc sent": 4898846936, "signal": 303852, "smash jobs": 0, "triage jobs": 3, "vm output": 116434135, "vm restarts [base]": 78, "vm restarts [new]": 125 } 2025/07/18 15:47:29 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:47:54 runner 3 connected 2025/07/18 15:47:54 runner 5 connected 2025/07/18 15:48:04 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:48:33 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:48:41 repro finished 'KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry', repro=true crepro=false desc='KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry' hub=false from_dashboard=false 2025/07/18 15:48:41 found repro for "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry" (orig title: "-SAME-", reliability: 1), took 9.71 minutes 2025/07/18 15:48:41 "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry": saved crash log into 1752853721.crash.log 2025/07/18 15:48:41 "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry": saved repro log into 1752853721.repro.log 2025/07/18 15:49:21 runner 5 connected 2025/07/18 15:49:21 base crash: possible deadlock in blk_mq_update_nr_hw_queues 2025/07/18 15:49:30 runner 0 connected 2025/07/18 15:49:48 attempt #0 to run "KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry" on base: crashed with KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 2025/07/18 15:49:48 crashes both: KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry / KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 2025/07/18 15:50:00 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 15:50:02 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 15:50:09 runner 2 connected 2025/07/18 15:50:28 reproducing crash 'kernel BUG in 
ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:50:29 runner 0 connected 2025/07/18 15:50:34 patched crashed: possible deadlock in team_del_slave [need repro = false] 2025/07/18 15:50:43 runner 1 connected 2025/07/18 15:50:49 runner 6 connected 2025/07/18 15:50:54 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:50:58 base crash: possible deadlock in team_del_slave 2025/07/18 15:51:16 runner 5 connected 2025/07/18 15:51:40 runner 3 connected 2025/07/18 15:52:23 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:52:27 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "corpus": 45260, "corpus [modified]": 779, "coverage": 308894, "distributor delayed": 44758, "distributor undelayed": 44758, "distributor violated": 119, "exec candidate": 79104, "exec collide": 36328, "exec fuzz": 68988, "exec gen": 3611, "exec hints": 24898, "exec inject": 0, "exec minimize": 10512, "exec retries": 74, "exec seeds": 1108, "exec smash": 9494, "exec total [base]": 282936, "exec total [new]": 491088, "exec triage": 146344, "executor restarts": 2075, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 3, "max signal": 315094, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6331, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47042, "no exec duration": 3670654000000, "no exec requests": 11821, "pending": 0, "prog exec time": 336, "reproducing": 4, "rpc recv": 10946992492, "rpc sent": 5000223744, "signal": 303858, "smash jobs": 2, "triage jobs": 2, "vm output": 119825552, "vm restarts [base]": 83, "vm restarts [new]": 130 } 2025/07/18 15:53:13 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:53:39 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 15:53:40 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:53:43 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 15:54:24 runner 0 connected 2025/07/18 15:54:25 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:54:28 runner 0 
connected 2025/07/18 15:54:50 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:55:06 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:55:45 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:55:55 runner 6 connected 2025/07/18 15:56:07 patched crashed: possible deadlock in team_device_event [need repro = false] 2025/07/18 15:56:46 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:56:57 runner 0 connected 2025/07/18 15:56:58 base crash: possible deadlock in team_device_event 2025/07/18 15:57:12 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25763: connect: connection refused 2025/07/18 15:57:12 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25763: connect: connection refused 2025/07/18 15:57:15 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:57:22 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 15:57:27 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "corpus": 45267, "corpus [modified]": 779, "coverage": 308916, "distributor delayed": 44844, "distributor undelayed": 44844, "distributor violated": 125, "exec candidate": 79104, "exec collide": 37990, "exec fuzz": 72159, "exec gen": 3754, "exec hints": 25510, "exec inject": 0, "exec minimize": 10867, "exec retries": 74, "exec seeds": 1129, "exec smash": 9688, "exec total [base]": 289394, "exec total [new]": 497375, "exec triage": 146474, "executor restarts": 2125, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 1, "max signal": 315194, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6569, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47084, "no exec duration": 3937898000000, "no exec requests": 12685, "pending": 0, "prog exec time": 441, "reproducing": 4, "rpc recv": 11096762524, "rpc sent": 5146407432, "signal": 303871, "smash jobs": 0, "triage jobs": 5, "vm output": 122767701, "vm restarts [base]": 84, "vm restarts [new]": 133 } 2025/07/18 15:57:34 base crash: kernel BUG in ocfs2_write_cluster_by_desc 2025/07/18 15:57:38 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 15:57:39 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 15:57:48 runner 3 connected 2025/07/18 15:58:05 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:58:12 runner 5 connected 2025/07/18 15:58:22 runner 0 connected 2025/07/18 15:58:27 runner 2 connected 2025/07/18 15:58:28 runner 4 connected 2025/07/18 15:58:34 reproducing crash 'kernel BUG in ocfs2_set_new_buffer_uptodate': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/uptodate.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 15:58:34 repro finished 'kernel BUG in ocfs2_set_new_buffer_uptodate', repro=true crepro=false desc='kernel BUG in ocfs2_set_new_buffer_uptodate' hub=false from_dashboard=false 2025/07/18 15:58:34 found repro for "kernel BUG in ocfs2_set_new_buffer_uptodate" (orig title: "-SAME-", reliability: 1), took 16.39 minutes 2025/07/18 15:58:34 "kernel BUG in ocfs2_set_new_buffer_uptodate": saved crash log into 1752854314.crash.log 2025/07/18 15:58:34 "kernel BUG in ocfs2_set_new_buffer_uptodate": saved repro log into 1752854314.repro.log 2025/07/18 15:58:54 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 15:59:02 patched crashed: KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry [need repro = false] 2025/07/18 15:59:26 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = false] 2025/07/18 15:59:30 runner 2 connected 2025/07/18 15:59:46 runner 1 connected 2025/07/18 15:59:50 runner 0 connected 2025/07/18 15:59:55 attempt #0 to run "kernel BUG in ocfs2_set_new_buffer_uptodate" on base: crashed with kernel BUG in ocfs2_set_new_buffer_uptodate 2025/07/18 15:59:55 crashes both: kernel BUG in ocfs2_set_new_buffer_uptodate / kernel BUG in ocfs2_set_new_buffer_uptodate 2025/07/18 16:00:23 runner 5 connected 2025/07/18 16:00:31 runner 1 connected 2025/07/18 16:00:52 runner 0 connected 2025/07/18 16:00:57 patched crashed: WARNING in path_noexec [need repro = true] 2025/07/18 16:00:57 scheduled a reproduction of 'WARNING in path_noexec' 2025/07/18 16:00:57 start reproducing 'WARNING in path_noexec' 2025/07/18 16:01:05 base crash: WARNING in path_noexec 2025/07/18 16:01:12 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/07/18 16:01:30 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:01:54 runner 5 connected 2025/07/18 16:02:02 runner 1 connected 2025/07/18 16:02:09 runner 6 connected 2025/07/18 16:02:26 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:02:27 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "corpus": 45282, "corpus [modified]": 779, "coverage": 308945, "distributor delayed": 44897, "distributor undelayed": 44897, "distributor violated": 125, "exec candidate": 79104, "exec collide": 39282, "exec fuzz": 74661, "exec gen": 3876, "exec hints": 25840, "exec inject": 0, "exec minimize": 11417, "exec retries": 74, "exec seeds": 1171, "exec smash": 10036, "exec total [base]": 294407, "exec total [new]": 502649, "exec triage": 146550, "executor restarts": 2184, "fault jobs": 0, 
"fuzzer jobs": 7, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 2, "max signal": 315238, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6895, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47111, "no exec duration": 3938742000000, "no exec requests": 12690, "pending": 0, "prog exec time": 461, "reproducing": 4, "rpc recv": 11557519044, "rpc sent": 5293881896, "signal": 303895, "smash jobs": 1, "triage jobs": 4, "vm output": 125751626, "vm restarts [base]": 90, "vm restarts [new]": 141 } 2025/07/18 16:02:58 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:03:34 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:04:10 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:04:18 base crash: lost connection to test machine 2025/07/18 16:04:49 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 16:05:02 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/07/18 16:05:03 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:05:05 base crash: unregister_netdevice: waiting for DEV to become free 2025/07/18 16:05:15 runner 0 connected 2025/07/18 16:05:37 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:05:45 runner 5 connected 2025/07/18 16:05:59 runner 6 connected 2025/07/18 16:06:02 runner 3 connected 2025/07/18 16:06:18 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 16:06:32 base crash: lost connection to test machine 2025/07/18 16:07:17 runner 4 connected 2025/07/18 16:07:18 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:07:27 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "corpus": 45287, "corpus [modified]": 779, "coverage": 308952, "distributor delayed": 44936, "distributor undelayed": 44936, "distributor violated": 125, "exec candidate": 79104, "exec collide": 40769, "exec fuzz": 77457, "exec gen": 3999, "exec hints": 26509, "exec inject": 0, "exec minimize": 11501, "exec retries": 75, "exec seeds": 1186, "exec smash": 10160, "exec total [base]": 300024, "exec total [new]": 508005, "exec triage": 146613, "executor restarts": 2246, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs 
[base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 2, "max signal": 315272, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6968, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47131, "no exec duration": 4074764000000, "no exec requests": 13120, "pending": 0, "prog exec time": 514, "reproducing": 4, "rpc recv": 11725993372, "rpc sent": 5415564496, "signal": 303900, "smash jobs": 0, "triage jobs": 3, "vm output": 128599099, "vm restarts [base]": 92, "vm restarts [new]": 144 } 2025/07/18 16:07:30 runner 1 connected 2025/07/18 16:07:49 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:08:23 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:08:54 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:09:53 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:10:26 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:11:25 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:11:42 patched crashed: lost connection to test machine [need repro = false] 2025/07/18 16:11:57 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:12:23 status reporting terminated 2025/07/18 16:12:23 bug reporting terminated 2025/07/18 16:12:23 syz-diff (base): kernel context loop terminated 2025/07/18 16:12:28 reproducing crash 'WARNING in path_noexec': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/exec.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/07/18 16:12:28 repro finished 'WARNING in path_noexec', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 16:15:06 repro finished 'WARNING in io_ring_exit_work', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 16:16:04 repro finished 'INFO: task hung in __iterate_supers', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 16:16:22 repro finished 'possible deadlock in ocfs2_try_remove_refcount_tree', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/07/18 
16:16:22 syz-diff (new): kernel context loop terminated 2025/07/18 16:16:22 diff fuzzing terminated 2025/07/18 16:16:22 fuzzing is finished 2025/07/18 16:16:22 status at the end:
Title On-Base On-Patched
INFO: task hung in __iterate_supers 1 crashes
INFO: task hung in corrupted 1 crashes
INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes
KASAN: out-of-bounds Read in ext4_xattr_set_entry 1 crashes 1 crashes
KASAN: slab-out-of-bounds Write in ext4_xattr_set_entry 2 crashes 2 crashes[reproduced]
WARNING in __rate_control_send_low 1 crashes
WARNING in cm109_urb_irq_callback/usb_submit_urb 2 crashes 2 crashes[reproduced]
WARNING in comedi_unlocked_ioctl 1 crashes 1 crashes[reproduced]
WARNING in dbAdjTree 1 crashes 3 crashes
WARNING in io_ring_exit_work 1 crashes
WARNING in path_noexec 1 crashes 1 crashes
kernel BUG in dnotify_free_mark 2 crashes 2 crashes[reproduced]
kernel BUG in jfs_evict_inode 2 crashes 9 crashes[reproduced]
kernel BUG in may_open 2 crashes 1 crashes[reproduced]
kernel BUG in ocfs2_set_new_buffer_uptodate 1 crashes 1 crashes[reproduced]
kernel BUG in ocfs2_write_cluster_by_desc 1 crashes
kernel BUG in txAbort 2 crashes
kernel BUG in txUnlock 1 crashes 4 crashes
lost connection to test machine 22 crashes 36 crashes
no output from test machine 10 crashes 1 crashes
possible deadlock in __del_gendisk 4 crashes 10 crashes
possible deadlock in attr_data_get_block 1 crashes
possible deadlock in blk_mq_update_nr_hw_queues 6 crashes 8 crashes
possible deadlock in input_inject_event 1 crashes
possible deadlock in ntfs_fiemap 1 crashes
possible deadlock in ocfs2_acquire_dquot 1 crashes 1 crashes
possible deadlock in ocfs2_init_acl 2 crashes 4 crashes
possible deadlock in ocfs2_reserve_local_alloc_bits 1 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 4 crashes 2 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 2 crashes 3 crashes
possible deadlock in ocfs2_write_begin_nolock 1 crashes
possible deadlock in ocfs2_xattr_set 1 crashes 5 crashes
possible deadlock in team_del_slave 3 crashes 4 crashes
possible deadlock in team_device_event 4 crashes 4 crashes
unregister_netdevice: waiting for DEV to become free 9 crashes 11 crashes