2025/08/16 12:32:35 extracted 303751 symbol hashes for base and 303751 for patched
2025/08/16 12:32:36 adding modified_functions to focus areas: ["__folio_end_writeback" "__folio_mark_dirty" "__folio_start_writeback" "__wb_update_bandwidth" "balance_dirty_pages" "balance_dirty_pages_ratelimited_flags" "balance_wb_limits" "dirty_bytes_handler" "dirty_ratio_handler" "nvmet_execute_disc_identify" "wb_dirty_limits"]
2025/08/16 12:32:36 adding directly modified files to focus areas: ["mm/page-writeback.c"]
2025/08/16 12:32:37 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/16 12:33:34 runner 3 connected
2025/08/16 12:33:34 runner 7 connected
2025/08/16 12:33:35 runner 2 connected
2025/08/16 12:33:35 runner 3 connected
2025/08/16 12:33:42 runner 1 connected
2025/08/16 12:33:42 runner 5 connected
2025/08/16 12:33:42 runner 2 connected
2025/08/16 12:33:42 initializing coverage information...
2025/08/16 12:33:42 executor cover filter: 0 PCs
2025/08/16 12:33:42 runner 4 connected
2025/08/16 12:33:43 runner 0 connected
2025/08/16 12:33:43 runner 9 connected
2025/08/16 12:33:43 runner 1 connected
2025/08/16 12:33:43 runner 8 connected
2025/08/16 12:33:43 runner 6 connected
2025/08/16 12:33:48 discovered 7699 source files, 338620 symbols
2025/08/16 12:33:48 coverage filter: __folio_end_writeback: [__folio_end_writeback]
2025/08/16 12:33:48 coverage filter: __folio_mark_dirty: [__folio_mark_dirty]
2025/08/16 12:33:48 coverage filter: __folio_start_writeback: [__folio_start_writeback]
2025/08/16 12:33:48 coverage filter: __wb_update_bandwidth: [__wb_update_bandwidth]
2025/08/16 12:33:48 coverage filter: balance_dirty_pages: [__bpf_trace_balance_dirty_pages __probestub_balance_dirty_pages __traceiter_balance_dirty_pages balance_dirty_pages balance_dirty_pages_ratelimited balance_dirty_pages_ratelimited_flags perf_trace_balance_dirty_pages trace_balance_dirty_pages trace_event_raw_event_balance_dirty_pages trace_raw_output_balance_dirty_pages]
2025/08/16 12:33:48 coverage filter: balance_dirty_pages_ratelimited_flags: []
2025/08/16 12:33:48 coverage filter: balance_wb_limits: [balance_wb_limits]
2025/08/16 12:33:48 coverage filter: dirty_bytes_handler: [dirty_bytes_handler]
2025/08/16 12:33:48 coverage filter: dirty_ratio_handler: [dirty_ratio_handler]
2025/08/16 12:33:48 coverage filter: nvmet_execute_disc_identify: [nvmet_execute_disc_identify]
2025/08/16 12:33:48 coverage filter: wb_dirty_limits: [wb_dirty_limits]
2025/08/16 12:33:48 coverage filter: mm/page-writeback.c: [mm/page-writeback.c]
2025/08/16 12:33:48 area "symbols": 543 PCs in the cover filter
2025/08/16 12:33:48 area "files": 1191 PCs in the cover filter
2025/08/16 12:33:48 area "": 0 PCs in the cover filter
2025/08/16 12:33:48 executor cover filter: 0 PCs
2025/08/16 12:33:51 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] 
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf 
[perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] 
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/16 12:33:51 base: machine check complete 2025/08/16 12:33:54 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open 
/sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 :
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : 
fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open 
perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3832/8048
2025/08/16 12:33:54 new: machine check complete
2025/08/16 12:33:55 new: adding 77726 seeds
2025/08/16 12:34:24 patched crashed: lost connection to test machine [need repro = false]
2025/08/16 12:34:45 base crash: lost connection to test machine
2025/08/16 12:34:47 patched crashed: lost connection to test machine [need repro = false]
2025/08/16 12:35:21 runner 8 connected
2025/08/16 12:35:41 runner 1 connected
2025/08/16 12:36:28 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true]
2025/08/16 12:36:28 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit'
2025/08/16 12:36:28 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true]
2025/08/16 12:36:28 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit'
2025/08/16 12:36:45 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true]
2025/08/16 12:36:45 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit'
2025/08/16 12:37:24 runner 0 connected
2025/08/16 12:37:25 runner 2 connected
2025/08/16 12:37:38 STAT { "buffer too small": 0, "candidate triage jobs": 52, "candidates": 73781, "comps overflows": 0, "corpus": 3857, "corpus [files]": 554, "corpus [symbols]": 423, "cover overflows": 2225, "coverage": 157760, "distributor delayed": 4087, "distributor undelayed": 4085, "distributor violated": 0, "exec candidate": 3945, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7019, "exec total [new]": 17552, "exec triage": 12269, "executor restarts": 117, "fault jobs": 0, "fuzzer jobs": 52, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 159888, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3945, "no exec duration": 39657000000, "no exec requests": 312, "pending": 3, "prog exec time": 265, "reproducing": 0, "rpc recv": 914332224, "rpc sent": 88359064, "signal": 155083, "smash jobs": 0, "triage jobs": 0, "vm output": 2272767, "vm restarts [base]": 4, "vm restarts [new]": 13 }
2025/08/16 12:37:42 runner 7 connected
2025/08/16 12:38:55 base crash: kernel BUG in txUnlock
2025/08/16 12:39:24 patched crashed: lost connection to test machine [need repro = false]
2025/08/16 12:39:35 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true]
2025/08/16 12:39:35 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi'
2025/08/16 12:39:42 base crash: possible deadlock in ocfs2_acquire_dquot
2025/08/16 12:39:45 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true]
2025/08/16 12:39:45 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi'
2025/08/16 12:39:52 runner 1 connected
2025/08/16 12:39:57 base crash: KASAN: slab-use-after-free Read in xfrm_state_find
2025/08/16 12:40:21 runner 6 connected
2025/08/16 12:40:32 runner 0 connected
2025/08/16 12:40:39 runner 3 connected
2025/08/16 12:40:54 runner 2 connected
2025/08/16 12:42:01 patched crashed: possible deadlock in attr_data_get_block [need repro = true]
2025/08/16 12:42:01 scheduled a reproduction of 'possible deadlock in attr_data_get_block'
2025/08/16 12:42:26 base crash: WARNING in ext4_xattr_inode_lookup_create
2025/08/16 12:42:38 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 68345, "comps overflows": 0, "corpus": 9227, "corpus [files]": 1079, "corpus [symbols]": 832, "cover overflows": 5771, "coverage": 202170, "distributor delayed": 10289, "distributor undelayed": 10289, "distributor violated": 0, "exec candidate": 9381, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 14555, "exec total [new]": 42853, "exec triage": 29411, "executor restarts": 181, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 204655, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 9381, "no exec duration": 39724000000, "no exec requests": 316, "pending": 6, "prog exec time": 280, "reproducing": 0, "rpc recv": 1526990600, "rpc sent": 198034312, "signal": 197746, "smash jobs": 0, "triage jobs": 0, "vm output": 4479284, "vm restarts [base]": 7, "vm restarts [new]": 16 }
2025/08/16 12:42:39 base crash: WARNING in l2cap_chan_del
2025/08/16 12:42:44 base: boot error: can't ssh into the instance
2025/08/16 12:42:52 runner 7 connected
2025/08/16 12:43:24 runner 1 connected
2025/08/16 12:43:34 runner 0 connected
2025/08/16 12:43:35 runner 3 connected
2025/08/16 12:44:20 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/16 12:44:20 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/16 12:44:39 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true]
2025/08/16 12:44:39 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode'
2025/08/16 12:44:53 new: boot error: can't ssh into the instance
2025/08/16 12:44:53 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true]
2025/08/16 12:44:53 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode'
2025/08/16 12:44:57 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true]
2025/08/16 12:44:57 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode'
2025/08/16 12:45:10 runner 7 connected
2025/08/16 12:45:12 base crash: INFO: task hung in v9fs_evict_inode
2025/08/16 12:45:12 patched crashed: WARNING in xfrm_state_fini [need repro = true]
2025/08/16 12:45:12 scheduled a reproduction of 'WARNING in xfrm_state_fini'
2025/08/16 12:45:35 runner 6 connected
2025/08/16 12:45:43 runner 9 connected
2025/08/16 12:45:47 runner 8 connected
2025/08/16 12:45:50 runner 1 connected
2025/08/16 12:46:01 runner 2 connected
2025/08/16 12:46:09 runner 0 connected
2025/08/16 12:46:24 patched crashed: no output from test machine [need repro = false]
2025/08/16 12:46:36 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/16 12:47:22 runner 3 connected
2025/08/16 12:47:34 runner 3 connected
2025/08/16 12:47:38 STAT { "buffer too small": 0, "candidate triage jobs": 50, "candidates": 63903, "comps overflows": 0, "corpus": 13614, "corpus [files]": 1429, "corpus [symbols]": 1098, "cover overflows": 8608, "coverage": 224860, "distributor delayed": 15907, "distributor undelayed": 15906, "distributor violated": 283, "exec candidate": 13823, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24357, "exec total [new]": 64366, "exec triage": 43196, "executor restarts": 229, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 227445, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 13823, "no exec duration": 39724000000, "no exec requests": 316, "pending": 11, "prog exec time": 299, "reproducing": 0, "rpc recv": 2242160988, "rpc sent": 307335832, "signal": 219889, "smash jobs": 0, "triage jobs": 0, "vm output": 6644724, "vm restarts [base]": 12, "vm restarts [new]": 24 }
2025/08/16 12:47:42 base crash: general protection fault in pcl818_ai_cancel
2025/08/16 12:47:56 patched crashed: WARNING in xfrm_state_fini [need repro = true]
2025/08/16 12:47:56 scheduled a reproduction of 'WARNING in xfrm_state_fini'
2025/08/16 12:48:01 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/16 12:48:11 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/16 12:48:23 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/16 12:48:40 runner 2 connected
2025/08/16 12:48:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/16 12:48:54 runner 2 connected
2025/08/16 12:48:57 runner 6 connected
2025/08/16 12:49:08 runner 0 connected
2025/08/16 12:49:20 runner 7 connected
2025/08/16 12:49:49 runner 0 connected
2025/08/16 12:49:51 new: boot error: can't ssh into the instance
2025/08/16 12:50:22 base crash: lost connection to test machine
2025/08/16 12:50:48 runner 4 connected
2025/08/16 12:51:18 runner 3 connected
2025/08/16 12:51:46 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true]
2025/08/16 12:51:46 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree'
2025/08/16 12:51:49 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true]
2025/08/16 12:51:49 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree'
2025/08/16 12:52:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true]
2025/08/16 12:52:01 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree'
2025/08/16 12:52:07 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/16 12:52:14 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/16 12:52:15 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/16 12:52:38 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 58677, "comps overflows": 0, "corpus": 18799, "corpus [files]": 1853, "corpus [symbols]": 1405, "cover overflows": 11852, "coverage": 245544, "distributor delayed": 21266, "distributor undelayed": 21265, "distributor violated": 288, "exec candidate": 19049, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 33992, "exec total [new]": 90508, "exec triage": 59440, "executor restarts": 279, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 247839, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 19049, "no exec duration": 39897000000, "no exec requests": 319, "pending": 15, "prog exec time": 243, "reproducing": 0, "rpc recv": 2927514416, "rpc sent": 415940048, "signal": 239957, "smash jobs": 0, "triage jobs": 0, "vm output": 9042463, "vm restarts [base]": 15, "vm restarts [new]": 29 }
2025/08/16 12:52:42 runner 6 connected
2025/08/16 12:52:45 runner 0 connected
2025/08/16 12:52:58 runner 4 connected
2025/08/16 12:53:06 runner 3 connected
2025/08/16 12:53:11 runner 2 connected
2025/08/16 12:53:13 runner 3 connected
2025/08/16 12:53:30 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/16 12:53:30 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/16 12:53:31 patched crashed: lost connection to test machine [need repro = false]
2025/08/16 12:54:23 patched crashed: WARNING in xfrm_state_fini [need repro = true]
2025/08/16 12:54:23 scheduled a reproduction of 'WARNING in xfrm_state_fini'
2025/08/16 12:54:28 runner 9 connected
2025/08/16 12:54:36 runner 4 connected
2025/08/16 12:56:33 patched crashed: WARNING in xfrm_state_fini [need repro = true]
2025/08/16 12:56:33 scheduled a reproduction of 'WARNING in xfrm_state_fini'
2025/08/16 12:57:17 patched crashed: lost connection to test machine [need repro = false]
2025/08/16 12:57:31 runner 7 connected
2025/08/16 12:57:38 STAT { "buffer too small": 0, "candidate triage jobs": 36, "candidates": 53532, "comps overflows": 0, "corpus": 23872, "corpus [files]": 2240, "corpus [symbols]": 1691, "cover overflows": 15395, "coverage": 261400, "distributor delayed": 26303, "distributor undelayed": 26303, "distributor violated": 288, "exec candidate": 24194, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 46261, "exec total [new]": 117421, "exec triage": 75368, "executor restarts": 327, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 263718, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 24194, "no exec duration": 40210000000, "no exec requests": 321, "pending": 18, "prog exec time": 156, "reproducing": 0, "rpc recv": 3568899340, "rpc sent": 561329688, "signal": 255773, "smash jobs": 0, "triage jobs": 0, "vm output": 11870743, "vm restarts [base]": 17, "vm restarts [new]": 36 }
2025/08/16 12:58:15 runner 6 connected
2025/08/16 12:58:24 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/16 12:58:46 patched crashed: WARNING in xfrm_state_fini [need repro = true]
2025/08/16 12:58:46 scheduled a reproduction of 'WARNING in xfrm_state_fini'
2025/08/16 12:58:52 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false]
2025/08/16 12:59:12 base crash: KASAN: slab-use-after-free Write in __xfrm_state_delete
2025/08/16 12:59:21 runner 2 connected
2025/08/16 12:59:43 runner 0 connected
2025/08/16 12:59:50 runner 3 connected
2025/08/16 13:00:57 base crash: general protection fault in __xfrm_state_insert
2025/08/16 13:01:54 runner 1 connected
2025/08/16 13:02:22 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true]
2025/08/16 13:02:22 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi'
2025/08/16 13:02:25 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true]
2025/08/16 13:02:25 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup'
2025/08/16 13:02:38 STAT { "buffer too small": 0, "candidate triage jobs": 30, "candidates": 48046, "comps overflows": 0, "corpus": 29143, "corpus [files]": 2504, "corpus [symbols]": 1881, "cover overflows": 19377, "coverage": 274395, "distributor delayed": 31778, "distributor undelayed": 31778, "distributor violated": 290, "exec candidate": 29680, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 58097, "exec total [new]": 151210, "exec triage": 92892, "executor restarts": 363, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 277292, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29680, "no exec duration": 40442000000, "no exec requests": 328, "pending": 21, "prog exec time": 188, "reproducing": 0, "rpc recv": 4132837048, "rpc sent": 708939448, "signal": 268426, "smash jobs": 0, "triage jobs": 0, "vm output": 14155669, "vm restarts [base]":
19, "vm restarts [new]": 39 } 2025/08/16 13:03:22 runner 4 connected 2025/08/16 13:04:29 new: boot error: can't ssh into the instance 2025/08/16 13:04:38 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/16 13:04:38 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/16 13:05:16 base crash: general protection fault in pcl818_ai_cancel 2025/08/16 13:05:35 runner 9 connected 2025/08/16 13:06:13 runner 3 connected 2025/08/16 13:07:02 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/16 13:07:02 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/16 13:07:38 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 44291, "comps overflows": 0, "corpus": 32730, "corpus [files]": 2688, "corpus [symbols]": 2015, "cover overflows": 22643, "coverage": 282325, "distributor delayed": 36482, "distributor undelayed": 36481, "distributor violated": 320, "exec candidate": 33435, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 70738, "exec total [new]": 176954, "exec triage": 104924, "executor restarts": 404, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 285909, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 33435, "no exec duration": 40778000000, "no exec requests": 335, "pending": 23, "prog exec time": 727, "reproducing": 0, "rpc recv": 4481429276, "rpc sent": 823822280, "signal": 276187, "smash jobs": 0, "triage jobs": 0, "vm output": 15958259, "vm restarts [base]": 20, "vm restarts [new]": 41 } 2025/08/16 13:07:50 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/16 
13:07:55 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/16 13:07:55 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/16 13:07:59 runner 4 connected 2025/08/16 13:08:18 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/16 13:08:18 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/16 13:08:23 base crash: WARNING in io_ring_exit_work 2025/08/16 13:08:24 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/16 13:08:24 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/16 13:08:46 runner 1 connected 2025/08/16 13:08:53 runner 0 connected 2025/08/16 13:08:57 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/16 13:08:57 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/16 13:09:13 runner 2 connected 2025/08/16 13:09:14 runner 7 connected 2025/08/16 13:09:18 base: boot error: can't ssh into the instance 2025/08/16 13:09:21 runner 5 connected 2025/08/16 13:09:27 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:09:55 runner 9 connected 2025/08/16 13:10:14 runner 0 connected 2025/08/16 13:10:15 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:10:24 runner 0 connected 2025/08/16 13:10:32 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/16 13:10:38 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:11:12 runner 4 connected 2025/08/16 13:11:29 runner 3 connected 2025/08/16 13:11:35 runner 5 connected 2025/08/16 13:12:02 base crash: WARNING in xfrm_state_fini 2025/08/16 13:12:11 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:12:28 new: boot error: can't ssh into the instance 2025/08/16 13:12:34 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/16 13:12:38 STAT { "buffer 
too small": 0, "candidate triage jobs": 18, "candidates": 41277, "comps overflows": 0, "corpus": 35711, "corpus [files]": 2979, "corpus [symbols]": 2238, "cover overflows": 25200, "coverage": 289336, "distributor delayed": 40606, "distributor undelayed": 40606, "distributor violated": 333, "exec candidate": 36449, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 79383, "exec total [new]": 197344, "exec triage": 114386, "executor restarts": 448, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 292831, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36449, "no exec duration": 40849000000, "no exec requests": 338, "pending": 27, "prog exec time": 209, "reproducing": 0, "rpc recv": 5090849460, "rpc sent": 962710984, "signal": 283189, "smash jobs": 0, "triage jobs": 0, "vm output": 18399807, "vm restarts [base]": 24, "vm restarts [new]": 49 } 2025/08/16 13:12:57 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/16 13:12:57 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/16 13:12:58 runner 1 connected 2025/08/16 13:13:08 runner 2 connected 2025/08/16 13:13:25 runner 8 connected 2025/08/16 13:13:32 runner 2 connected 2025/08/16 13:13:54 runner 6 connected 2025/08/16 13:14:34 new: boot error: can't ssh into the instance 2025/08/16 13:15:25 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/16 13:15:27 base crash: lost connection to test machine 2025/08/16 13:15:31 runner 1 connected 2025/08/16 13:15:38 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/16 13:16:16 patched 
crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 13:16:23 runner 2 connected 2025/08/16 13:16:25 runner 0 connected 2025/08/16 13:16:35 runner 3 connected 2025/08/16 13:16:55 base crash: lost connection to test machine 2025/08/16 13:17:13 runner 0 connected 2025/08/16 13:17:33 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 13:17:38 STAT { "buffer too small": 0, "candidate triage jobs": 23, "candidates": 39058, "comps overflows": 0, "corpus": 37805, "corpus [files]": 3224, "corpus [symbols]": 2430, "cover overflows": 29196, "coverage": 294718, "distributor delayed": 42777, "distributor undelayed": 42777, "distributor violated": 334, "exec candidate": 38668, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 87947, "exec total [new]": 223653, "exec triage": 121718, "executor restarts": 507, "fault jobs": 0, "fuzzer jobs": 23, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 298431, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38668, "no exec duration": 42016000000, "no exec requests": 340, "pending": 28, "prog exec time": 153, "reproducing": 0, "rpc recv": 5599785812, "rpc sent": 1130694592, "signal": 288588, "smash jobs": 0, "triage jobs": 0, "vm output": 21318964, "vm restarts [base]": 28, "vm restarts [new]": 55 } 2025/08/16 13:17:43 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 13:17:54 runner 3 connected 2025/08/16 13:18:29 runner 1 connected 2025/08/16 13:18:40 runner 7 connected 2025/08/16 13:19:39 base crash: WARNING in xfrm_state_fini 2025/08/16 13:19:47 patched 
crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/16 13:20:21 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/16 13:20:36 runner 2 connected 2025/08/16 13:20:43 runner 4 connected 2025/08/16 13:21:17 runner 0 connected 2025/08/16 13:21:30 base crash: possible deadlock in ocfs2_init_acl 2025/08/16 13:21:32 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:21:39 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false] 2025/08/16 13:22:26 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/16 13:22:28 runner 3 connected 2025/08/16 13:22:36 runner 7 connected 2025/08/16 13:22:38 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/16 13:22:38 STAT { "buffer too small": 0, "candidate triage jobs": 25, "candidates": 36658, "comps overflows": 0, "corpus": 40094, "corpus [files]": 3506, "corpus [symbols]": 2651, "cover overflows": 33017, "coverage": 300170, "distributor delayed": 44788, "distributor undelayed": 44787, "distributor violated": 334, "exec candidate": 41068, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 95828, "exec total [new]": 251035, "exec triage": 129474, "executor restarts": 552, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 304101, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41068, "no exec duration": 42022000000, "no exec requests": 341, "pending": 28, "prog exec time": 304, "reproducing": 0, "rpc recv": 6032744560, "rpc sent": 1291386576, "signal": 294005, "smash jobs": 0, "triage jobs": 0, "vm 
output": 24390016, "vm restarts [base]": 32, "vm restarts [new]": 59 } 2025/08/16 13:22:43 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/16 13:23:09 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/16 13:23:09 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/16 13:23:20 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 13:23:24 runner 1 connected 2025/08/16 13:23:35 runner 3 connected 2025/08/16 13:23:39 runner 4 connected 2025/08/16 13:24:06 runner 1 connected 2025/08/16 13:24:17 runner 2 connected 2025/08/16 13:24:49 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/16 13:24:56 patched crashed: WARNING in io_ring_exit_work [need repro = false] 2025/08/16 13:25:32 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:25:35 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/16 13:25:47 runner 1 connected 2025/08/16 13:25:51 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/16 13:25:52 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/16 13:25:53 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/16 13:25:53 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/16 13:25:54 runner 5 connected 2025/08/16 13:25:57 base crash: possible deadlock in ocfs2_init_acl 2025/08/16 13:26:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:26:28 runner 6 connected 2025/08/16 13:26:31 runner 9 connected 2025/08/16 13:26:34 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/16 13:26:44 base crash: WARNING in xfrm_state_fini 2025/08/16 13:26:48 runner 2 connected 2025/08/16 13:26:48 runner 3 connected 2025/08/16 13:26:49 runner 1 connected 2025/08/16 
13:26:50 runner 7 connected 2025/08/16 13:26:54 runner 3 connected 2025/08/16 13:27:24 runner 4 connected 2025/08/16 13:27:27 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:27:31 runner 1 connected 2025/08/16 13:27:38 STAT { "buffer too small": 0, "candidate triage jobs": 15, "candidates": 35659, "comps overflows": 0, "corpus": 41012, "corpus [files]": 3649, "corpus [symbols]": 2756, "cover overflows": 35965, "coverage": 302186, "distributor delayed": 45985, "distributor undelayed": 45985, "distributor violated": 345, "exec candidate": 42067, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 103076, "exec total [new]": 268466, "exec triage": 132670, "executor restarts": 613, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 306185, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42038, "no exec duration": 43240000000, "no exec requests": 344, "pending": 29, "prog exec time": 281, "reproducing": 0, "rpc recv": 6625485040, "rpc sent": 1424606416, "signal": 296044, "smash jobs": 0, "triage jobs": 0, "vm output": 26285285, "vm restarts [base]": 36, "vm restarts [new]": 71 } 2025/08/16 13:27:41 runner 0 connected 2025/08/16 13:27:54 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/16 13:28:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/16 13:28:26 runner 3 connected 2025/08/16 13:28:41 base crash: possible deadlock in ocfs2_init_acl 2025/08/16 13:28:49 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:28:53 runner 2 connected 2025/08/16 13:29:14 runner 9 
connected 2025/08/16 13:29:38 runner 1 connected 2025/08/16 13:29:41 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/16 13:29:46 runner 8 connected 2025/08/16 13:30:38 runner 3 connected 2025/08/16 13:30:41 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/16 13:31:38 new: boot error: can't ssh into the instance 2025/08/16 13:31:38 runner 1 connected 2025/08/16 13:32:36 runner 0 connected 2025/08/16 13:32:38 STAT { "buffer too small": 0, "candidate triage jobs": 10, "candidates": 32250, "comps overflows": 0, "corpus": 42123, "corpus [files]": 3798, "corpus [symbols]": 2874, "cover overflows": 40407, "coverage": 305156, "distributor delayed": 47229, "distributor undelayed": 47229, "distributor violated": 345, "exec candidate": 45476, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 23, "exec seeds": 0, "exec smash": 0, "exec total [base]": 113091, "exec total [new]": 296408, "exec triage": 136688, "executor restarts": 651, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 309359, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43234, "no exec duration": 43240000000, "no exec requests": 344, "pending": 29, "prog exec time": 262, "reproducing": 0, "rpc recv": 7021845228, "rpc sent": 1578803624, "signal": 298675, "smash jobs": 0, "triage jobs": 0, "vm output": 28626054, "vm restarts [base]": 40, "vm restarts [new]": 76 } 2025/08/16 13:33:18 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 13:33:19 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false] 2025/08/16 13:33:29 patched crashed: possible deadlock in 
ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 13:33:53 base crash: possible deadlock in ocfs2_init_acl 2025/08/16 13:33:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:34:16 runner 7 connected 2025/08/16 13:34:17 runner 9 connected 2025/08/16 13:34:26 runner 4 connected 2025/08/16 13:34:32 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:34:51 runner 1 connected 2025/08/16 13:34:53 runner 3 connected 2025/08/16 13:35:08 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:35:29 runner 5 connected 2025/08/16 13:35:50 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/16 13:36:04 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/16 13:36:05 runner 2 connected 2025/08/16 13:36:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/16 13:36:24 base crash: lost connection to test machine 2025/08/16 13:36:47 runner 6 connected 2025/08/16 13:37:01 runner 9 connected 2025/08/16 13:37:05 base crash: possible deadlock in ocfs2_init_acl 2025/08/16 13:37:19 runner 7 connected 2025/08/16 13:37:21 runner 2 connected 2025/08/16 13:37:38 STAT { "buffer too small": 0, "candidate triage jobs": 19, "candidates": 11080, "comps overflows": 0, "corpus": 42834, "corpus [files]": 3907, "corpus [symbols]": 2955, "cover overflows": 44221, "coverage": 306607, "distributor delayed": 47993, "distributor undelayed": 47993, "distributor violated": 345, "exec candidate": 66646, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 26, "exec seeds": 0, "exec smash": 0, "exec total [base]": 123277, "exec total [new]": 320086, "exec triage": 139165, "executor restarts": 706, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 310864, 
"minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44004, "no exec duration": 43786000000, "no exec requests": 347, "pending": 29, "prog exec time": 400, "reproducing": 0, "rpc recv": 7480352648, "rpc sent": 1728621528, "signal": 300050, "smash jobs": 0, "triage jobs": 0, "vm output": 31022451, "vm restarts [base]": 42, "vm restarts [new]": 85 } 2025/08/16 13:38:02 runner 3 connected 2025/08/16 13:38:36 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/16 13:38:38 triaged 93.1% of the corpus 2025/08/16 13:38:38 starting bug reproductions 2025/08/16 13:38:38 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/16 13:38:38 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "INFO: task hung in v9fs_evict_inode" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "INFO: task hung in v9fs_evict_inode" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "INFO: task hung in v9fs_evict_inode" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 
2025/08/16 13:38:38 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in io_ring_exit_work" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in io_ring_exit_work" aborted: it's no longer needed 2025/08/16 13:38:38 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/16 13:38:38 start reproducing 'possible deadlock in ocfs2_xattr_set' 2025/08/16 13:38:38 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/16 13:38:38 start reproducing 'possible deadlock in attr_data_get_block' 2025/08/16 13:38:52 base crash: lost connection to test machine 2025/08/16 13:39:34 runner 0 connected 2025/08/16 13:39:35 runner 5 connected 2025/08/16 13:39:35 runner 1 connected 2025/08/16 13:39:35 runner 2 connected 2025/08/16 13:39:36 runner 3 connected 2025/08/16 13:39:36 runner 4 connected 2025/08/16 13:39:36 runner 0 connected 2025/08/16 13:39:49 runner 2 connected 
2025/08/16 13:40:04 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:41:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:42:04 base crash: WARNING in xfrm_state_fini 2025/08/16 13:42:21 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:42:38 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 0, "comps overflows": 3, "corpus": 43388, "corpus [files]": 3977, "corpus [symbols]": 3001, "cover overflows": 46535, "coverage": 307717, "distributor delayed": 48596, "distributor undelayed": 48595, "distributor violated": 346, "exec candidate": 77726, "exec collide": 258, "exec fuzz": 469, "exec gen": 21, "exec hints": 18, "exec inject": 0, "exec minimize": 140, "exec retries": 26, "exec seeds": 14, "exec smash": 27, "exec total [base]": 132455, "exec total [new]": 333999, "exec triage": 141041, "executor restarts": 749, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 6, "max signal": 312103, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 114, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44593, "no exec duration": 43974000000, "no exec requests": 349, "pending": 2, "prog exec time": 499, "reproducing": 3, "rpc recv": 7835621268, "rpc sent": 1850306832, "signal": 301162, "smash jobs": 7, "triage jobs": 11, "vm output": 33715027, "vm restarts [base]": 45, "vm 
restarts [new]": 91 } 2025/08/16 13:42:52 runner 4 connected 2025/08/16 13:43:03 runner 1 connected 2025/08/16 13:43:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:43:52 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:44:36 runner 3 connected 2025/08/16 13:44:39 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:45:02 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:45:22 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:45:31 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:45:37 runner 2 connected 2025/08/16 13:46:01 runner 4 connected 2025/08/16 13:46:17 base crash: lost connection to test machine 2025/08/16 13:46:25 patched crashed: WARNING in hfsplus_bnode_create [need repro = true] 2025/08/16 13:46:25 scheduled a reproduction of 'WARNING in hfsplus_bnode_create' 2025/08/16 13:46:25 start reproducing 'WARNING in hfsplus_bnode_create' 2025/08/16 13:46:48 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:46:59 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:47:02 base crash: lost connection to test machine 2025/08/16 13:47:14 runner 3 connected 2025/08/16 13:47:19 patched crashed: WARNING in 
xfrm_state_fini [need repro = false] 2025/08/16 13:47:31 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:47:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 39, "corpus": 43434, "corpus [files]": 3983, "corpus [symbols]": 3007, "cover overflows": 48774, "coverage": 307868, "distributor delayed": 48732, "distributor undelayed": 48722, "distributor violated": 348, "exec candidate": 77726, "exec collide": 1484, "exec fuzz": 2631, "exec gen": 140, "exec hints": 1511, "exec inject": 0, "exec minimize": 1077, "exec retries": 26, "exec seeds": 155, "exec smash": 1125, "exec total [base]": 140679, "exec total [new]": 341391, "exec triage": 141262, "executor restarts": 784, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 10, "max signal": 312291, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 594, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44672, "no exec duration": 57960000000, "no exec requests": 384, "pending": 2, "prog exec time": 0, "reproducing": 4, "rpc recv": 8065015732, "rpc sent": 2166455824, "signal": 301304, "smash jobs": 9, "triage jobs": 11, "vm output": 35758066, "vm restarts [base]": 47, "vm restarts [new]": 95 } 2025/08/16 13:47:47 runner 4 connected 2025/08/16 13:47:51 runner 2 connected 2025/08/16 13:47:56 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:48:07 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: 
fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:48:17 base crash: WARNING in hfsplus_bnode_create 2025/08/16 13:49:07 runner 3 connected 2025/08/16 13:49:21 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:49:30 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:50:09 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:50:13 new: boot error: can't ssh into the instance 2025/08/16 13:50:46 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:50:59 runner 4 connected 2025/08/16 13:51:27 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:51:59 base crash: lost connection to test machine 2025/08/16 13:52:26 repro finished 'possible deadlock in attr_data_get_block', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 13:52:26 failed repro for "possible deadlock in attr_data_get_block", err=%!s() 2025/08/16 13:52:26 "possible deadlock in attr_data_get_block": saved crash log into 1755352346.crash.log 2025/08/16 13:52:26 "possible deadlock in attr_data_get_block": saved repro log into 1755352346.repro.log 2025/08/16 
13:52:29 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 13:52:29 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/16 13:52:29 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/16 13:52:29 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755352349.crash.log 2025/08/16 13:52:29 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 1755352349.repro.log 2025/08/16 13:52:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 53, "corpus": 43439, "corpus [files]": 3983, "corpus [symbols]": 3007, "cover overflows": 49182, "coverage": 307883, "distributor delayed": 48748, "distributor undelayed": 48732, "distributor violated": 355, "exec candidate": 77726, "exec collide": 1639, "exec fuzz": 2892, "exec gen": 159, "exec hints": 1730, "exec inject": 0, "exec minimize": 1188, "exec retries": 26, "exec seeds": 172, "exec smash": 1323, "exec total [base]": 143900, "exec total [new]": 342388, "exec triage": 141275, "executor restarts": 797, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 9, "max signal": 312331, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 654, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44687, "no exec duration": 407126000000, "no exec requests": 1090, "pending": 1, "prog exec time": 708, "reproducing": 3, "rpc recv": 8195908356, "rpc sent": 2252717896, "signal": 301318, "smash jobs": 5, "triage jobs": 16, "vm output": 38302032, "vm restarts [base]": 49, "vm restarts [new]": 97 } 2025/08/16 13:52:47 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to 
start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:52:48 runner 3 connected 2025/08/16 13:53:17 runner 0 connected 2025/08/16 13:53:21 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:53:32 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:54:01 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:54:10 runner 4 connected 2025/08/16 13:54:47 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:54:48 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:55:19 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:55:37 runner 4 connected 2025/08/16 13:55:37 new: boot error: can't ssh into the instance 2025/08/16 13:56:01 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:56:11 new: 
boot error: can't ssh into the instance 2025/08/16 13:56:25 runner 5 connected 2025/08/16 13:56:39 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:57:21 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:57:25 new: boot error: can't ssh into the instance 2025/08/16 13:57:25 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:57:36 new: boot error: can't ssh into the instance 2025/08/16 13:57:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 68, "corpus": 43466, "corpus [files]": 3990, "corpus [symbols]": 3012, "cover overflows": 49830, "coverage": 307994, "distributor delayed": 48793, "distributor undelayed": 48781, "distributor violated": 358, "exec candidate": 77726, "exec collide": 1770, "exec fuzz": 3183, "exec gen": 172, "exec hints": 1888, "exec inject": 0, "exec minimize": 1802, "exec retries": 26, "exec seeds": 247, "exec smash": 1525, "exec total [base]": 145494, "exec total [new]": 343985, "exec triage": 141383, "executor restarts": 821, "fault jobs": 0, "fuzzer jobs": 66, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 23, "max signal": 312469, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1038, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44724, "no exec duration": 1063887000000, "no exec requests": 2571, "pending": 1, "prog exec time": 740, "reproducing": 3, "rpc recv": 
8381122052, "rpc sent": 2340466672, "signal": 301389, "smash jobs": 29, "triage jobs": 14, "vm output": 39683147, "vm restarts [base]": 50, "vm restarts [new]": 101 } 2025/08/16 13:57:52 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:57:58 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:58:10 runner 5 connected 2025/08/16 13:58:14 runner 3 connected 2025/08/16 13:58:27 runner 2 connected 2025/08/16 13:58:59 base crash: lost connection to test machine 2025/08/16 13:59:01 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 13:59:11 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:59:17 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 13:59:17 repro finished 'WARNING in hfsplus_bnode_create', repro=true crepro=false desc='WARNING in hfsplus_bnode_create' hub=false from_dashboard=false 2025/08/16 13:59:17 found repro for "WARNING in hfsplus_bnode_create" (orig title: "-SAME-", reliability: 1), took 12.23 minutes 2025/08/16 13:59:17 "WARNING in hfsplus_bnode_create": saved crash log into 1755352757.crash.log 2025/08/16 13:59:17 "WARNING in 
hfsplus_bnode_create": saved repro log into 1755352757.repro.log 2025/08/16 13:59:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 13:59:56 runner 3 connected 2025/08/16 13:59:58 runner 2 connected 2025/08/16 14:00:29 runner 5 connected 2025/08/16 14:00:37 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:00:42 base crash: WARNING in dbAdjTree 2025/08/16 14:00:48 attempt #0 to run "WARNING in hfsplus_bnode_create" on base: crashed with WARNING in hfsplus_bnode_create 2025/08/16 14:00:48 crashes both: WARNING in hfsplus_bnode_create / WARNING in hfsplus_bnode_create 2025/08/16 14:00:54 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/08/16 14:01:03 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5274: connect: connection refused 2025/08/16 14:01:03 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:5274: connect: connection refused 2025/08/16 14:01:13 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:01:39 runner 3 connected 2025/08/16 14:01:43 runner 2 connected 2025/08/16 14:01:49 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:01:58 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:02:01 runner 3 connected 2025/08/16 14:02:30 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl 
--git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:02:30 repro finished 'possible deadlock in ocfs2_xattr_set', repro=true crepro=false desc='possible deadlock in ocfs2_reserve_suballoc_bits' hub=false from_dashboard=false 2025/08/16 14:02:30 found repro for "possible deadlock in ocfs2_reserve_suballoc_bits" (orig title: "possible deadlock in ocfs2_xattr_set", reliability: 1), took 23.85 minutes 2025/08/16 14:02:30 start reproducing 'possible deadlock in ocfs2_xattr_set' 2025/08/16 14:02:30 "possible deadlock in ocfs2_reserve_suballoc_bits": saved crash log into 1755352950.crash.log 2025/08/16 14:02:30 "possible deadlock in ocfs2_reserve_suballoc_bits": saved repro log into 1755352950.repro.log 2025/08/16 14:02:31 new: boot error: can't ssh into the instance 2025/08/16 14:02:38 runner 5 connected 2025/08/16 14:02:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 102, "corpus": 43501, "corpus [files]": 3994, "corpus [symbols]": 3015, "cover overflows": 51513, "coverage": 308135, "distributor delayed": 48950, "distributor undelayed": 48949, "distributor violated": 359, "exec candidate": 77726, "exec collide": 2305, "exec fuzz": 4145, "exec gen": 214, "exec hints": 2373, "exec inject": 0, "exec minimize": 2721, "exec retries": 26, "exec seeds": 349, "exec smash": 2477, "exec total [base]": 149733, "exec total [new]": 348221, "exec triage": 141616, "executor restarts": 900, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 11, "max signal": 312773, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1640, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44796, "no exec duration": 1373107000000, "no exec requests": 3491, "pending": 0, "prog exec time": 736, 
"reproducing": 2, "rpc recv": 8708278600, "rpc sent": 2566112256, "signal": 301473, "smash jobs": 30, "triage jobs": 7, "vm output": 41921631, "vm restarts [base]": 52, "vm restarts [new]": 109 } 2025/08/16 14:02:38 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/08/16 14:02:38 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/16 14:02:38 start reproducing 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/16 14:02:45 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47148: connect: connection refused 2025/08/16 14:02:45 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:47148: connect: connection refused 2025/08/16 14:02:55 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:02:58 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39391: connect: connection refused 2025/08/16 14:02:58 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:39391: connect: connection refused 2025/08/16 14:03:08 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:03:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/16 14:03:21 runner 1 connected 2025/08/16 14:03:27 runner 4 connected 2025/08/16 14:03:33 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:23042: connect: connection refused 2025/08/16 14:03:33 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:23042: connect: connection refused 2025/08/16 14:03:43 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:03:44 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19776: connect: connection refused 2025/08/16 14:03:44 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:19776: connect: connection 
refused 2025/08/16 14:03:50 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:36589: connect: connection refused 2025/08/16 14:03:50 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:36589: connect: connection refused 2025/08/16 14:03:51 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:12814: connect: connection refused 2025/08/16 14:03:51 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:12814: connect: connection refused 2025/08/16 14:03:54 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:03:57 runner 2 connected 2025/08/16 14:04:00 base crash: lost connection to test machine 2025/08/16 14:04:01 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:04:07 runner 5 connected 2025/08/16 14:04:28 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25536: connect: connection refused 2025/08/16 14:04:28 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:25536: connect: connection refused 2025/08/16 14:04:32 runner 3 connected 2025/08/16 14:04:38 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:04:43 runner 1 connected 2025/08/16 14:04:49 runner 3 connected 2025/08/16 14:04:52 runner 4 connected 2025/08/16 14:04:55 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:54643: connect: connection refused 2025/08/16 14:04:55 VM-3 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:54643: connect: connection refused 2025/08/16 14:05:05 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:05:19 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:10304: connect: connection refused 2025/08/16 14:05:19 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:10304: connect: 
connection refused 2025/08/16 14:05:29 runner 2 connected 2025/08/16 14:05:29 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:05:42 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45438: connect: connection refused 2025/08/16 14:05:42 VM-5 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45438: connect: connection refused 2025/08/16 14:05:52 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:05:54 runner 3 connected 2025/08/16 14:06:18 runner 4 connected 2025/08/16 14:06:49 runner 5 connected 2025/08/16 14:07:31 new: boot error: can't ssh into the instance 2025/08/16 14:07:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 164, "corpus": 43531, "corpus [files]": 4002, "corpus [symbols]": 3022, "cover overflows": 52537, "coverage": 308260, "distributor delayed": 49082, "distributor undelayed": 49082, "distributor violated": 359, "exec candidate": 77726, "exec collide": 2565, "exec fuzz": 4672, "exec gen": 235, "exec hints": 2593, "exec inject": 0, "exec minimize": 3306, "exec retries": 27, "exec seeds": 427, "exec smash": 2988, "exec total [base]": 152071, "exec total [new]": 350608, "exec triage": 141778, "executor restarts": 966, "fault jobs": 0, "fuzzer jobs": 67, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 17, "max signal": 313123, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1989, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44866, "no exec duration": 1832696000000, "no exec requests": 4694, "pending": 0, "prog exec time": 887, "reproducing": 3, "rpc recv": 9171197212, "rpc sent": 2727897424, "signal": 301575, "smash jobs": 31, "triage jobs": 19, "vm output": 44286281, "vm restarts [base]": 53, "vm restarts [new]": 120 } 
2025/08/16 14:07:51 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/16 14:07:52 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/16 14:07:52 start reproducing 'kernel BUG in jfs_evict_inode' 2025/08/16 14:08:28 runner 6 connected 2025/08/16 14:09:07 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:10:54 base: boot error: can't ssh into the instance 2025/08/16 14:11:07 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13984: connect: connection refused 2025/08/16 14:11:07 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13984: connect: connection refused 2025/08/16 14:12:01 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/16 14:12:01 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/16 14:12:11 attempt #0 to run "possible deadlock in ocfs2_reserve_suballoc_bits" on base: crashed with possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/16 14:12:11 crashes both: possible deadlock in ocfs2_reserve_suballoc_bits / possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/16 14:12:12 new: boot error: can't ssh into the instance 2025/08/16 14:12:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 242, "corpus": 43555, "corpus [files]": 4009, "corpus [symbols]": 3028, "cover overflows": 54341, "coverage": 308352, "distributor delayed": 49210, "distributor undelayed": 49205, "distributor violated": 380, "exec candidate": 77726, "exec collide": 2802, "exec fuzz": 5168, "exec gen": 260, "exec hints": 2782, "exec inject": 0, "exec minimize": 4030, "exec retries": 27, "exec seeds": 506, "exec smash": 3475, "exec total [base]": 154510, "exec total [new]": 353034, "exec triage": 141971, "executor restarts": 999, "fault jobs": 0, "fuzzer jobs": 66, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 18, "max signal": 313767, 
"minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2388, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44925, "no exec duration": 2367854000000, "no exec requests": 6259, "pending": 1, "prog exec time": 1131, "reproducing": 4, "rpc recv": 9273898076, "rpc sent": 2875172800, "signal": 301663, "smash jobs": 37, "triage jobs": 11, "vm output": 47278232, "vm restarts [base]": 53, "vm restarts [new]": 121 } 2025/08/16 14:12:52 runner 3 connected 2025/08/16 14:13:01 runner 0 connected 2025/08/16 14:13:19 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:14:15 runner 6 connected 2025/08/16 14:15:08 base crash: WARNING in __udf_add_aext 2025/08/16 14:15:12 patched crashed: WARNING in __udf_add_aext [need repro = false] 2025/08/16 14:15:49 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/16 14:16:05 runner 3 connected 2025/08/16 14:16:07 repro finished 'possible deadlock in ocfs2_xattr_set', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 14:16:07 failed repro for "possible deadlock in ocfs2_xattr_set", err=%!s() 2025/08/16 14:16:07 start reproducing 'possible deadlock in ocfs2_xattr_set' 2025/08/16 14:16:07 "possible deadlock in ocfs2_xattr_set": saved crash log into 1755353767.crash.log 2025/08/16 14:16:07 "possible deadlock in ocfs2_xattr_set": saved repro log into 1755353767.repro.log 2025/08/16 14:16:08 runner 6 connected 2025/08/16 14:16:37 runner 5 connected 2025/08/16 14:17:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 291, "corpus": 43570, "corpus [files]": 4012, "corpus [symbols]": 3031, "cover overflows": 55415, "coverage": 308396, "distributor delayed": 49277, "distributor undelayed": 49277, "distributor violated": 383, "exec candidate": 77726, "exec collide": 2982, "exec fuzz": 5567, "exec 
gen": 285, "exec hints": 2950, "exec inject": 0, "exec minimize": 4416, "exec retries": 27, "exec seeds": 546, "exec smash": 3873, "exec total [base]": 156219, "exec total [new]": 354746, "exec triage": 142080, "executor restarts": 1038, "fault jobs": 0, "fuzzer jobs": 55, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 16, "max signal": 313963, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2603, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44960, "no exec duration": 2939459000000, "no exec requests": 7696, "pending": 0, "prog exec time": 394, "reproducing": 4, "rpc recv": 9498874700, "rpc sent": 2990129064, "signal": 301736, "smash jobs": 31, "triage jobs": 8, "vm output": 50483310, "vm restarts [base]": 55, "vm restarts [new]": 125 } 2025/08/16 14:19:12 new: boot error: can't ssh into the instance 2025/08/16 14:19:58 new: boot error: can't ssh into the instance 2025/08/16 14:20:01 runner 4 connected 2025/08/16 14:20:05 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:20:34 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42130: connect: connection refused 2025/08/16 14:20:34 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42130: connect: connection refused 2025/08/16 14:22:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 338, "corpus": 43594, "corpus [files]": 4020, "corpus [symbols]": 3038, "cover overflows": 56542, "coverage": 308449, "distributor delayed": 49363, "distributor undelayed": 49357, "distributor violated": 385, "exec candidate": 77726, "exec collide": 3242, "exec fuzz": 6110, "exec gen": 301, "exec hints": 3213, "exec inject": 0, "exec minimize": 4919, "exec retries": 27, "exec seeds": 628, "exec smash": 4346, "exec total [base]": 158495, "exec 
total [new]": 357019, "exec triage": 142211, "executor restarts": 1087, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 13, "max signal": 314066, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2967, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45013, "no exec duration": 3680524000000, "no exec requests": 9637, "pending": 0, "prog exec time": 643, "reproducing": 4, "rpc recv": 9578240156, "rpc sent": 3131758448, "signal": 301786, "smash jobs": 24, "triage jobs": 14, "vm output": 53556374, "vm restarts [base]": 55, "vm restarts [new]": 126 } 2025/08/16 14:24:23 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:25:12 runner 4 connected 2025/08/16 14:25:35 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:38091: connect: connection refused 2025/08/16 14:25:35 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:38091: connect: connection refused 2025/08/16 14:26:46 base crash: WARNING in xfrm_state_fini 2025/08/16 14:27:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34910: connect: connection refused 2025/08/16 14:27:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34910: connect: connection refused 2025/08/16 14:27:10 base crash: lost connection to test machine 2025/08/16 14:27:35 runner 1 connected 2025/08/16 14:27:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 379, "corpus": 43619, "corpus [files]": 4028, "corpus [symbols]": 3043, "cover overflows": 57538, "coverage": 308513, "distributor delayed": 49439, "distributor undelayed": 49439, "distributor violated": 391, "exec candidate": 77726, "exec collide": 3500, "exec fuzz": 6583, "exec gen": 334, "exec hints": 3427, 
"exec inject": 0, "exec minimize": 5730, "exec retries": 27, "exec seeds": 696, "exec smash": 4829, "exec total [base]": 160947, "exec total [new]": 359535, "exec triage": 142387, "executor restarts": 1110, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 17, "max signal": 314353, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3436, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45069, "no exec duration": 4232059000000, "no exec requests": 11135, "pending": 0, "prog exec time": 1002, "reproducing": 4, "rpc recv": 9659776348, "rpc sent": 3303713176, "signal": 301843, "smash jobs": 33, "triage jobs": 8, "vm output": 55684826, "vm restarts [base]": 56, "vm restarts [new]": 127 } 2025/08/16 14:27:57 new: boot error: can't ssh into the instance 2025/08/16 14:28:00 runner 0 connected 2025/08/16 14:28:08 new: boot error: can't ssh into the instance 2025/08/16 14:28:56 new: boot error: can't ssh into the instance 2025/08/16 14:29:08 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:3407: connect: connection refused 2025/08/16 14:29:08 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:3407: connect: connection refused 2025/08/16 14:30:04 new: boot error: can't ssh into the instance 2025/08/16 14:30:10 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/16 14:30:11 new: boot error: can't ssh into the instance 2025/08/16 14:31:07 runner 5 connected 2025/08/16 14:31:08 runner 6 connected 2025/08/16 14:31:13 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42038: connect: connection refused 2025/08/16 14:31:13 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:42038: connect: connection refused 2025/08/16 14:32:19 patched 
crashed: WARNING in rate_control_rate_init [need repro = true] 2025/08/16 14:32:19 scheduled a reproduction of 'WARNING in rate_control_rate_init' 2025/08/16 14:32:19 start reproducing 'WARNING in rate_control_rate_init' 2025/08/16 14:32:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 434, "corpus": 43637, "corpus [files]": 4034, "corpus [symbols]": 3048, "cover overflows": 59026, "coverage": 308630, "distributor delayed": 49529, "distributor undelayed": 49527, "distributor violated": 391, "exec candidate": 77726, "exec collide": 3872, "exec fuzz": 7272, "exec gen": 370, "exec hints": 3806, "exec inject": 0, "exec minimize": 6439, "exec retries": 27, "exec seeds": 756, "exec smash": 5484, "exec total [base]": 164077, "exec total [new]": 362602, "exec triage": 142552, "executor restarts": 1135, "fault jobs": 0, "fuzzer jobs": 42, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 17, "max signal": 314600, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3782, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45126, "no exec duration": 4751534000000, "no exec requests": 12594, "pending": 0, "prog exec time": 803, "reproducing": 5, "rpc recv": 9825905388, "rpc sent": 3522130608, "signal": 301914, "smash jobs": 15, "triage jobs": 10, "vm output": 58220107, "vm restarts [base]": 57, "vm restarts [new]": 129 } 2025/08/16 14:34:37 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/08/16 14:34:37 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/16 14:35:34 runner 4 connected 2025/08/16 14:35:58 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/16 14:36:54 runner 5 connected 2025/08/16 14:37:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps 
overflows": 489, "corpus": 43664, "corpus [files]": 4038, "corpus [symbols]": 3051, "cover overflows": 60281, "coverage": 308688, "distributor delayed": 49600, "distributor undelayed": 49600, "distributor violated": 398, "exec candidate": 77726, "exec collide": 4201, "exec fuzz": 7888, "exec gen": 422, "exec hints": 4277, "exec inject": 0, "exec minimize": 6969, "exec retries": 27, "exec seeds": 840, "exec smash": 5931, "exec total [base]": 166722, "exec total [new]": 365251, "exec triage": 142671, "executor restarts": 1165, "fault jobs": 0, "fuzzer jobs": 47, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 19, "max signal": 314689, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4098, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45170, "no exec duration": 5560131000000, "no exec requests": 14810, "pending": 1, "prog exec time": 685, "reproducing": 5, "rpc recv": 9928781896, "rpc sent": 3690347792, "signal": 301968, "smash jobs": 22, "triage jobs": 6, "vm output": 62033700, "vm restarts [base]": 57, "vm restarts [new]": 131 } 2025/08/16 14:37:55 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8078: connect: connection refused 2025/08/16 14:37:55 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8078: connect: connection refused 2025/08/16 14:38:19 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/16 14:39:32 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24967: connect: connection refused 2025/08/16 14:39:32 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:24967: connect: connection refused 2025/08/16 14:40:30 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:41:28 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/16 14:42:14 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/08/16 14:42:14 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/08/16 14:42:14 start reproducing 'INFO: task hung in __iterate_supers' 2025/08/16 14:42:17 runner 6 connected 2025/08/16 14:42:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 502, "corpus": 43679, "corpus [files]": 4040, "corpus [symbols]": 3052, "cover overflows": 61133, "coverage": 308715, "distributor delayed": 49645, "distributor undelayed": 49635, "distributor violated": 408, "exec candidate": 77726, "exec collide": 4485, "exec fuzz": 8389, "exec gen": 451, "exec hints": 4586, "exec inject": 0, "exec minimize": 7356, "exec retries": 27, "exec seeds": 881, "exec smash": 6392, "exec total [base]": 168805, "exec total [new]": 367336, "exec triage": 142751, "executor restarts": 1185, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 1, "hints jobs": 19, "max signal": 314769, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4305, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45204, "no exec duration": 6277355000000, "no exec requests": 16654, "pending": 1, "prog exec time": 752, "reproducing": 6, "rpc recv": 9979133020, "rpc sent": 3805557152, "signal": 302020, "smash jobs": 17, "triage jobs": 12, "vm output": 65039741, "vm restarts [base]": 57, "vm restarts [new]": 132 } 2025/08/16 14:43:03 runner 5 connected 2025/08/16 14:44:26 new: boot error: can't ssh into the instance 2025/08/16 14:45:21 new: boot error: can't ssh 
into the instance 2025/08/16 14:47:03 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:65432: connect: connection refused 2025/08/16 14:47:03 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:65432: connect: connection refused 2025/08/16 14:47:16 base crash: WARNING in xfrm_state_fini 2025/08/16 14:47:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 540, "corpus": 43702, "corpus [files]": 4045, "corpus [symbols]": 3056, "cover overflows": 62101, "coverage": 308791, "distributor delayed": 49673, "distributor undelayed": 49673, "distributor violated": 416, "exec candidate": 77726, "exec collide": 4754, "exec fuzz": 8873, "exec gen": 480, "exec hints": 4862, "exec inject": 0, "exec minimize": 8084, "exec retries": 27, "exec seeds": 962, "exec smash": 6819, "exec total [base]": 171246, "exec total [new]": 369781, "exec triage": 142897, "executor restarts": 1213, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 19, "max signal": 314906, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4750, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45246, "no exec duration": 7073834000000, "no exec requests": 18825, "pending": 1, "prog exec time": 752, "reproducing": 6, "rpc recv": 10056598992, "rpc sent": 3928795000, "signal": 302232, "smash jobs": 32, "triage jobs": 7, "vm output": 69332766, "vm restarts [base]": 57, "vm restarts [new]": 133 } 2025/08/16 14:48:00 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/16 14:48:02 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 14:48:06 runner 3 connected 2025/08/16 14:48:25 new: boot error: can't ssh into the instance 2025/08/16 14:48:27 reproducing crash 'WARNING in 
rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:48:40 runner 2 connected 2025/08/16 14:48:57 runner 0 connected 2025/08/16 14:49:39 base crash: KASAN: slab-out-of-bounds Write in bch2_dirent_init_name 2025/08/16 14:50:08 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:50:30 runner 2 connected 2025/08/16 14:51:01 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:51:43 patched crashed: KASAN: slab-out-of-bounds Write in bch2_dirent_init_name [need repro = false] 2025/08/16 14:51:45 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:52:31 runner 5 connected 2025/08/16 14:52:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 560, "corpus": 43709, "corpus [files]": 4045, "corpus [symbols]": 3056, "cover overflows": 62612, "coverage": 308859, "distributor delayed": 49681, "distributor undelayed": 49674, "distributor violated": 416, "exec candidate": 77726, "exec collide": 4882, "exec fuzz": 9108, "exec gen": 497, "exec hints": 4980, "exec inject": 0, "exec minimize": 8186, "exec retries": 27, "exec seeds": 985, "exec smash": 7057, "exec total [base]": 172113, 
"exec total [new]": 370645, "exec triage": 142904, "executor restarts": 1220, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 16, "max signal": 314932, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4784, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45254, "no exec duration": 7312610000000, "no exec requests": 19325, "pending": 1, "prog exec time": 0, "reproducing": 6, "rpc recv": 10219364656, "rpc sent": 3978539512, "signal": 302280, "smash jobs": 27, "triage jobs": 7, "vm output": 72129868, "vm restarts [base]": 61, "vm restarts [new]": 134 } 2025/08/16 14:54:56 new: boot error: can't ssh into the instance 2025/08/16 14:56:49 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/16 14:57:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 560, "corpus": 43712, "corpus [files]": 4046, "corpus [symbols]": 3057, "cover overflows": 62963, "coverage": 308866, "distributor delayed": 49691, "distributor undelayed": 49679, "distributor violated": 421, "exec candidate": 77726, "exec collide": 4966, "exec fuzz": 9267, "exec gen": 506, "exec hints": 5090, "exec inject": 0, "exec minimize": 8405, "exec retries": 27, "exec seeds": 985, "exec smash": 7198, "exec total [base]": 172873, "exec total [new]": 371400, "exec triage": 142935, "executor restarts": 1231, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 16, "max signal": 314966, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4912, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45269, "no exec duration": 7767421000000, "no exec requests": 20080, 
"pending": 1, "prog exec time": 0, "reproducing": 6, "rpc recv": 10225443876, "rpc sent": 4031069912, "signal": 302287, "smash jobs": 17, "triage jobs": 16, "vm output": 75512526, "vm restarts [base]": 61, "vm restarts [new]": 134 } 2025/08/16 14:57:38 runner 5 connected 2025/08/16 14:58:07 new: boot error: can't ssh into the instance 2025/08/16 14:58:27 base crash: lost connection to test machine 2025/08/16 14:58:27 patched crashed: lost connection to test machine [need repro = false] 2025/08/16 14:58:57 runner 6 connected 2025/08/16 14:58:59 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 14:59:19 runner 3 connected 2025/08/16 14:59:19 runner 5 connected 2025/08/16 15:00:22 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:01:37 new: boot error: can't ssh into the instance 2025/08/16 15:02:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 591, "corpus": 43736, "corpus [files]": 4050, "corpus [symbols]": 3061, "cover overflows": 63328, "coverage": 308932, "distributor delayed": 49710, "distributor undelayed": 49710, "distributor violated": 421, "exec candidate": 77726, "exec collide": 5092, "exec fuzz": 9511, "exec gen": 521, "exec hints": 5205, "exec inject": 0, "exec minimize": 8886, "exec retries": 27, "exec seeds": 1060, "exec smash": 7392, "exec total [base]": 174199, "exec total [new]": 372735, "exec triage": 143013, "executor restarts": 1247, "fault jobs": 0, "fuzzer jobs": 71, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 28, "max signal": 315015, 
"minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5205, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45291, "no exec duration": 8221339000000, "no exec requests": 21198, "pending": 1, "prog exec time": 1083, "reproducing": 6, "rpc recv": 10383480576, "rpc sent": 4121850888, "signal": 302345, "smash jobs": 36, "triage jobs": 7, "vm output": 78631821, "vm restarts [base]": 62, "vm restarts [new]": 137 } 2025/08/16 15:03:19 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/16 15:05:08 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/08/16 15:05:58 runner 6 connected 2025/08/16 15:06:20 patched crashed: INFO: task hung in __closure_sync [need repro = true] 2025/08/16 15:06:20 scheduled a reproduction of 'INFO: task hung in __closure_sync' 2025/08/16 15:06:20 start reproducing 'INFO: task hung in __closure_sync' 2025/08/16 15:07:02 new: boot error: can't ssh into the instance 2025/08/16 15:07:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 615, "corpus": 43750, "corpus [files]": 4056, "corpus [symbols]": 3066, "cover overflows": 63657, "coverage": 308962, "distributor delayed": 49732, "distributor undelayed": 49732, "distributor violated": 421, "exec candidate": 77726, "exec collide": 5239, "exec fuzz": 9759, "exec gen": 544, "exec hints": 5337, "exec inject": 0, "exec minimize": 9098, "exec retries": 27, "exec seeds": 1102, "exec smash": 7639, "exec total [base]": 175314, "exec total [new]": 373852, "exec triage": 143079, "executor restarts": 1259, "fault jobs": 0, "fuzzer jobs": 78, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 30, "max signal": 315119, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5325, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: 
props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45316, "no exec duration": 8564738000000, "no exec requests": 21989, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 10438522532, "rpc sent": 4240379072, "signal": 302374, "smash jobs": 40, "triage jobs": 8, "vm output": 82246787, "vm restarts [base]": 62, "vm restarts [new]": 138 } 2025/08/16 15:07:43 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:08:20 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:09:06 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:09:38 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:10:30 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:10:39 new: boot error: can't ssh into the instance 2025/08/16 15:11:25 base crash: no output from test machine 2025/08/16 15:11:26 base crash: no output from test machine 
2025/08/16 15:11:28 base crash: no output from test machine 2025/08/16 15:12:22 runner 1 connected 2025/08/16 15:12:23 runner 0 connected 2025/08/16 15:12:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 615, "corpus": 43750, "corpus [files]": 4056, "corpus [symbols]": 3066, "cover overflows": 63657, "coverage": 308962, "distributor delayed": 49732, "distributor undelayed": 49732, "distributor violated": 421, "exec candidate": 77726, "exec collide": 5239, "exec fuzz": 9759, "exec gen": 544, "exec hints": 5337, "exec inject": 0, "exec minimize": 9098, "exec retries": 27, "exec seeds": 1102, "exec smash": 7639, "exec total [base]": 175314, "exec total [new]": 373852, "exec triage": 143079, "executor restarts": 1259, "fault jobs": 0, "fuzzer jobs": 78, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 30, "max signal": 315119, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5333, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45316, "no exec duration": 8564738000000, "no exec requests": 21989, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 10500314908, "rpc sent": 4240379632, "signal": 302374, "smash jobs": 40, "triage jobs": 8, "vm output": 85896143, "vm restarts [base]": 64, "vm restarts [new]": 138 } 2025/08/16 15:13:25 base: boot error: can't ssh into the instance 2025/08/16 15:14:16 runner 2 connected 2025/08/16 15:17:21 base crash: no output from test machine 2025/08/16 15:17:23 base crash: no output from test machine 2025/08/16 15:17:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 615, "corpus": 43750, "corpus [files]": 4056, "corpus [symbols]": 3066, "cover overflows": 63657, "coverage": 308962, "distributor delayed": 49732, "distributor undelayed": 49732, "distributor violated": 421, "exec 
candidate": 77726, "exec collide": 5239, "exec fuzz": 9759, "exec gen": 544, "exec hints": 5337, "exec inject": 0, "exec minimize": 9098, "exec retries": 27, "exec seeds": 1102, "exec smash": 7639, "exec total [base]": 175314, "exec total [new]": 373852, "exec triage": 143079, "executor restarts": 1259, "fault jobs": 0, "fuzzer jobs": 78, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 0, "hints jobs": 30, "max signal": 315119, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5334, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45316, "no exec duration": 8564738000000, "no exec requests": 21989, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 10531211100, "rpc sent": 4240379912, "signal": 302374, "smash jobs": 40, "triage jobs": 8, "vm output": 89482577, "vm restarts [base]": 65, "vm restarts [new]": 138 } 2025/08/16 15:18:10 runner 1 connected 2025/08/16 15:18:15 runner 0 connected 2025/08/16 15:19:16 base crash: no output from test machine 2025/08/16 15:19:24 new: boot error: can't ssh into the instance 2025/08/16 15:20:02 reproducing crash 'INFO: task hung in __closure_sync': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:20:05 runner 2 connected 2025/08/16 15:20:44 new: boot error: can't ssh into the instance 2025/08/16 15:21:34 base: boot error: can't ssh into the instance 2025/08/16 15:22:07 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8011: connect: connection refused 2025/08/16 15:22:07 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8011: connect: connection refused 2025/08/16 15:22:24 runner 3 connected 2025/08/16 15:22:38 STAT { "buffer too small": 2, 
"candidate triage jobs": 0, "candidates": 0, "comps overflows": 615, "corpus": 43750, "corpus [files]": 4056, "corpus [symbols]": 3066, "cover overflows": 63657, "coverage": 308962, "distributor delayed": 49732, "distributor undelayed": 49732, "distributor violated": 421, "exec candidate": 77726, "exec collide": 5239, "exec fuzz": 9759, "exec gen": 544, "exec hints": 5337, "exec inject": 0, "exec minimize": 9098, "exec retries": 27, "exec seeds": 1102, "exec smash": 7639, "exec total [base]": 175314, "exec total [new]": 373852, "exec triage": 143079, "executor restarts": 1259, "fault jobs": 0, "fuzzer jobs": 78, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 30, "max signal": 315119, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5338, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45316, "no exec duration": 8564738000000, "no exec requests": 21989, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 10654795860, "rpc sent": 4240381032, "signal": 302374, "smash jobs": 40, "triage jobs": 8, "vm output": 91909124, "vm restarts [base]": 69, "vm restarts [new]": 138 } 2025/08/16 15:23:09 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31573: connect: connection refused 2025/08/16 15:23:09 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:31573: connect: connection refused 2025/08/16 15:23:09 base crash: no output from test machine 2025/08/16 15:23:14 base crash: no output from test machine 2025/08/16 15:24:01 runner 1 connected 2025/08/16 15:24:04 runner 0 connected 2025/08/16 15:24:15 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:11889: connect: connection refused 2025/08/16 15:24:15 VM-6 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:11889: connect: connection refused 
2025/08/16 15:24:24 new: boot error: can't ssh into the instance 2025/08/16 15:24:30 reproducing crash 'INFO: task hung in __closure_sync': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:25:05 base crash: no output from test machine 2025/08/16 15:25:19 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:3633: connect: connection refused 2025/08/16 15:25:19 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:3633: connect: connection refused 2025/08/16 15:25:54 runner 2 connected 2025/08/16 15:26:51 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8699: connect: connection refused 2025/08/16 15:26:51 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:8699: connect: connection refused 2025/08/16 15:27:24 base crash: no output from test machine 2025/08/16 15:27:38 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 615, "corpus": 43750, "corpus [files]": 4056, "corpus [symbols]": 3066, "cover overflows": 63657, "coverage": 308962, "distributor delayed": 49732, "distributor undelayed": 49732, "distributor violated": 421, "exec candidate": 77726, "exec collide": 5239, "exec fuzz": 9759, "exec gen": 544, "exec hints": 5337, "exec inject": 0, "exec minimize": 9098, "exec retries": 27, "exec seeds": 1102, "exec smash": 7639, "exec total [base]": 175314, "exec total [new]": 373852, "exec triage": 143079, "executor restarts": 1259, "fault jobs": 0, "fuzzer jobs": 78, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 30, "max signal": 315119, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5343, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, 
"modules [base]": 1, "modules [new]": 1, "new inputs": 45316, "no exec duration": 8564738000000, "no exec requests": 21989, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 10747484428, "rpc sent": 4240381872, "signal": 302374, "smash jobs": 40, "triage jobs": 8, "vm output": 96464139, "vm restarts [base]": 72, "vm restarts [new]": 138 } 2025/08/16 15:27:55 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:21534: connect: connection refused 2025/08/16 15:27:55 VM-9 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:21534: connect: connection refused 2025/08/16 15:28:09 new: boot error: can't ssh into the instance 2025/08/16 15:28:19 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:29:00 base crash: no output from test machine 2025/08/16 15:29:03 base crash: no output from test machine 2025/08/16 15:29:05 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:7559: connect: connection refused 2025/08/16 15:29:05 VM-4 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:7559: connect: connection refused 2025/08/16 15:29:31 reproducing crash 'WARNING in rate_control_rate_init': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/extent_map.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:29:49 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4641: connect: connection refused 2025/08/16 15:29:49 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:4641: connect: connection refused 2025/08/16 15:29:52 runner 0 connected 2025/08/16 15:30:53 base crash: no output from 
test machine 2025/08/16 15:31:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2157: connect: connection refused 2025/08/16 15:31:00 VM-0 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:2157: connect: connection refused 2025/08/16 15:31:42 runner 2 connected 2025/08/16 15:32:34 status reporting terminated 2025/08/16 15:32:34 bug reporting terminated 2025/08/16 15:32:34 reproducing crash 'INFO: task hung in bch2_journal_reclaim_thread': concatenation step failed with context deadline exceeded 2025/08/16 15:32:34 repro finished 'INFO: task hung in bch2_journal_reclaim_thread', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:32:48 reproducing crash 'possible deadlock in ocfs2_xattr_set': concatenation step failed with context deadline exceeded 2025/08/16 15:32:48 repro finished 'possible deadlock in ocfs2_xattr_set', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:34:00 repro finished 'kernel BUG in jfs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:34:02 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:34:28 reproducing crash 'INFO: task hung in __closure_sync': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/16 15:34:28 repro finished 'INFO: task hung in __closure_sync', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:36:15 repro finished 'INFO: task hung in __iterate_supers', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:37:49 reproducing crash 'WARNING in rate_control_rate_init': concatenation step failed with context deadline exceeded 2025/08/16 15:37:49 repro 
finished 'WARNING in rate_control_rate_init', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/16 15:39:06 syz-diff (base): kernel context loop terminated 2025/08/16 15:40:55 syz-diff (new): kernel context loop terminated 2025/08/16 15:40:55 diff fuzzing terminated 2025/08/16 15:40:55 fuzzing is finished 2025/08/16 15:40:55 status at the end:
Title On-Base On-Patched
INFO: task hung in __closure_sync 1 crashes
INFO: task hung in __iterate_supers 1 crashes
INFO: task hung in bch2_journal_reclaim_thread 1 crashes 2 crashes
INFO: task hung in v9fs_evict_inode 1 crashes 3 crashes
KASAN: slab-out-of-bounds Write in bch2_dirent_init_name 1 crashes 1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 2 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi 2 crashes 8 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes 3 crashes
KASAN: slab-use-after-free Write in __xfrm_state_delete 1 crashes
WARNING in __udf_add_aext 1 crashes 1 crashes
WARNING in dbAdjTree 1 crashes
WARNING in ext4_xattr_inode_lookup_create 3 crashes 4 crashes
WARNING in hfsplus_bnode_create 2 crashes 1 crashes[reproduced]
WARNING in io_ring_exit_work 1 crashes 3 crashes
WARNING in l2cap_chan_del 1 crashes
WARNING in rate_control_rate_init 1 crashes
WARNING in xfrm6_tunnel_net_exit 1 crashes 5 crashes
WARNING in xfrm_state_fini 6 crashes 10 crashes
general protection fault in __xfrm_state_insert 1 crashes
general protection fault in pcl818_ai_cancel 2 crashes 5 crashes
kernel BUG in jfs_evict_inode 1 crashes
kernel BUG in txUnlock 1 crashes 2 crashes
lost connection to test machine 13 crashes 39 crashes
no output from test machine 13 crashes 1 crashes
possible deadlock in attr_data_get_block 1 crashes
possible deadlock in ocfs2_acquire_dquot 1 crashes
possible deadlock in ocfs2_init_acl 5 crashes 6 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 7 crashes 9 crashes[reproduced]
possible deadlock in ocfs2_try_remove_refcount_tree 4 crashes 12 crashes
possible deadlock in ocfs2_xattr_set 3 crashes
unregister_netdevice: waiting for DEV to become free 2 crashes
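The many `failed to symbolize report: ... fork/exec scripts/get_maintainer.pl: no such file or directory` entries in this run come from syzkaller's report symbolizer, which executes `scripts/get_maintainer.pl` relative to the kernel source tree; the checkout this manager ran against evidently no longer contained that helper. A minimal sketch of the precondition check, with a hypothetical stand-in path for the kernel source directory:

```shell
# Sketch (paths hypothetical): verify the kernel tree the manager uses still
# carries the maintainers helper that report symbolization tries to fork/exec.
KSRC="${KSRC:-/tmp/kernel-src-demo}"   # stand-in for the configured kernel source dir
mkdir -p "$KSRC/scripts"               # empty demo tree: the helper is absent here

if [ -x "$KSRC/scripts/get_maintainer.pl" ]; then
  status="symbolizer helper present"
else
  # matches the fork/exec failures seen throughout this log
  status="symbolizer helper missing"
fi
echo "$status"
```

Note that the failures are non-fatal here: reproduction attempts continue (and some time out on the concatenation step independently), but the affected reports lose their maintainer annotations.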