2025/08/18 08:42:27 extracted 303751 symbol hashes for base and 303751 for patched
2025/08/18 08:42:27 adding modified_functions to focus areas: ["nvmet_execute_disc_identify" "set_zone_contiguous"]
2025/08/18 08:42:27 adding directly modified files to focus areas: [".clang-format" "include/linux/memblock.h" "mm/memblock.c" "mm/mm_init.c"]
2025/08/18 08:42:28 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/18 08:43:26 runner 2 connected
2025/08/18 08:43:26 runner 2 connected
2025/08/18 08:43:26 runner 3 connected
2025/08/18 08:43:26 runner 7 connected
2025/08/18 08:43:26 runner 8 connected
2025/08/18 08:43:26 runner 1 connected
2025/08/18 08:43:26 runner 9 connected
2025/08/18 08:43:26 runner 0 connected
2025/08/18 08:43:26 runner 4 connected
2025/08/18 08:43:26 runner 6 connected
2025/08/18 08:43:26 runner 1 connected
2025/08/18 08:43:27 runner 5 connected
2025/08/18 08:43:27 runner 0 connected
2025/08/18 08:43:27 runner 3 connected
2025/08/18 08:43:32 initializing coverage information...
2025/08/18 08:43:32 executor cover filter: 0 PCs
2025/08/18 08:43:36 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
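Each entry in the table above pairs a syscall with the resource it consumes and, in brackets, the call(s) that produce that resource. For the KVM entries this forms a chain: `openat$kvm` yields `fd_kvm`, `ioctl$KVM_CREATE_VM` consumes `fd_kvm` and yields `fd_kvmvm`, `ioctl$KVM_CREATE_VCPU` consumes `fd_kvmvm` and yields `fd_kvmcpu`, which `ioctl$KVM_RUN` then consumes. As an illustrative sketch (not syzkaller's actual code), the prerequisite chain can be resolved by walking these mappings:

```python
# Producer map distilled from the log above: resource -> call that creates it.
producers = {
    "fd_kvm": "openat$kvm",
    "fd_kvmvm": "ioctl$KVM_CREATE_VM",
    "fd_kvmcpu": "ioctl$KVM_CREATE_VCPU",
}

# Consumer map: call -> resource it needs (None for a root call).
needs = {
    "openat$kvm": None,
    "ioctl$KVM_CREATE_VM": "fd_kvm",
    "ioctl$KVM_CREATE_VCPU": "fd_kvmvm",
    "ioctl$KVM_RUN": "fd_kvmcpu",
}

def chain(call):
    """Return the calls that must run, in order, before `call` can execute."""
    out = []
    while call is not None:
        out.append(call)
        res = needs[call]
        call = producers[res] if res else None
    return list(reversed(out))

print(chain("ioctl$KVM_RUN"))
# -> ['openat$kvm', 'ioctl$KVM_CREATE_VM', 'ioctl$KVM_CREATE_VCPU', 'ioctl$KVM_RUN']
```

This is why a whole family of `ioctl$KVM_*` entries above lists the same producers: any vCPU-scoped ioctl inherits the full create-VM/create-vCPU prefix.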
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : 
fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] 
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
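Feature entries like the ones above (Coverage, KCSAN, Leak, SandboxAndroid) come from runtime probes: a capability is reported enabled only when the corresponding kernel interface actually responds, and the failure message is recorded otherwise (e.g. the KCSAN line records that writing to /sys/kernel/debug/kcsan failed). A minimal sketch of such a probe, with illustrative logic rather than syzkaller's implementation:

```python
import os

def probe(path):
    """Report a feature as enabled only if its kernel interface can be opened;
    otherwise record the error string, as the machine-check table does."""
    try:
        fd = os.open(path, os.O_RDONLY)
        os.close(fd)
        return "enabled"
    except OSError as e:
        return f"failed: {e.strerror}"

# The path below is taken from the KCSAN line in the log; on a kernel built
# without KCSAN this probe reports the failure instead of "enabled".
print(probe("/sys/kernel/debug/kcsan"))
```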
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/18 08:43:36 base: machine check complete 2025/08/18 08:43:37 discovered 7699 source files, 338620 symbols 2025/08/18 08:43:37 coverage filter: nvmet_execute_disc_identify: [nvmet_execute_disc_identify] 2025/08/18 08:43:37 coverage filter: set_zone_contiguous: [set_zone_contiguous] 2025/08/18 08:43:37 coverage filter: .clang-format: [] 2025/08/18 08:43:37 coverage filter: include/linux/memblock.h: [] 2025/08/18 08:43:37 coverage filter: mm/memblock.c: [mm/memblock.c] 2025/08/18 08:43:37 coverage filter: mm/mm_init.c: [mm/mm_init.c] 2025/08/18 08:43:37 area "symbols": 21 PCs in the cover filter 2025/08/18 08:43:37 area "files": 872 PCs in the cover filter 2025/08/18 08:43:37 area "": 0 PCs in the cover filter 2025/08/18 08:43:37 executor cover filter: 0 PCs 2025/08/18 08:43:41 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory 
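The cover-filter lines above resolve each focus area to a set of program counters: the modified functions map to the "symbols" area (21 PCs) and the directly modified files to the "files" area (872 PCs), while non-code entries like .clang-format resolve to nothing. A hypothetical sketch of that filtering step, with an invented PC table and illustrative addresses (not syzkaller's actual data structures):

```python
# Hypothetical PC table: address -> (symbol, source file). Values are invented
# for illustration; only the focus names below come from the log.
pcs = {
    0x1000: ("set_zone_contiguous", "mm/mm_init.c"),
    0x1008: ("set_zone_contiguous", "mm/mm_init.c"),
    0x2000: ("memblock_reserve", "mm/memblock.c"),
    0x3000: ("unrelated_fn", "fs/ext4/inode.c"),
}

focus_symbols = {"nvmet_execute_disc_identify", "set_zone_contiguous"}
focus_files = {"mm/memblock.c", "mm/mm_init.c"}

# Each area keeps only the PCs whose symbol (or file) is in the focus set.
symbol_area = {a for a, (sym, _) in pcs.items() if sym in focus_symbols}
file_area = {a for a, (_, f) in pcs.items() if f in focus_files}

print(len(symbol_area), len(file_area))  # per-area PC counts, as in the log
```

The per-area counts in the log are exactly these set sizes computed over the kernel's full symbol table, which is why a file-based area is typically much larger than a symbol-based one.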
openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no 
such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3832/8048
2025/08/18 08:43:41 new: machine check complete
2025/08/18 08:43:42 new: adding 80339 seeds
2025/08/18 08:44:18 patched crashed: WARNING in xfrm_state_fini [need repro = true]
2025/08/18 08:44:18 scheduled a reproduction of 'WARNING in xfrm_state_fini'
2025/08/18 08:44:24 base crash: WARNING in xfrm_state_fini
2025/08/18 08:44:51 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true]
2025/08/18 08:44:51 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree'
2025/08/18 08:45:02 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true]
2025/08/18 08:45:02 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree'
2025/08/18 08:45:08 runner 5 connected
2025/08/18 08:45:20 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 08:45:21 runner 3 connected
2025/08/18 08:45:24 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 08:45:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2025/08/18 08:45:31 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2025/08/18 08:45:38 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 08:45:40 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 08:45:40 runner 9 connected
2025/08/18 08:45:42 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 08:46:00 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 08:46:10 runner 2 connected
2025/08/18 08:46:13 runner 0 connected
2025/08/18 08:46:20 runner 8 connected
2025/08/18 08:46:27 runner 1 connected
2025/08/18 08:46:29 runner 0 connected
2025/08/18 08:46:31 runner 7 connected
2025/08/18 08:46:48 runner 3 connected
2025/08/18 08:47:02 base crash: WARNING in dbAdjTree
2025/08/18 08:47:29 STAT { "buffer too small": 0, "candidate triage jobs": 58, "candidates": 76875, "comps overflows": 0, "corpus": 3382, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 2280, "coverage": 153823, "distributor delayed": 4676, "distributor undelayed": 4676, "distributor violated": 210, "exec candidate": 3464, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 8706, "exec total [new]": 15289, "exec triage": 10703, "executor restarts": 124, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 156766, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3464, "no exec duration": 47331000000, "no exec requests": 342, "pending": 4, "prog exec time": 402, "reproducing": 0, "rpc recv": 1112803480, "rpc sent": 109537288, "signal": 151923, "smash jobs": 0, "triage jobs": 0, "vm output": 2494362, "vm restarts [base]": 6, "vm restarts [new]": 18 }
2025/08/18 08:47:59 runner 0 connected
2025/08/18 08:48:03 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 08:48:59 runner 1 connected
2025/08/18 08:49:17 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 08:49:45 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 08:49:51 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2025/08/18 08:49:51 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2025/08/18 08:50:34 runner 0 connected
2025/08/18 08:50:40 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 08:50:42 runner 0 connected
2025/08/18 08:50:52 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 08:50:58 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/18 08:51:02 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 08:51:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 08:51:29 runner 7 connected
2025/08/18 08:51:42 runner 3 connected
2025/08/18 08:51:44 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 08:51:46 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 08:51:46 runner 3 connected
2025/08/18 08:51:52 runner 2 connected
2025/08/18 08:51:52 runner 1 connected
2025/08/18 08:51:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 08:52:29 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 08:52:29 STAT { "buffer too small": 0, "candidate triage jobs": 45, "candidates": 71801, "comps overflows": 0, "corpus": 8419, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 6181, "coverage": 202194, "distributor delayed": 11519, "distributor undelayed": 11509, "distributor violated": 237, "exec candidate": 8538, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 18567, "exec total [new]": 38828, "exec triage": 26658, "executor restarts": 178, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 204179, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 8538, "no exec duration": 47442000000, "no exec requests": 348, "pending": 5, "prog exec time": 243, "reproducing": 0, "rpc recv": 1827644876, "rpc sent": 235989176, "signal": 198735, "smash jobs": 0, "triage jobs": 0, "vm output": 4889702, "vm restarts [base]": 11, "vm restarts [new]": 22 }
2025/08/18 08:52:33 runner 5 connected
2025/08/18 08:52:36 runner 1 connected
2025/08/18 08:52:48 runner 0 connected
2025/08/18 08:53:10 base crash: WARNING in xfrm_state_fini
2025/08/18 08:53:18 runner 3 connected
2025/08/18 08:53:23 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 08:53:38 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 08:53:41 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/18 08:53:48 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 08:53:59 runner 3 connected
2025/08/18 08:54:00 patched crashed: general protection fault in xfrm_alloc_spi [need repro = true]
2025/08/18 08:54:00 scheduled a reproduction of 'general protection fault in xfrm_alloc_spi'
2025/08/18 08:54:13 runner 0 connected
2025/08/18 08:54:27 runner 8 connected
2025/08/18 08:54:38 runner 9 connected
2025/08/18 08:55:08 new: boot error: can't ssh into the instance
2025/08/18 08:55:39 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 08:55:57 runner 6 connected
2025/08/18 08:57:12 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 08:57:26 base crash: WARNING in xfrm_state_fini
2025/08/18 08:57:29 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 66953, "comps overflows": 0, "corpus": 13225, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 9445, "coverage": 228533, "distributor delayed": 17431, "distributor undelayed": 17430, "distributor violated": 243, "exec candidate": 13386, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 26742, "exec total [new]": 61278, "exec triage": 41664, "executor restarts": 229, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 230414, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 13386, "no exec duration": 47442000000, "no exec requests": 348, "pending": 6, "prog exec time": 213, "reproducing": 0, "rpc recv": 2517240592, "rpc sent": 368589640, "signal": 224577, "smash jobs": 0, "triage jobs": 0, "vm output": 7515522, "vm restarts [base]": 13, "vm restarts [new]": 29 }
2025/08/18 08:58:01 runner 1 connected
2025/08/18 08:58:15 runner 3 connected
2025/08/18 08:58:20 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 08:58:36 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 08:58:47 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 08:58:50 base crash: kernel BUG in txUnlock
2025/08/18 08:59:09 runner 0 connected
2025/08/18 08:59:23 new: boot error: can't ssh into the instance
2025/08/18 08:59:26 runner 9 connected
2025/08/18 08:59:38 runner 6 connected
2025/08/18 08:59:39 runner 3 connected
2025/08/18 09:00:12 runner 4 connected
2025/08/18 09:01:53 patched crashed: INFO: task hung in sync_bdevs [need repro = true]
2025/08/18 09:01:53 scheduled a reproduction of 'INFO: task hung in sync_bdevs'
2025/08/18 09:02:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:02:29 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 61466, "comps overflows": 0, "corpus": 18615, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 13416, "coverage": 249428, "distributor delayed": 23426, "distributor undelayed": 23425, "distributor violated": 247, "exec candidate": 18873, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 32432, "exec total [new]": 89217, "exec triage": 58775, "executor restarts": 281, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 251607, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 18873, "no exec duration": 47442000000, "no exec requests": 348, "pending": 7, "prog exec time": 276, "reproducing": 0, "rpc recv": 3149695296, "rpc sent": 486015424, "signal": 245042, "smash jobs": 0, "triage jobs": 0, "vm output": 9934532, "vm restarts [base]": 15, "vm restarts [new]": 34 }
2025/08/18 09:02:42 runner 8 connected
2025/08/18 09:03:06 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/18 09:03:06 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/18 09:03:12 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:03:13 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/18 09:03:13 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/18 09:03:24 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/18 09:03:24 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/18 09:03:47 base: boot error: can't ssh into the instance
2025/08/18 09:03:50 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true]
2025/08/18 09:03:50 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel'
2025/08/18 09:03:55 runner 1 connected
2025/08/18 09:04:01 runner 3 connected
2025/08/18 09:04:01 runner 9 connected
2025/08/18 09:04:06 new: boot error: can't ssh into the instance
2025/08/18 09:04:24 patched crashed: KASAN: slab-use-after-free Write in txEnd [need repro = true]
2025/08/18 09:04:24 scheduled a reproduction of 'KASAN: slab-use-after-free Write in txEnd'
2025/08/18 09:04:38 runner 2 connected
2025/08/18 09:04:38 runner 6 connected
2025/08/18 09:05:00 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true]
2025/08/18 09:05:00 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set'
2025/08/18 09:05:11 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:05:13 runner 4 connected
2025/08/18 09:05:27 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 09:05:41 base crash: general protection fault in pcl818_ai_cancel
2025/08/18 09:05:45 base: boot error: can't ssh into the instance
2025/08/18 09:05:47 runner 0 connected
2025/08/18 09:05:51 base crash: possible deadlock in ocfs2_xattr_set
2025/08/18 09:06:00 runner 9 connected
2025/08/18 09:06:01 patched crashed: possible deadlock in attr_data_get_block [need repro = true]
2025/08/18 09:06:01 scheduled a reproduction of 'possible deadlock in attr_data_get_block'
2025/08/18 09:06:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:06:12 patched crashed: possible deadlock in attr_data_get_block [need repro = true]
2025/08/18 09:06:12 scheduled a reproduction of 'possible deadlock in attr_data_get_block'
2025/08/18 09:06:16 runner 2 connected
2025/08/18 09:06:30 runner 3 connected
2025/08/18 09:06:33 runner 1 connected
2025/08/18 09:06:41 runner 0 connected
2025/08/18 09:06:51 runner 3 connected
2025/08/18 09:06:56 runner 1 connected
2025/08/18 09:07:01 runner 8 connected
2025/08/18 09:07:29 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 57986, "comps overflows": 0, "corpus": 22071, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 15405, "coverage": 260516, "distributor delayed": 28960, "distributor undelayed": 28957, "distributor violated": 263, "exec candidate": 22353, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 38997, "exec total [new]": 106567, "exec triage": 69446, "executor restarts": 331, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 262947, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 22353, "no exec duration": 47976000000, "no exec requests": 354, "pending": 15, "prog exec time": 304, "reproducing": 0, "rpc recv": 3930515840, "rpc sent": 598851600, "signal": 256102, "smash jobs": 0, "triage jobs": 0, "vm output": 12538836, "vm restarts [base]": 21, "vm restarts [new]": 44 }
2025/08/18 09:07:57 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:08:14 base crash: WARNING in xfrm_state_fini
2025/08/18 09:08:32 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:08:44 base crash: possible deadlock in ntfs_fiemap
2025/08/18 09:08:46 runner 9 connected
2025/08/18 09:09:04 runner 2 connected
2025/08/18 09:09:17 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:09:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:09:32 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:09:32 runner 1 connected
2025/08/18 09:10:08 patched crashed: INFO: task hung in corrupted [need repro = true]
2025/08/18 09:10:08 scheduled a reproduction of 'INFO: task hung in corrupted'
2025/08/18 09:10:11 runner 8 connected
2025/08/18 09:10:21 runner 4 connected
2025/08/18 09:10:45 patched crashed: INFO: task hung in corrupted [need repro = true]
2025/08/18 09:10:45 scheduled a reproduction of 'INFO: task hung in corrupted'
2025/08/18 09:10:58 runner 3 connected
2025/08/18 09:11:23 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:11:36 runner 1 connected
2025/08/18 09:12:13 runner 2 connected
2025/08/18 09:12:29 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 56295, "comps overflows": 0, "corpus": 23750, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 16251, "coverage": 265520, "distributor delayed": 32728, "distributor undelayed": 32728, "distributor violated": 867, "exec candidate": 24044, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 47647, "exec total [new]": 114598, "exec triage": 74572, "executor restarts": 369, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 267918, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 24044, "no exec duration": 47976000000, "no exec requests": 354, "pending": 17, "prog exec time": 286, "reproducing": 0, "rpc recv": 4335196024, "rpc sent": 674404960, "signal": 261129, "smash jobs": 0, "triage jobs": 0, "vm output": 14288137, "vm restarts [base]": 24, "vm restarts [new]": 49 }
2025/08/18 09:12:30 new: boot error: can't ssh into the instance
2025/08/18 09:13:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:13:12 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:13:19 runner 7 connected
2025/08/18 09:13:26 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:13:29 new: boot error: can't ssh into the instance
2025/08/18 09:13:37 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:13:42 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:13:50 runner 8 connected
2025/08/18 09:13:51 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:13:53 base crash: INFO: task hung in corrupted
2025/08/18 09:13:59 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:14:01 runner 4 connected
2025/08/18 09:14:01 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:14:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:14:12 new: boot error: can't ssh into the instance
2025/08/18 09:14:14 runner 3 connected
2025/08/18 09:14:19 runner 5 connected
2025/08/18 09:14:25 runner 9 connected
2025/08/18 09:14:31 runner 1 connected
2025/08/18 09:14:39 runner 2 connected
2025/08/18 09:14:42 runner 3 connected
2025/08/18 09:14:48 runner 0 connected
2025/08/18 09:14:53 runner 7 connected
2025/08/18 09:15:01 runner 2 connected
2025/08/18 09:15:57 base crash: lost connection to test machine
2025/08/18 09:16:07 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:16:16 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:16:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:16:35 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:16:37 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:16:46 runner 3 connected
2025/08/18 09:16:48 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:16:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:16:56 runner 8 connected
2025/08/18 09:16:59 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:17:06 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:17:24 runner 2 connected
2025/08/18 09:17:25 runner 9 connected
2025/08/18 09:17:29 STAT { "buffer too small": 0, "candidate triage jobs": 265, "candidates": 53713, "comps overflows": 0, "corpus": 26060, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 17611, "coverage": 271858, "distributor delayed": 36574, "distributor undelayed": 36317, "distributor violated": 867, "exec candidate": 26626, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 52664, "exec total [new]": 126537, "exec triage": 81791, "executor restarts": 431, "fault jobs": 0, "fuzzer jobs": 265, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 274749, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 26626, "no exec duration": 48082000000, "no exec requests": 355, "pending": 17, "prog exec time": 272, "reproducing": 0, "rpc recv": 5001197536, "rpc sent": 768095912, "signal": 267587, "smash jobs": 0, "triage jobs": 0, "vm output": 16823282, "vm restarts [base]": 28, "vm restarts [new]": 61 }
2025/08/18 09:17:37 runner 3 connected
2025/08/18 09:17:40 runner 0 connected
2025/08/18 09:17:47 runner 5 connected
2025/08/18 09:17:48 runner 1 connected
2025/08/18 09:17:50 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:18:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:18:38 new: boot error: can't ssh into the instance
2025/08/18 09:18:38 runner 7 connected
2025/08/18 09:18:51 runner 8 connected
2025/08/18 09:19:23 new: boot error: can't ssh into the instance
2025/08/18 09:19:34 runner 0 connected
2025/08/18 09:20:12 runner 6 connected
2025/08/18 09:20:21 patched crashed: possible deadlock in ntfs_fiemap [need repro = false]
2025/08/18 09:20:32 patched crashed: possible deadlock in ntfs_fiemap [need repro = false]
2025/08/18 09:20:51 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:20:51 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:21:10 runner 2 connected
2025/08/18 09:21:21 runner 9 connected
2025/08/18 09:21:30 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:21:39 runner 0 connected
2025/08/18 09:21:40 runner 3 connected
2025/08/18 09:21:54 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:22:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:22:05 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:22:14 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:22:18 runner 3 connected
2025/08/18 09:22:29 STAT { "buffer too small": 0, "candidate triage jobs": 36, "candidates": 50305, "comps overflows": 0, "corpus": 29652, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 19670, "coverage": 280699, "distributor delayed": 41056, "distributor undelayed": 41048, "distributor violated": 899, "exec candidate": 30034, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 57326, "exec total [new]": 144889, "exec triage": 92671, "executor restarts": 523, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 282978, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 30034, "no exec duration": 48099000000, "no exec requests": 357, "pending": 17, "prog exec time": 277, "reproducing": 0, "rpc recv": 5784172548, "rpc sent": 886505680, "signal": 276483, "smash jobs": 0, "triage jobs": 0, "vm output": 20279873, "vm restarts [base]": 31, "vm restarts [new]": 71 }
2025/08/18 09:22:43 runner 6 connected
2025/08/18 09:22:49 runner 0 connected
2025/08/18 09:22:54 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:22:55 runner 5 connected
2025/08/18 09:23:03 runner 8 connected
2025/08/18 09:23:11 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true]
2025/08/18 09:23:11 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find'
2025/08/18 09:23:17 base crash: possible deadlock in ocfs2_xattr_set
2025/08/18 09:23:34 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 09:23:44 runner 7 connected
2025/08/18 09:23:52 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 09:24:00 runner 1 connected
2025/08/18 09:24:06 runner 3 connected
2025/08/18 09:24:07 base: boot error: can't ssh into the instance
2025/08/18 09:24:23 runner 0 connected
2025/08/18 09:24:41 runner 9 connected
2025/08/18 09:24:57 runner 1 connected
2025/08/18 09:25:41 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 09:26:13 patched crashed: possible deadlock in ocfs2_evict_inode [need repro = true]
2025/08/18 09:26:13 scheduled a reproduction of 'possible deadlock in ocfs2_evict_inode'
2025/08/18 09:26:18 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true]
2025/08/18 09:26:18 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup'
2025/08/18 09:26:22 base: boot error: can't ssh into the instance
2025/08/18 09:26:29 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup
2025/08/18 09:26:29 new: boot error: can't ssh into the instance
2025/08/18 09:26:30 runner 5 connected
2025/08/18 09:27:02 runner 1 connected
2025/08/18 09:27:04 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:27:06 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:27:07 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:27:12 runner 2 connected
2025/08/18 09:27:17 runner 0 connected
2025/08/18 09:27:18 runner 4 connected
2025/08/18 09:27:29 STAT { "buffer too small": 0, "candidate triage jobs": 32, "candidates": 46259, "comps overflows": 0, "corpus": 33640, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 22481, "coverage": 289780, "distributor delayed": 45947, "distributor undelayed": 45946, "distributor violated": 899, "exec candidate": 34080, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 64484, "exec total [new]": 168011, "exec triage": 105091, "executor restarts": 596, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 292256, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 34080, "no exec duration": 48233000000, "no exec requests": 360, "pending": 20, "prog exec time": 229, "reproducing": 0, "rpc recv": 6562369700, "rpc sent": 1023874568, "signal": 285207, "smash jobs": 0, "triage jobs": 0, "vm output": 23217291, "vm restarts [base]": 35, "vm restarts [new]": 82 }
2025/08/18 09:27:53 runner 7 connected
2025/08/18 09:27:56 runner 0 connected
2025/08/18 09:27:56 runner 9 connected
2025/08/18 09:29:07 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:29:21 base crash: kernel BUG in txUnlock
2025/08/18 09:29:48 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:29:50 base crash: WARNING in xfrm_state_fini
2025/08/18 09:29:59 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:30:03 runner 8 connected
2025/08/18 09:30:10 runner 1 connected
2025/08/18 09:30:37 runner 4 connected
2025/08/18 09:30:39 runner 0 connected
2025/08/18 09:30:46 patched crashed: WARNING in __linkwatch_sync_dev [need repro = true]
2025/08/18 09:30:46 scheduled a reproduction of 'WARNING in __linkwatch_sync_dev'
2025/08/18 09:30:48 runner 1 connected
2025/08/18 09:30:59 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true]
2025/08/18 09:30:59 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create'
2025/08/18 09:31:01 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true]
2025/08/18 09:31:01 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create'
2025/08/18 09:31:09 base crash: WARNING in ext4_xattr_inode_lookup_create
2025/08/18 09:31:11 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:31:34 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:31:43 runner 9 connected
2025/08/18 09:31:48 runner 8 connected
2025/08/18 09:31:50 runner 0 connected
2025/08/18 09:31:58 runner 1 connected
2025/08/18 09:32:00 runner 3 connected
2025/08/18 09:32:25 runner 0 connected
2025/08/18 09:32:29 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 41747, "comps overflows": 0, "corpus": 38084, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 25687, "coverage": 298683, "distributor delayed": 50484, "distributor undelayed": 50484, "distributor violated": 900, "exec candidate": 38592, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 74552, "exec total [new]": 195847, "exec triage": 118925, "executor restarts": 663, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 301420, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38592, "no exec duration": 48306000000, "no exec requests": 362, "pending": 23, "prog exec time": 231, "reproducing": 0, "rpc recv": 7303566592, "rpc sent": 1179642728, "signal": 293791, "smash jobs": 0, "triage jobs": 0, "vm output": 26442584, "vm restarts [base]": 40, "vm restarts [new]": 91 }
2025/08/18 09:32:57 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:33:11 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 09:33:26 base crash: possible deadlock in ocfs2_xattr_set
2025/08/18 09:33:47 runner 9 connected
2025/08/18 09:33:53 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:33:55 patched crashed: WARNING in dbAdjTree [need repro = false]
2025/08/18 09:34:05 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:34:06 patched crashed: WARNING in dbAdjTree [need repro = false]
2025/08/18 09:34:16 runner 1 connected
2025/08/18 09:34:29 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:34:43 runner 8 connected
2025/08/18 09:34:44 runner 5 connected
2025/08/18 09:34:55 runner 7 connected
2025/08/18 09:34:55 runner 1 connected
2025/08/18 09:35:06 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:35:15 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 09:35:18 runner 0 connected
2025/08/18 09:35:56 runner 1 connected
2025/08/18 09:36:01 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:36:05 runner 9 connected
2025/08/18 09:36:17 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 09:36:24 new: boot error: can't ssh into the instance
2025/08/18 09:36:54 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:36:57 runner 2 connected
2025/08/18 09:37:05 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:37:06 runner 2 connected
2025/08/18 09:37:12 runner 6 connected
2025/08/18 09:37:29 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 39200, "comps overflows": 0, "corpus": 40562, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 29405, "coverage": 303621, "distributor delayed": 53267, "distributor undelayed": 53267, "distributor violated": 930, "exec candidate": 41139, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 81971, "exec total [new]": 220206, "exec triage": 126983, "executor restarts": 725, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 306525, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41139, "no exec duration": 48306000000, "no exec requests": 362, "pending": 23, "prog exec time": 239, "reproducing": 0, "rpc recv": 7918034364, "rpc sent": 1350152224, "signal": 298609, "smash jobs": 0, "triage jobs": 0, "vm output": 29205354, "vm restarts [base]": 43, "vm restarts [new]": 100 }
2025/08/18 09:37:34 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/18 09:37:34 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/18 09:37:43 runner 7 connected
2025/08/18 09:37:53 runner 4 connected
2025/08/18 09:37:54 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:38:05 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:38:22 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:38:26 runner 8 connected
2025/08/18 09:38:43 runner 1 connected
2025/08/18 09:38:44 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:38:48 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:38:55 runner 5 connected
2025/08/18 09:39:12 runner 7 connected
2025/08/18 09:39:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:39:33 runner 2 connected
2025/08/18 09:39:37 runner 9 connected
2025/08/18 09:39:47 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:40:00 patched crashed: possible deadlock in ntfs_fiemap [need repro = false]
2025/08/18 09:40:04 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:40:08 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:40:12 runner 5 connected
2025/08/18 09:40:36 runner 1 connected
2025/08/18 09:40:44 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:40:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:40:53 runner 4 connected
2025/08/18 09:40:53 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 09:40:55 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:40:56 runner 2 connected
2025/08/18 09:40:56 runner 6 connected
2025/08/18 09:41:05 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:41:32 runner 0 connected
2025/08/18 09:41:42 runner 7 connected
2025/08/18 09:41:43 runner 1 connected
2025/08/18 09:41:51 runner 8 connected
2025/08/18 09:41:54 runner 9 connected
2025/08/18 09:42:12 base crash: possible deadlock in attr_data_get_block
2025/08/18 09:42:20 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:42:29 STAT { "buffer too small": 0, "candidate triage jobs": 18, "candidates": 38118, "comps overflows": 0, "corpus": 41591, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 32270, "coverage": 306537, "distributor delayed": 54528, "distributor undelayed": 54528, "distributor violated": 931, "exec candidate": 42221, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 90723, "exec total [new]": 237669, "exec triage": 130368, "executor restarts": 833, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 309888, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42221, "no exec duration": 48318000000, "no exec requests": 363, "pending": 24, "prog exec time": 340, "reproducing": 0, "rpc recv": 8599026368, "rpc sent": 1500145736, "signal": 301561, "smash jobs": 0, "triage jobs": 0, "vm output": 32204947, "vm restarts [base]": 45, "vm restarts [new]": 116 }
2025/08/18 09:42:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:43:08 runner 1 connected
2025/08/18 09:43:17 runner 2 connected
2025/08/18 09:43:17 base: boot error: can't ssh into the instance
2025/08/18 09:43:22 runner 4 connected
2025/08/18 09:43:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:43:32 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 09:44:07 runner 3 connected
2025/08/18 09:44:20 runner 0 connected
2025/08/18 09:44:29 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/18 09:44:29 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:44:34 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:44:36 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 09:44:43 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:44:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:44:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:45:17 runner 8 connected
2025/08/18 09:45:18 runner 1 connected
2025/08/18 09:45:21 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:45:24 runner 3 connected
2025/08/18 09:45:31 runner 4 connected
2025/08/18 09:45:34 runner 5 connected
2025/08/18 09:45:40 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:45:47 runner 7 connected
2025/08/18 09:46:10 runner 0 connected
2025/08/18 09:46:11 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:46:12 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 09:46:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:46:19 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/08/18 09:46:23 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:46:26 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:46:30 runner 2 connected
2025/08/18 09:46:37 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:46:52 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:47:00 runner 9 connected
2025/08/18 09:47:02 runner 3 connected
2025/08/18 09:47:06 runner 8 connected
2025/08/18 09:47:07 runner 7 connected
2025/08/18 09:47:12 runner 3 connected
2025/08/18 09:47:15 runner 4 connected
2025/08/18 09:47:25 runner 5 connected
2025/08/18 09:47:29 STAT { "buffer too small": 0, "candidate triage jobs": 23, "candidates": 36866, "comps overflows": 0, "corpus": 42797, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 34261, "coverage": 309015, "distributor delayed": 56350, "distributor undelayed": 56350, "distributor violated": 934, "exec candidate": 43473, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100237, "exec total [new]": 251379, "exec triage": 134197, "executor restarts": 915, "fault jobs": 0, "fuzzer jobs": 23, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 312483, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43473, "no exec duration": 48326000000, "no exec requests": 364, "pending": 24, "prog exec time": 282, "reproducing": 0, "rpc recv": 9326811988, "rpc sent": 1650865288, "signal": 304035, "smash jobs": 0, "triage jobs": 0, "vm output": 35100783, "vm restarts [base]": 49, "vm restarts [new]": 132 }
2025/08/18 09:47:36 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:47:37 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:47:48 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:47:57 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:48:09 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:48:25 runner 1 connected
2025/08/18 09:48:27 runner 3 connected
2025/08/18 09:48:37 runner 4 connected
2025/08/18 09:48:46 runner 7 connected
2025/08/18 09:48:58 runner 0 connected
2025/08/18 09:49:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:49:25 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 09:49:27 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:49:28 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:49:38 base crash: WARNING in xfrm_state_fini
2025/08/18 09:49:48 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:49:55 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/18 09:50:12 runner 5 connected
2025/08/18 09:50:15 runner 3 connected
2025/08/18 09:50:16 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:50:17 runner 7 connected
2025/08/18 09:50:17 runner 4 connected
2025/08/18 09:50:26 runner 1 connected
2025/08/18 09:50:28 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:50:36 runner 3 connected
2025/08/18 09:50:44 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:50:44 runner 0 connected
2025/08/18 09:50:45 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 09:50:55 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:51:04 runner 0 connected
2025/08/18 09:51:11 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:51:17 runner 9 connected
2025/08/18 09:51:33 runner 8 connected
2025/08/18 09:51:33 runner 7 connected
2025/08/18 09:51:42 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:51:43 runner 5 connected
2025/08/18 09:51:43 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/08/18 09:51:48 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 09:51:48 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 09:52:13 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:52:23 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:52:23 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:52:25 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:52:25 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 09:52:29 STAT { "buffer too small": 0, "candidate triage jobs": 30, "candidates": 35831, "comps overflows": 0, "corpus": 43753, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 36575, "coverage": 310834, "distributor delayed": 58065, "distributor undelayed": 58035, "distributor violated": 974, "exec candidate": 44508, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 106199, "exec total [new]": 265330, "exec triage": 137399, "executor restarts": 983, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 314357, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44479, "no exec duration": 48351000000, "no exec requests": 366, "pending": 24, "prog exec time": 192, "reproducing": 0, "rpc recv": 9984772528, "rpc sent": 1794993384, "signal": 305833, "smash jobs": 0, "triage jobs": 0, "vm output": 37251305, "vm restarts [base]": 53, "vm restarts [new]": 145 }
2025/08/18 09:52:30 runner 3 connected
2025/08/18 09:52:32 runner 2 connected
2025/08/18 09:52:56 runner 8 connected
2025/08/18 09:53:06 runner 7 connected
2025/08/18 09:53:12 runner 9 connected
2025/08/18 09:53:13 runner 5 connected
2025/08/18 09:53:38 new: boot error: can't ssh into the instance
2025/08/18 09:54:29 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:54:42 new: boot error: can't ssh into the instance
2025/08/18 09:54:49 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 09:55:06 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true]
2025/08/18 09:55:06 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry'
2025/08/18 09:55:31 runner 6 connected
2025/08/18 09:55:38 runner 5 connected
2025/08/18 09:55:47 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:55:54 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:55:54 runner 9 connected
2025/08/18 09:56:15 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 09:56:26 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/08/18 09:56:36 runner 7 connected
2025/08/18 09:56:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:56:43 runner 3 connected
2025/08/18 09:56:56 runner 5 connected
2025/08/18 09:56:58 base: boot error: can't ssh into the instance
2025/08/18 09:57:07 runner 2 connected
2025/08/18 09:57:20 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 09:57:26 runner 6 connected
2025/08/18 09:57:27 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:57:29 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35310, "comps overflows": 0, "corpus": 44223, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 38796, "coverage": 311667, "distributor delayed": 58903, "distributor undelayed": 58900, "distributor violated": 994, "exec candidate": 45029, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108082, "exec total [new]": 277918, "exec triage": 138956, "executor restarts": 1031, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 315170, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44956, "no exec duration": 48351000000, "no exec requests": 366, "pending": 25, "prog exec time": 234, "reproducing": 0, "rpc recv": 10442256152, "rpc sent": 1879100056, "signal": 306675, "smash jobs": 0, "triage jobs": 0, "vm output": 39042187, "vm restarts [base]": 55, "vm restarts [new]": 157 }
2025/08/18 09:57:47 runner 2 connected
2025/08/18 09:58:09 runner 3 connected
2025/08/18 09:58:16 runner 7 connected
2025/08/18 09:58:20 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 09:58:59 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 09:59:10 runner 9 connected
2025/08/18 09:59:48 runner 2 connected
2025/08/18 10:01:08 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 10:01:17 new: boot error: can't ssh into the instance
2025/08/18 10:01:19 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 10:01:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 10:01:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 10:01:54 base: boot error: can't ssh into the instance
2025/08/18 10:01:54 base: boot error: can't ssh into the instance
2025/08/18 10:01:57 runner 3 connected
2025/08/18 10:02:06 runner 3 connected
2025/08/18 10:02:11 runner 6 connected
2025/08/18 10:02:25 runner 2 connected
2025/08/18 10:02:29 new: boot error: can't ssh into the instance
2025/08/18 10:02:29 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34905, "comps overflows": 0, "corpus": 44517, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 42334, "coverage": 312303, "distributor delayed": 59459, "distributor undelayed": 59459, "distributor violated": 1019, "exec candidate": 45434, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 112220, "exec total [new]": 296170, "exec triage": 140194, "executor restarts": 1065, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 315973, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45311, "no exec duration": 49172000000, "no exec requests": 371, "pending": 25, "prog exec time": 233, "reproducing": 0, "rpc recv":
10768057720, "rpc sent": 1973127152, "signal": 307267, "smash jobs": 0, "triage jobs": 0, "vm output": 40686345, "vm restarts [base]": 59, "vm restarts [new]": 162 } 2025/08/18 10:02:36 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 10:02:42 runner 1 connected 2025/08/18 10:02:45 runner 0 connected 2025/08/18 10:03:03 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 10:03:17 runner 0 connected 2025/08/18 10:03:27 runner 2 connected 2025/08/18 10:03:39 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 10:03:43 new: boot error: can't ssh into the instance 2025/08/18 10:04:27 runner 4 connected 2025/08/18 10:04:32 runner 1 connected 2025/08/18 10:04:34 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 10:04:35 new: boot error: can't ssh into the instance 2025/08/18 10:05:28 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 10:05:31 runner 9 connected 2025/08/18 10:05:31 runner 8 connected 2025/08/18 10:06:11 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 10:06:18 runner 0 connected 2025/08/18 10:06:22 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 10:06:24 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 10:06:39 base crash: WARNING in xfrm_state_fini 2025/08/18 10:07:00 runner 1 connected 2025/08/18 10:07:11 runner 3 connected 2025/08/18 10:07:20 runner 2 connected 2025/08/18 10:07:21 base crash: WARNING in xfrm_state_fini 2025/08/18 10:07:28 runner 1 connected 2025/08/18 10:07:29 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 16677, "comps overflows": 0, "corpus": 44997, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 46407, "coverage": 313245, "distributor delayed": 60162, "distributor undelayed": 60162, "distributor violated": 1019, "exec candidate": 63662, "exec collide": 0, "exec fuzz": 
0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 119976, "exec total [new]": 318493, "exec triage": 141958, "executor restarts": 1118, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 317025, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45844, "no exec duration": 49290000000, "no exec requests": 375, "pending": 25, "prog exec time": 269, "reproducing": 0, "rpc recv": 11232341152, "rpc sent": 2115645056, "signal": 308216, "smash jobs": 0, "triage jobs": 0, "vm output": 43090070, "vm restarts [base]": 64, "vm restarts [new]": 170 } 2025/08/18 10:08:17 runner 0 connected 2025/08/18 10:08:39 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 10:09:17 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 10:09:23 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 10:09:24 base crash: WARNING in xfrm_state_fini 2025/08/18 10:09:27 runner 5 connected 2025/08/18 10:09:29 triaged 91.1% of the corpus 2025/08/18 10:09:29 starting bug reproductions 2025/08/18 10:09:29 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/18 10:09:29 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 10:09:29 
reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "general protection fault in pcl818_ai_cancel" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in attr_data_get_block" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "possible deadlock in attr_data_get_block" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "INFO: task hung in corrupted" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "INFO: task hung in corrupted" aborted: it's no longer needed 2025/08/18 10:09:29 start reproducing 'INFO: task hung in sync_bdevs' 2025/08/18 10:09:29 start reproducing 'general protection fault in xfrm_alloc_spi' 2025/08/18 10:09:29 start reproducing 'possible deadlock in ocfs2_evict_inode' 2025/08/18 10:09:29 start reproducing 'KASAN: slab-use-after-free Write in txEnd' 2025/08/18 10:09:29 start reproducing 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/18 10:09:29 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 10:09:29 reproduction of "KASAN: slab-use-after-free Read in __xfrm_state_lookup" aborted: it's no longer needed 2025/08/18 10:09:29 failed to recv *flatrpc.InfoRequestRawT: EOF 2025/08/18 10:09:29 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/18 10:09:29 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/18 10:09:29 start reproducing 'WARNING in __linkwatch_sync_dev' 2025/08/18 10:10:13 runner 0 connected 2025/08/18 10:10:44 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:11:04 base crash: WARNING in xfrm_state_fini 2025/08/18 10:11:24 new: boot error: can't ssh into the instance 2025/08/18 10:11:53 runner 0 connected 2025/08/18 10:12:29 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 7130, "comps overflows": 0, "corpus": 45121, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48210, "coverage": 313457, "distributor delayed": 60328, "distributor undelayed": 60328, "distributor violated": 1019, "exec candidate": 73209, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 127139, "exec total [new]": 328563, "exec triage": 142479, "executor restarts": 1138, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317370, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45996, "no exec duration": 57375000000, "no exec requests": 388, "pending": 3, "prog exec time": 0, "reproducing": 7, "rpc recv": 11377325784, "rpc sent": 2169328792, "signal": 308431, "smash jobs": 0, "triage jobs": 0, "vm output": 44701399, "vm restarts [base]": 67, "vm restarts [new]": 171 } 2025/08/18 10:13:09 base: boot error: can't ssh into the instance 2025/08/18 10:13:50 runner 3 connected 2025/08/18 10:14:49 reproducing crash 'KASAN: slab-use-after-free Read in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or 
directory 2025/08/18 10:16:53 base crash: no output from test machine 2025/08/18 10:16:54 reproducing crash 'KASAN: slab-use-after-free Read in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:16:57 base crash: no output from test machine 2025/08/18 10:17:07 base crash: no output from test machine 2025/08/18 10:17:29 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 7130, "comps overflows": 0, "corpus": 45121, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48210, "coverage": 313457, "distributor delayed": 60328, "distributor undelayed": 60328, "distributor violated": 1019, "exec candidate": 73209, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 127139, "exec total [new]": 328563, "exec triage": 142479, "executor restarts": 1138, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317370, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45996, "no exec duration": 57375000000, "no exec requests": 388, "pending": 3, "prog exec time": 0, "reproducing": 7, "rpc recv": 11408221976, "rpc sent": 2169329072, "signal": 308431, "smash jobs": 0, "triage jobs": 0, "vm output": 45719569, "vm restarts [base]": 68, "vm restarts [new]": 171 } 2025/08/18 10:17:42 runner 0 connected 2025/08/18 10:17:46 runner 1 connected 2025/08/18 10:17:48 runner 2 connected 2025/08/18 10:18:50 base crash: no output from test machine 2025/08/18 10:19:01 reproducing crash 'WARNING: suspicious RCU 
usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:19:08 reproducing crash 'KASAN: slab-use-after-free Read in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:19:31 runner 3 connected 2025/08/18 10:20:42 new: boot error: can't ssh into the instance 2025/08/18 10:20:50 new: boot error: can't ssh into the instance 2025/08/18 10:20:58 new: boot error: can't ssh into the instance 2025/08/18 10:20:58 new: boot error: can't ssh into the instance 2025/08/18 10:21:17 new: boot error: can't ssh into the instance 2025/08/18 10:21:24 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:21:38 reproducing crash 'KASAN: slab-use-after-free Read in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:22:04 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:22:07 reproducing crash 'KASAN: slab-use-after-free Read in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:22:29 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 7130, "comps overflows": 0, "corpus": 45121, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48210, "coverage": 313457, "distributor delayed": 60328, "distributor undelayed": 60328, "distributor violated": 1019, "exec candidate": 73209, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 127139, "exec total [new]": 328563, "exec triage": 142479, "executor restarts": 1138, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 317370, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45996, "no exec duration": 57375000000, "no exec requests": 388, "pending": 3, "prog exec time": 0, "reproducing": 7, "rpc recv": 11531806736, "rpc sent": 2169330192, "signal": 308431, "smash jobs": 0, "triage jobs": 0, "vm output": 47484335, "vm restarts [base]": 72, "vm restarts [new]": 171 } 2025/08/18 10:22:42 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:22:42 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=true crepro=false desc='WARNING: suspicious RCU usage in get_callchain_entry' hub=false from_dashboard=false 2025/08/18 10:22:42 found repro for "WARNING: suspicious RCU usage in 
get_callchain_entry" (orig title: "-SAME-", reliability: 1), took 12.79 minutes 2025/08/18 10:22:42 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 10:22:42 "WARNING: suspicious RCU usage in get_callchain_entry": saved crash log into 1755512562.crash.log 2025/08/18 10:22:42 "WARNING: suspicious RCU usage in get_callchain_entry": saved repro log into 1755512562.repro.log 2025/08/18 10:22:45 base crash: no output from test machine 2025/08/18 10:22:48 base crash: no output from test machine 2025/08/18 10:23:01 reproducing crash 'KASAN: slab-use-after-free Read in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:23:01 repro finished 'KASAN: slab-use-after-free Read in xfrm_state_find', repro=true crepro=false desc='general protection fault in lmLogSync' hub=false from_dashboard=false 2025/08/18 10:23:01 found repro for "general protection fault in lmLogSync" (orig title: "KASAN: slab-use-after-free Read in xfrm_state_find", reliability: 1), took 13.52 minutes 2025/08/18 10:23:01 "general protection fault in lmLogSync": saved crash log into 1755512581.crash.log 2025/08/18 10:23:01 "general protection fault in lmLogSync": saved repro log into 1755512581.repro.log 2025/08/18 10:23:35 repro finished 'possible deadlock in ocfs2_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 10:23:35 failed repro for "possible deadlock in ocfs2_evict_inode", err=%!s() 2025/08/18 10:23:35 "possible deadlock in ocfs2_evict_inode": saved crash log into 1755512615.crash.log 2025/08/18 10:23:35 "possible deadlock in ocfs2_evict_inode": saved repro log into 1755512615.repro.log 2025/08/18 10:23:37 runner 2 connected 2025/08/18 10:23:48 runner 1 connected 2025/08/18 10:23:53 reproducing crash 'WARNING: suspicious RCU usage in 
get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:23:58 attempt #0 to run "WARNING: suspicious RCU usage in get_callchain_entry" on base: crashed with WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 10:23:58 crashes both: WARNING: suspicious RCU usage in get_callchain_entry / WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 10:24:00 repro finished 'general protection fault in xfrm_alloc_spi', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 10:24:00 failed repro for "general protection fault in xfrm_alloc_spi", err=%!s() 2025/08/18 10:24:00 "general protection fault in xfrm_alloc_spi": saved crash log into 1755512640.crash.log 2025/08/18 10:24:00 "general protection fault in xfrm_alloc_spi": saved repro log into 1755512640.repro.log 2025/08/18 10:24:47 runner 0 connected 2025/08/18 10:24:49 runner 3 connected 2025/08/18 10:25:01 attempt #0 to run "general protection fault in lmLogSync" on base: crashed with general protection fault in lmLogSync 2025/08/18 10:25:01 crashes both: general protection fault in lmLogSync / general protection fault in lmLogSync 2025/08/18 10:25:51 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 10:26:40 runner 1 connected 2025/08/18 10:27:29 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 4087, "comps overflows": 0, "corpus": 45157, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48823, "coverage": 313586, "distributor delayed": 60372, "distributor undelayed": 60371, "distributor violated": 1033, "exec candidate": 76252, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 130269, "exec 
total [new]": 331733, "exec triage": 142598, "executor restarts": 1152, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317505, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46035, "no exec duration": 708191000000, "no exec requests": 2537, "pending": 2, "prog exec time": 254, "reproducing": 4, "rpc recv": 11693590524, "rpc sent": 2202994544, "signal": 308571, "smash jobs": 0, "triage jobs": 0, "vm output": 49588693, "vm restarts [base]": 74, "vm restarts [new]": 174 } 2025/08/18 10:27:50 runner 0 connected 2025/08/18 10:28:06 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:29:03 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:30:03 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:30:06 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 10:30:08 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 10:30:18 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 
2025/08/18 10:30:43 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 10:30:43 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=true crepro=false desc='WARNING: suspicious RCU usage in get_callchain_entry' hub=false from_dashboard=false
2025/08/18 10:30:43 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/18 10:30:43 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed
2025/08/18 10:30:43 found repro for "WARNING: suspicious RCU usage in get_callchain_entry" (orig title: "-SAME-", reliability: 1), took 8.01 minutes
2025/08/18 10:30:43 "WARNING: suspicious RCU usage in get_callchain_entry": saved crash log into 1755513043.crash.log
2025/08/18 10:30:43 "WARNING: suspicious RCU usage in get_callchain_entry": saved repro log into 1755513043.repro.log
2025/08/18 10:30:50 runner 1 connected
2025/08/18 10:31:00 runner 3 connected
2025/08/18 10:31:33 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/08/18 10:31:54 attempt #0 to run "WARNING: suspicious RCU usage in get_callchain_entry" on base: crashed with WARNING: suspicious RCU usage in get_callchain_entry
2025/08/18 10:31:54 crashes both: WARNING: suspicious RCU usage in get_callchain_entry / WARNING: suspicious RCU usage in get_callchain_entry
2025/08/18 10:32:07 new: boot error: can't ssh into the instance
2025/08/18 10:32:16 new: boot error: can't ssh into the instance
2025/08/18 10:32:22 runner 1 connected
2025/08/18 10:32:29 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 0, "comps overflows": 0, "corpus": 45289, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 49728, "coverage": 313830, "distributor delayed": 60542, "distributor undelayed": 60535, "distributor violated": 1033, "exec candidate": 80339, "exec collide": 152, "exec fuzz": 265, "exec gen": 17, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 135238, "exec total [new]": 336706, "exec triage": 143044, "executor restarts": 1168, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317845, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46186, "no exec duration": 1252373000000, "no exec requests": 4397, "pending": 0, "prog exec time": 828, "reproducing": 3, "rpc recv": 11836404052, "rpc sent": 2265936960, "signal": 308816, "smash jobs": 0, "triage jobs": 10, "vm output": 51160328, "vm restarts [base]": 74, "vm restarts [new]": 178 }
2025/08/18 10:32:41 runner 0 connected
2025/08/18 10:32:51 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/08/18 10:33:07 runner 5 connected
2025/08/18 10:33:40 runner 3 connected
2025/08/18 10:35:07 base: boot error: can't ssh into the instance
2025/08/18 10:35:55 runner 1 connected
2025/08/18 10:37:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 22, "corpus": 45336, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 50719, "coverage": 313943, "distributor delayed": 60641, "distributor undelayed": 60641, "distributor violated": 1033, "exec candidate": 80339, "exec collide": 732, "exec fuzz": 1373, "exec gen": 94, "exec hints": 280, "exec inject": 0, "exec minimize": 1024, "exec retries": 20, "exec seeds": 128, "exec smash": 913, "exec total [base]": 139568, "exec total [new]": 341035, "exec triage": 143257, "executor restarts": 1191, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 7, "max signal": 318082, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 630, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46251, "no exec duration": 1629569000000, "no exec requests": 5627, "pending": 0, "prog exec time": 573, "reproducing": 3, "rpc recv": 11997753112, "rpc sent": 2392775272, "signal": 308916, "smash jobs": 18, "triage jobs": 8, "vm output": 54518140, "vm restarts [base]": 77, "vm restarts [new]": 179 }
2025/08/18 10:37:36 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 10:37:39 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/08/18 10:38:01 repro finished 'WARNING in __linkwatch_sync_dev', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/18 10:38:01 failed repro for "WARNING in __linkwatch_sync_dev", err=%!s()
2025/08/18 10:38:01 "WARNING in __linkwatch_sync_dev": saved crash log into 1755513481.crash.log
2025/08/18 10:38:01 "WARNING in __linkwatch_sync_dev": saved repro log into 1755513481.repro.log
2025/08/18 10:38:24 runner 1 connected
2025/08/18 10:39:00 runner 6 connected
2025/08/18 10:39:26 base crash: possible deadlock in ocfs2_xattr_set
2025/08/18 10:39:28 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 10:40:12 new: boot error: can't ssh into the instance
2025/08/18 10:40:14 runner 0 connected
2025/08/18 10:40:17 runner 3 connected
2025/08/18 10:41:02 runner 0 connected
2025/08/18 10:41:21 new: boot error: can't ssh into the instance
2025/08/18 10:41:23 base crash: WARNING in io_ring_exit_work
2025/08/18 10:41:24 patched crashed: WARNING in io_ring_exit_work [need repro = false]
2025/08/18 10:41:43 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 10:41:56 base crash: lost connection to test machine
2025/08/18 10:42:04 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 10:42:08 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 10:42:12 runner 3 connected
2025/08/18 10:42:22 new: boot error: can't ssh into the instance
2025/08/18 10:42:25 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/08/18 10:42:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 39, "corpus": 45357, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 52260, "coverage": 314091, "distributor delayed": 60737, "distributor undelayed": 60732, "distributor violated": 1033, "exec candidate": 80339, "exec collide": 1364, "exec fuzz": 2671, "exec gen": 153, "exec hints": 1207, "exec inject": 0, "exec minimize": 1447, "exec retries": 20, "exec seeds": 190, "exec smash": 1579, "exec total [base]": 143154, "exec total [new]": 345264, "exec triage": 143421, "executor restarts": 1203, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 2, "hints jobs": 7, "max signal": 318267, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 838, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46307, "no exec duration": 1886315000000, "no exec requests": 6405, "pending": 0, "prog exec time": 375, "reproducing": 2, "rpc recv": 12210107228, "rpc sent": 2531243944, "signal": 309024, "smash jobs": 2, "triage jobs": 10, "vm output": 56218035, "vm restarts [base]": 80, "vm restarts [new]": 182 }
2025/08/18 10:42:32 runner 5 connected
2025/08/18 10:42:38 runner 3 connected
2025/08/18 10:42:52 runner 0 connected
2025/08/18 10:42:57 runner 0 connected
2025/08/18 10:43:03 runner 2 connected
2025/08/18 10:43:14 runner 1 connected
2025/08/18 10:43:30 new: boot error: can't ssh into the instance
2025/08/18 10:43:47 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 10:44:19 runner 4 connected
2025/08/18 10:44:35 runner 5 connected
2025/08/18 10:46:00 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true]
2025/08/18 10:46:00 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free'
2025/08/18 10:46:00 start reproducing 'unregister_netdevice: waiting for DEV to become free'
2025/08/18 10:46:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 10:46:56 base crash: WARNING in xfrm_state_fini
2025/08/18 10:47:11 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 10:47:25 runner 6 connected
2025/08/18 10:47:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 67, "corpus": 45392, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 54048, "coverage": 314190, "distributor delayed": 60902, "distributor undelayed": 60895, "distributor violated": 1033, "exec candidate": 80339, "exec collide": 2382, "exec fuzz": 4570, "exec gen": 240, "exec hints": 2400, "exec inject": 0, "exec minimize": 2607, "exec retries": 21, "exec seeds": 271, "exec smash": 2317, "exec total [base]": 146630, "exec total [new]": 351708, "exec triage": 143683, "executor restarts": 1242, "fault jobs": 0, "fuzzer jobs": 35, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 12, "max signal": 318520, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1458, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46398, "no exec duration": 1889401000000, "no exec requests": 6410, "pending": 0, "prog exec time": 589, "reproducing": 3, "rpc recv": 12510044288, "rpc sent": 2713639928, "signal": 309121, "smash jobs": 14, "triage jobs": 9, "vm output": 60142810, "vm restarts [base]": 83, "vm restarts [new]": 188 }
2025/08/18 10:47:34 patched crashed: kernel BUG in may_open [need repro = true]
2025/08/18 10:47:34 scheduled a reproduction of 'kernel BUG in may_open'
2025/08/18 10:47:34 start reproducing 'kernel BUG in may_open'
2025/08/18 10:47:44 runner 1 connected
2025/08/18 10:47:44 new: boot error: can't ssh into the instance
2025/08/18 10:47:48 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true]
2025/08/18 10:47:48 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free'
2025/08/18 10:48:00 runner 3 connected
2025/08/18 10:48:21 base crash: kernel BUG in jfs_evict_inode
2025/08/18 10:48:23 runner 4 connected
2025/08/18 10:48:25 patched crashed: WARNING in dbAdjTree [need repro = false]
2025/08/18 10:49:02 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 10:49:10 runner 0 connected
2025/08/18 10:49:15 runner 6 connected
2025/08/18 10:49:15 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 10:50:04 runner 3 connected
2025/08/18 10:50:41 base crash: kernel BUG in may_open
2025/08/18 10:51:16 patched crashed: kernel BUG in txAbort [need repro = true]
2025/08/18 10:51:16 scheduled a reproduction of 'kernel BUG in txAbort'
2025/08/18 10:51:16 start reproducing 'kernel BUG in txAbort'
2025/08/18 10:51:28 patched crashed: kernel BUG in txAbort [need repro = true]
2025/08/18 10:51:28 scheduled a reproduction of 'kernel BUG in txAbort'
2025/08/18 10:51:28 base: boot error: can't ssh into the instance
2025/08/18 10:51:29 runner 3 connected
2025/08/18 10:52:04 runner 4 connected
2025/08/18 10:52:16 runner 6 connected
2025/08/18 10:52:17 runner 2 connected
2025/08/18 10:52:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 69, "corpus": 45413, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 54403, "coverage": 314227, "distributor delayed": 60973, "distributor undelayed": 60970, "distributor violated": 1039, "exec candidate": 80339, "exec collide": 2612, "exec fuzz": 5009, "exec gen": 264, "exec hints": 2652, "exec inject": 0, "exec minimize": 3007, "exec retries": 21, "exec seeds": 308, "exec smash": 2723, "exec total [base]": 149869, "exec total [new]": 353588, "exec triage": 143769, "executor restarts": 1276, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 8, "max signal": 318621, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1725, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46431, "no exec duration": 1889403000000, "no exec requests": 6411, "pending": 2, "prog exec time": 535, "reproducing": 5, "rpc recv": 12870489852, "rpc sent": 2817161952, "signal": 309158, "smash jobs": 19, "triage jobs": 5, "vm output": 62064966, "vm restarts [base]": 87, "vm restarts [new]": 194 }
2025/08/18 10:52:31 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 10:53:29 base crash: kernel BUG in txAbort
2025/08/18 10:54:17 runner 2 connected
2025/08/18 10:54:51 base crash: WARNING in dbAdjTree
2025/08/18 10:54:52 new: boot error: can't ssh into the instance
2025/08/18 10:55:40 runner 3 connected
2025/08/18 10:56:06 new: boot error: can't ssh into the instance
2025/08/18 10:57:03 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f 
fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:57:22 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 10:57:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 90, "corpus": 45447, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 55010, "coverage": 314324, "distributor delayed": 61034, "distributor undelayed": 61034, "distributor violated": 1062, "exec candidate": 80339, "exec collide": 2999, "exec fuzz": 5628, "exec gen": 290, "exec hints": 3102, "exec inject": 0, "exec minimize": 3745, "exec retries": 22, "exec seeds": 376, "exec smash": 3236, "exec total [base]": 155043, "exec total [new]": 356529, "exec triage": 143908, "executor restarts": 1289, "fault jobs": 0, "fuzzer jobs": 61, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 25, "max signal": 318730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2110, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46482, "no exec duration": 1958032000000, "no exec requests": 6596, "pending": 2, "prog exec time": 545, "reproducing": 5, "rpc recv": 12965512408, "rpc sent": 2953325104, "signal": 309241, "smash jobs": 34, "triage jobs": 2, "vm output": 64243043, "vm restarts [base]": 89, "vm restarts [new]": 194 } 2025/08/18 10:57:46 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/18 10:57:46 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/18 10:57:54 new: boot error: can't ssh into the instance 2025/08/18 10:58:35 runner 4 connected 2025/08/18 10:58:41 runner 5 connected 
2025/08/18 10:58:50 new: boot error: can't ssh into the instance
2025/08/18 10:59:02 new: boot error: can't ssh into the instance
2025/08/18 10:59:24 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 10:59:46 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 10:59:56 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:00:13 base crash: INFO: task hung in bdev_open
2025/08/18 11:00:14 runner 6 connected
2025/08/18 11:00:19 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 11:00:28 new: boot error: can't ssh into the instance
2025/08/18 11:01:01 runner 1 connected
2025/08/18 11:01:08 runner 4 connected
2025/08/18 11:01:10 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:01:24 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:01:43 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:02:17 patched crashed: INFO: task hung in bdev_open [need repro = false]
2025/08/18 11:02:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 107, "corpus": 45463, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 55866, "coverage": 314356, "distributor delayed": 61104, "distributor undelayed": 61102, "distributor violated": 1062, "exec candidate": 80339, "exec collide": 3308, "exec fuzz": 6235, "exec gen": 317, "exec hints": 3455, "exec inject": 0, "exec minimize": 4232, "exec retries": 22, "exec seeds": 420, "exec smash": 3785, "exec total [base]": 157544, "exec total [new]": 359029, "exec triage": 144030, "executor restarts": 1314, "fault jobs": 0, "fuzzer jobs": 75, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 29, "max signal": 318820, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2470, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46528, "no exec duration": 2610251000000, "no exec requests": 8188, "pending": 3, "prog exec time": 826, "reproducing": 5, "rpc recv": 13147460752, "rpc sent": 3065143360, "signal": 309273, "smash jobs": 37, "triage jobs": 9, "vm output": 67170034, "vm restarts [base]": 90, "vm restarts [new]": 198 }
2025/08/18 11:02:44 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:02:54 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:03:06 runner 5 connected
2025/08/18 11:03:23 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/08/18 11:03:58 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 11:03:59 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:04:03 base crash: INFO: task hung in read_part_sector
2025/08/18 11:04:11 runner 6 connected
2025/08/18 11:04:15 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:04:44 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/08/18 11:04:45 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:04:46 runner 0 connected
2025/08/18 11:04:56 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:05:00 runner 2 connected
2025/08/18 11:05:33 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:05:33 runner 3 connected
2025/08/18 11:05:33 runner 4 connected
2025/08/18 11:05:38 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:05:46 runner 1 connected
2025/08/18 11:05:46 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:05:47 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:06:26 runner 6 connected
2025/08/18 11:06:36 runner 0 connected
2025/08/18 11:06:53 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:07:09 new: boot error: can't ssh into the instance
2025/08/18 11:07:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 157, "corpus": 45491, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 56461, "coverage": 314408, "distributor delayed": 61167, "distributor undelayed": 61167, "distributor violated": 1062, "exec candidate": 80339, "exec collide": 3620, "exec fuzz": 6823, "exec gen": 355, "exec hints": 3843, "exec inject": 0, "exec minimize": 4795, "exec retries": 22, "exec seeds": 491, "exec smash": 4264, "exec total [base]": 160068, "exec total [new]": 361562, "exec triage": 144121, "executor restarts": 1339, "fault jobs": 0, "fuzzer jobs": 75, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 31, "max signal": 318917, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2779, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46566, "no exec duration": 2920356000000, "no exec requests": 8958, "pending": 3, "prog exec time": 808, "reproducing": 5, "rpc recv": 13467335796, "rpc sent": 3174186864, "signal": 309325, "smash jobs": 33, "triage jobs": 11, "vm output": 71532080, "vm restarts [base]": 95, "vm restarts [new]": 202 }
2025/08/18 11:07:58 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:08:28 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:08:48 runner 5 connected
2025/08/18 11:10:43 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:11:16 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 11:11:20 base crash: INFO: task hung in read_part_sector
2025/08/18 11:11:48 new: boot error: can't ssh into the instance
2025/08/18 11:11:52 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:12:04 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:12:05 runner 4 connected
2025/08/18 11:12:08 runner 2 connected
2025/08/18 11:12:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 194, "corpus": 45514, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 57555, "coverage": 314449, "distributor delayed": 61214, "distributor undelayed": 61214, "distributor violated": 1062, "exec candidate": 80339, "exec collide": 4150, "exec fuzz": 7696, "exec gen": 408, "exec hints": 4473, "exec inject": 0, "exec minimize": 5330, "exec retries": 22, "exec seeds": 545, "exec smash": 5035, "exec total [base]": 163617, "exec total [new]": 365111, "exec triage": 144223, "executor restarts": 1355, "fault jobs": 0, "fuzzer jobs": 62, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 26, "max signal": 318992, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3064, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46599, "no exec duration": 3274253000000, "no exec requests": 9920, "pending": 3, "prog exec time": 528, "reproducing": 5, "rpc recv": 13588674788, "rpc sent": 3318262288, "signal": 309365, "smash jobs": 25, "triage jobs": 11, "vm output": 74846527, "vm restarts [base]": 96, "vm restarts [new]": 204 }
2025/08/18 11:12:47 base crash: INFO: task hung in read_part_sector
2025/08/18 11:12:52 runner 3 connected
2025/08/18 11:12:58 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:13:08 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:13:08 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:13:17 base crash: INFO: task hung in bdev_open
2025/08/18 11:13:38 runner 1 connected
2025/08/18 11:13:48 runner 2 connected
2025/08/18 11:14:07 runner 0 connected
2025/08/18 11:14:23 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:14:27 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:14:27 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:14:45 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:15:16 runner 6 connected
2025/08/18 11:15:26 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:15:35 runner 0 connected
2025/08/18 11:15:41 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:15:52 new: boot error: can't ssh into the instance
2025/08/18 11:16:10 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true]
2025/08/18 11:16:10 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread'
2025/08/18 11:16:10 start reproducing 'INFO: task hung in bch2_journal_reclaim_thread'
2025/08/18 11:16:15 runner 1 connected
2025/08/18 11:16:29 base crash: unregister_netdevice: waiting for DEV to become free
2025/08/18 11:16:29 base crash: INFO: task hung in bch2_journal_reclaim_thread
2025/08/18 11:16:57 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:16:59 runner 5 connected
2025/08/18 11:16:59 new: boot error: can't ssh into the instance
2025/08/18 11:17:17 runner 3 connected
2025/08/18 11:17:23 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:17:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 226, "corpus": 45529, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 58408, "coverage": 314515, "distributor delayed": 61248, "distributor undelayed": 61247, "distributor violated": 1062, "exec candidate": 80339, "exec collide": 4514, "exec fuzz": 8363, "exec gen": 452, "exec hints": 5105, "exec inject": 0, "exec minimize": 5776, "exec retries": 22, "exec seeds": 589, "exec smash": 5432, "exec total [base]": 166154, "exec total [new]": 367784, "exec triage": 144300, "executor restarts": 1378, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 28, "max signal": 319077, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3283, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46625, "no exec duration": 3277851000000, "no exec requests": 9932, "pending": 3, "prog exec time": 641, "reproducing": 6, "rpc recv": 13894530896, "rpc sent": 3429699096, "signal": 309428, "smash jobs": 16, "triage jobs": 4, "vm output": 78602256, "vm restarts [base]": 103, "vm restarts [new]": 206 }
2025/08/18 11:18:12 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:18:50 base crash: INFO: task hung in bch2_journal_reclaim_thread
2025/08/18 11:18:57 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:19:13 patched crashed: INFO: task hung in bdev_open [need repro = false]
2025/08/18 11:19:28 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:19:38 runner 0 connected
2025/08/18 11:19:54 runner 6 connected
2025/08/18 11:20:54 base crash: INFO: task hung in bch2_journal_reclaim_thread
2025/08/18 11:21:25 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:21:43 runner 3 connected
2025/08/18 11:22:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:22:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 235, "corpus": 45542, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 58983, "coverage": 314549, "distributor delayed": 61281, "distributor undelayed": 61280, "distributor violated": 1072, "exec candidate": 80339, "exec collide": 4743, "exec fuzz": 8836, "exec gen": 478, "exec hints": 5582, "exec inject": 0, "exec minimize": 6121, "exec retries": 22, "exec seeds": 607, "exec smash": 5667, "exec total [base]": 168156, "exec total [new]": 369657, "exec triage": 144369, "executor restarts": 1389, "fault jobs": 0, "fuzzer jobs": 60, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 33, "max signal": 319293, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3501, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46652, "no exec duration": 3529717000000, "no exec requests": 10547, "pending": 3, "prog exec time": 873, "reproducing": 6, "rpc recv": 14004360916, "rpc sent": 3509800464, "signal": 309459, "smash jobs": 22, "triage jobs": 5, "vm output": 82064136, "vm restarts [base]": 105, "vm restarts [new]": 207 }
2025/08/18 11:22:59 runner 6 connected
2025/08/18 11:23:32 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:24:33 new: boot error: can't ssh into the instance
2025/08/18 11:24:38 base crash: INFO: task hung in read_part_sector
2025/08/18 11:24:41 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false]
2025/08/18 11:25:06 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:25:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:25:27 runner 1 connected
2025/08/18 11:25:29 runner 5 connected
2025/08/18 11:25:47 runner 3 connected
2025/08/18 11:25:48 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:25:49 runner 6 connected
2025/08/18 11:25:58 new: boot error: can't ssh into the instance
2025/08/18 11:25:58 base crash: INFO: task hung in bch2_journal_reclaim_thread
2025/08/18 11:26:34 base: boot error: can't ssh into the instance
2025/08/18 11:26:40 runner 0 connected
2025/08/18 11:27:23 runner 2 connected
2025/08/18 11:27:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 251, "corpus": 45561, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 59407, "coverage": 314712, "distributor delayed": 61312, "distributor undelayed": 61312, "distributor violated": 1072, "exec candidate": 80339, "exec collide": 4917, "exec fuzz": 9166, "exec gen": 487, "exec hints": 5852, "exec inject": 0, "exec minimize": 6550, "exec retries": 22, "exec seeds": 648, "exec smash": 5869, "exec total [base]": 169714, "exec total [new]": 371212, "exec triage": 144464, "executor restarts": 1402, "fault jobs": 0, "fuzzer jobs": 79, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 41, "max signal": 319388, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3753, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46686, "no exec duration": 3816368000000, "no exec requests": 11242, "pending": 3, "prog exec time": 889, "reproducing": 6, "rpc recv": 14218456828, "rpc sent": 3575677056, "signal": 309625, "smash jobs": 31, "triage jobs": 7, "vm output": 84977145, "vm restarts [base]": 109, "vm restarts [new]": 210 }
2025/08/18 11:27:38 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:29:22 new: boot error: can't ssh into the instance
2025/08/18 11:30:00 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/08/18 11:30:39 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:30:49 runner 5 connected
2025/08/18 11:31:07 base crash: INFO: task hung in bdev_open
2025/08/18 11:31:08 patched crashed: INFO: task hung in bdev_open [need repro = false]
2025/08/18 11:31:09 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:31:56 runner 1 connected
2025/08/18 11:31:57 runner 6 connected
2025/08/18 11:32:21 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:32:21 base crash: general protection fault in pcl818_ai_cancel
2025/08/18 11:32:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 272, "corpus": 45576, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 59867, "coverage": 314785, "distributor delayed": 61334, "distributor undelayed": 61334, "distributor violated": 1072, "exec candidate": 80339, "exec collide": 5107, "exec fuzz": 9528, "exec gen": 505, "exec hints": 6111, "exec inject": 0, "exec minimize": 7004, "exec retries": 23, "exec seeds": 695, "exec smash": 6131, "exec total [base]": 171368, "exec total [new]": 372875, "exec triage": 144534, "executor restarts": 1414, "fault jobs": 0, "fuzzer jobs": 98, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 47, "max signal": 319545, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4007, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46711, "no exec duration": 4385821000000, "no exec requests": 12493, "pending": 3, "prog exec time": 824, "reproducing": 6, "rpc recv": 14366393452, "rpc sent": 3636216504, "signal": 309681, "smash jobs": 45, "triage jobs": 6, "vm output": 88070575, "vm restarts [base]": 110, "vm restarts [new]": 212 }
2025/08/18 11:32:48 base crash: INFO: task hung in read_part_sector
2025/08/18 11:32:53 new: boot error: can't ssh into the instance
2025/08/18 11:33:06 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:33:10 runner 0 connected
2025/08/18 11:33:14 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:33:37 runner 3 connected
2025/08/18 11:33:58 base crash: INFO: trying to register non-static key in txEnd
2025/08/18 11:34:00 patched crashed: INFO: trying to register non-static key in txEnd [need repro = false]
2025/08/18 11:34:03 runner 5 connected
2025/08/18 11:34:07 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:34:20 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:34:38 new: boot error: can't ssh into the instance
2025/08/18 11:34:47 runner 0 connected
2025/08/18 11:34:49 runner 6 connected
2025/08/18 11:35:32 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:35:54 new: boot error: can't ssh into the instance
2025/08/18 11:36:07 base crash: INFO: task hung in read_part_sector
2025/08/18 11:36:22 patched crashed: INFO: rcu detected stall in corrupted [need repro = false]
2025/08/18 11:36:49 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:36:56 runner 2 connected
2025/08/18 11:37:05 base crash: possible deadlock in ocfs2_setattr
2025/08/18 11:37:08 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:37:12 runner 5 connected
2025/08/18 11:37:29 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 278, "corpus": 45581, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 60149, "coverage": 314833, "distributor delayed": 61344, "distributor undelayed": 61344, "distributor violated": 1072, "exec candidate": 80339, "exec collide": 5255, "exec fuzz": 9794, "exec gen": 521, "exec hints": 6326, "exec inject": 0, "exec minimize": 7118, "exec retries": 23, "exec seeds": 710, "exec smash": 6333, "exec total [base]": 172378, "exec total [new]": 373880, "exec triage": 144561, "executor restarts": 1429, "fault jobs": 0, "fuzzer jobs": 94, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 46, "max signal": 319562, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4085, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46722, "no exec duration": 4767067000000, "no exec requests": 13245, "pending": 3, "prog exec time": 759, "reproducing": 6, "rpc recv": 14591927168, "rpc sent": 3679383112, "signal": 309720, "smash jobs": 42, "triage jobs": 6, "vm output": 91688067, "vm restarts [base]": 114, "vm restarts [new]": 215 }
2025/08/18 11:37:39 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:37:55 runner 3 connected
2025/08/18 11:38:05 patched crashed: INFO: task hung in bdev_open [need repro = false]
2025/08/18 11:38:09 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:38:13 base crash: possible deadlock in ocfs2_init_acl
2025/08/18 11:38:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:38:53 runner 6 connected
2025/08/18 11:38:55 runner 1 connected
2025/08/18 11:39:03 runner 5 connected
2025/08/18 11:39:27 reproducing crash 'kernel BUG in txAbort': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:39:27 repro finished 'kernel BUG in txAbort', repro=true crepro=false desc='kernel BUG in txAbort' hub=false from_dashboard=false
2025/08/18 11:39:27 reproduction of "kernel BUG in txAbort" aborted: it's no longer needed
2025/08/18 11:39:27 found repro for "kernel BUG in txAbort" (orig title: "-SAME-", reliability: 1), took 48.08 minutes
2025/08/18 11:39:27 "kernel BUG in txAbort": saved crash log into 1755517167.crash.log
2025/08/18 11:39:27 "kernel BUG in txAbort": saved repro log into 1755517167.repro.log
2025/08/18 11:39:30 patched crashed: lost connection to test machine [need repro = false]
2025/08/18 11:39:34 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:40:20 runner 6 connected
2025/08/18 11:40:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/08/18 11:40:39 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:40:40 attempt #0 to run "kernel BUG in txAbort" on base: crashed with kernel BUG in txAbort
2025/08/18 11:40:40 crashes both: kernel BUG in txAbort / kernel BUG in txAbort
2025/08/18 11:41:04 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:41:10 runner 5 connected
2025/08/18 11:41:30 runner 0 connected
2025/08/18 11:41:58 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/18 11:41:58 repro finished 'kernel BUG in may_open', repro=true crepro=false desc='kernel BUG in may_open' hub=false from_dashboard=false
2025/08/18 11:41:58 found repro for "kernel BUG in may_open" (orig title: "-SAME-", reliability: 1), took 53.51 minutes
2025/08/18 11:41:58 "kernel BUG in may_open": saved crash log into 1755517318.crash.log
2025/08/18 11:41:58 "kernel BUG in may_open": saved repro log into 1755517318.repro.log
2025/08/18 11:42:26 status reporting terminated
2025/08/18 11:42:26 bug reporting terminated
2025/08/18 11:42:26 attempt #0 to run "kernel BUG in may_open" on base: skipping due to errors: context deadline exceeded /
2025/08/18 11:42:26 repro finished 'KASAN: slab-use-after-free Write in txEnd', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/18 11:42:33 repro finished 'INFO: task hung in bch2_journal_reclaim_thread', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/18 11:42:45 syz-diff (base): kernel context loop terminated
2025/08/18 11:43:30 repro finished 'unregister_netdevice: waiting for DEV to become free', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/18 11:45:03 repro finished 'INFO: task hung in sync_bdevs', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/18 11:52:06 syz-diff (new): kernel context loop terminated
2025/08/18 11:52:06 diff fuzzing terminated
2025/08/18 11:52:06 fuzzing is finished
2025/08/18 11:52:06 status at the end:
Title On-Base On-Patched
INFO: rcu detected stall in corrupted 1 crashes
INFO: task hung in bch2_journal_reclaim_thread 4 crashes 2 crashes
INFO: task hung in bdev_open 3 crashes 4 crashes
INFO: task hung in corrupted 1 crashes 2 crashes
INFO: task hung in read_part_sector 6 crashes
INFO: task hung in sync_bdevs 1 crashes
INFO: trying to register non-static key in txEnd 1 crashes 1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 1 crashes 1 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi 4 crashes 17 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes
KASAN: slab-use-after-free Write in txEnd 1 crashes
WARNING in __linkwatch_sync_dev 1 crashes
WARNING in dbAdjTree 2 crashes 3 crashes
WARNING in ext4_xattr_inode_lookup_create 1 crashes 2 crashes
WARNING in io_ring_exit_work 1 crashes 1 crashes
WARNING in xfrm_state_fini 11 crashes 25 crashes
WARNING: suspicious RCU usage in get_callchain_entry 2 crashes 4 crashes [reproduced]
general protection fault in lmLogSync 1 crashes [reproduced]
general protection fault in pcl818_ai_cancel 2 crashes 3 crashes
general protection fault in xfrm_alloc_spi 1 crashes
kernel BUG in jfs_evict_inode 1 crashes
kernel BUG in may_open 1 crashes 1 crashes [reproduced]
kernel BUG in txAbort 2 crashes 2 crashes [reproduced]
kernel BUG in txUnlock 2 crashes 8 crashes
lost connection to test machine 2 crashes 16 crashes
no output from test machine 6 crashes
possible deadlock in attr_data_get_block 1 crashes 2 crashes
possible deadlock in ntfs_fiemap 1 crashes 3 crashes
possible deadlock in ocfs2_evict_inode 1 crashes
possible deadlock in ocfs2_init_acl 18 crashes 41 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 18 crashes 25 crashes
possible deadlock in ocfs2_setattr 1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 16 crashes 31 crashes
possible deadlock in ocfs2_xattr_set 4 crashes 3 crashes
unregister_netdevice: waiting for DEV to become free 1 crashes 3 crashes