2025/08/19 10:04:31 extracted 303749 symbol hashes for base and 303749 for patched
2025/08/19 10:04:31 adding directly modified files to focus areas: ["drivers/of/of_numa.c"]
2025/08/19 10:04:32 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/19 10:05:29 runner 3 connected
2025/08/19 10:05:29 runner 9 connected
2025/08/19 10:05:29 runner 6 connected
2025/08/19 10:05:29 runner 3 connected
2025/08/19 10:05:29 runner 0 connected
2025/08/19 10:05:30 runner 2 connected
2025/08/19 10:05:30 runner 5 connected
2025/08/19 10:05:30 runner 7 connected
2025/08/19 10:05:30 runner 1 connected
2025/08/19 10:05:30 runner 0 connected
2025/08/19 10:05:31 runner 4 connected
2025/08/19 10:05:31 runner 8 connected
2025/08/19 10:05:36 initializing coverage information...
2025/08/19 10:05:36 executor cover filter: 0 PCs
2025/08/19 10:05:40 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/19 10:05:40 base: machine check complete 2025/08/19 10:05:41 discovered 7699 source files, 338618 symbols 2025/08/19 10:05:41 coverage filter: drivers/of/of_numa.c: [drivers/of/of_numa.c] 2025/08/19 10:05:41 area "files": 12 PCs in the cover filter 2025/08/19 10:05:41 area "": 0 PCs in the cover filter 2025/08/19 10:05:41 executor cover filter: 0 PCs 2025/08/19 10:05:45 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory 
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory 
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : 
/proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] 
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] 
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/19 10:05:45 new: machine check complete 2025/08/19 10:05:46 new: adding 81150 seeds 2025/08/19 10:07:02 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/19 10:07:02 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/19 10:07:10 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/19 10:07:10 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/19 10:07:15 base crash: WARNING in xfrm_state_fini 2025/08/19 10:07:36 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 10:07:36 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 10:07:47 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 10:07:47 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 10:07:51 runner 6 connected 2025/08/19 10:07:57 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 10:08:04 runner 0 connected 2025/08/19 10:08:26 runner 9 connected 2025/08/19 10:08:37 runner 7 connected 2025/08/19 10:08:46 runner 3 connected 2025/08/19 10:08:48 patched crashed: general protection fault in xfrm_alloc_spi [need repro = true] 2025/08/19 10:08:48 scheduled a reproduction of 'general protection fault in xfrm_alloc_spi' 2025/08/19 10:09:24 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:09:33 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 78145, "comps overflows": 0, "corpus": 2930, "corpus [files]": 1, "cover overflows": 2057, "coverage": 148933, "distributor delayed": 4380, "distributor undelayed": 4369, "distributor violated": 12, "exec candidate": 3005, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 8209, "exec total [new]": 13428, "exec triage": 9372, "executor restarts": 115, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 150443, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: 
resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3005, "no exec duration": 42775000000, "no exec requests": 341, "pending": 5, "prog exec time": 289, "reproducing": 0, "rpc recv": 847841052, "rpc sent": 77940312, "signal": 146961, "smash jobs": 0, "triage jobs": 0, "vm output": 2109380, "vm restarts [base]": 6, "vm restarts [new]": 11 } 2025/08/19 10:09:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:09:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:10:01 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 10:10:13 runner 8 connected 2025/08/19 10:10:35 runner 6 connected 2025/08/19 10:10:45 runner 4 connected 2025/08/19 10:10:52 runner 1 connected 2025/08/19 10:11:12 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:11:33 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:12:01 runner 8 connected 2025/08/19 10:12:01 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 10:12:01 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/19 10:12:23 runner 6 connected 2025/08/19 10:12:29 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:12:51 runner 9 connected 2025/08/19 10:13:18 runner 1 connected 2025/08/19 10:14:23 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:14:29 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 10:14:29 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/19 10:14:33 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 74891, "comps overflows": 0, "corpus": 6169, "corpus [files]": 1, "cover overflows": 4744, "coverage": 185688, "distributor delayed": 10804, "distributor undelayed": 10791, "distributor violated": 399, "exec candidate": 6259, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 20619, "exec total [new]": 28324, "exec triage": 19612, "executor restarts": 147, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 188178, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 6259, "no exec duration": 42806000000, "no exec requests": 343, "pending": 7, "prog exec time": 289, "reproducing": 0, "rpc recv": 1391935668, "rpc sent": 177146488, "signal": 182840, "smash jobs": 0, "triage jobs": 0, "vm output": 3531919, "vm restarts [base]": 8, "vm restarts [new]": 17 } 2025/08/19 10:14:39 new: boot error: can't ssh into the instance 2025/08/19 10:14:39 new: boot error: can't ssh into the instance 2025/08/19 10:14:40 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:15:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 10:15:06 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 10:15:13 runner 5 connected 2025/08/19 10:15:14 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 10:15:14 scheduled a reproduction of 'possible deadlock in 
ocfs2_xattr_set' 2025/08/19 10:15:19 runner 4 connected 2025/08/19 10:15:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:15:28 runner 1 connected 2025/08/19 10:15:28 runner 2 connected 2025/08/19 10:15:29 runner 6 connected 2025/08/19 10:15:55 runner 9 connected 2025/08/19 10:16:03 runner 7 connected 2025/08/19 10:16:15 runner 8 connected 2025/08/19 10:16:59 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 10:17:15 new: boot error: can't ssh into the instance 2025/08/19 10:17:55 runner 0 connected 2025/08/19 10:18:05 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:18:09 base crash: possible deadlock in attr_data_get_block 2025/08/19 10:18:12 runner 3 connected 2025/08/19 10:18:16 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:18:22 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 10:18:22 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 10:18:33 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 10:18:33 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 10:18:53 new: boot error: can't ssh into the instance 2025/08/19 10:18:56 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 10:18:57 runner 3 connected 2025/08/19 10:19:01 runner 4 connected 2025/08/19 10:19:05 runner 8 connected 2025/08/19 10:19:12 runner 6 connected 2025/08/19 10:19:23 runner 1 connected 2025/08/19 10:19:32 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 10:19:33 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 70918, "comps overflows": 0, "corpus": 10101, "corpus [files]": 1, "cover overflows": 7310, "coverage": 213338, "distributor delayed": 15809, "distributor undelayed": 15809, "distributor violated": 406, "exec candidate": 10232, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 31353, "exec total [new]": 46042, "exec triage": 31753, "executor restarts": 203, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 215047, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 10232, "no exec duration": 42817000000, "no exec requests": 344, "pending": 11, "prog exec time": 215, "reproducing": 0, "rpc recv": 2210551044, "rpc sent": 304537456, "signal": 210216, "smash jobs": 0, "triage jobs": 0, "vm output": 6083752, "vm restarts [base]": 10, "vm restarts [new]": 30 } 2025/08/19 10:19:46 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 10:19:53 runner 0 connected 2025/08/19 10:20:29 runner 1 connected 2025/08/19 10:20:43 runner 5 connected 2025/08/19 10:20:45 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:20:55 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:20:56 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:21:28 base 
crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:21:42 runner 1 connected 2025/08/19 10:21:51 runner 3 connected 2025/08/19 10:21:52 runner 6 connected 2025/08/19 10:21:56 base crash: WARNING in xfrm_state_fini 2025/08/19 10:22:10 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/08/19 10:22:10 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/08/19 10:22:17 runner 0 connected 2025/08/19 10:22:53 runner 2 connected 2025/08/19 10:23:06 runner 9 connected 2025/08/19 10:23:37 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:24:03 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:24:09 base crash: WARNING in drv_unassign_vif_chanctx 2025/08/19 10:24:17 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:24:20 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/19 10:24:33 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 65864, "comps overflows": 0, "corpus": 15099, "corpus [files]": 1, "cover overflows": 10519, "coverage": 236770, "distributor delayed": 21089, "distributor undelayed": 21087, "distributor violated": 406, "exec candidate": 15286, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 39954, "exec total [new]": 69547, "exec triage": 47327, "executor restarts": 268, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 238668, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 15286, "no exec duration": 42817000000, "no exec requests": 344, "pending": 12, "prog exec time": 278, "reproducing": 0, "rpc recv": 2905286708, "rpc sent": 428198304, "signal": 233188, "smash jobs": 0, "triage jobs": 0, "vm output": 9103198, "vm restarts [base]": 15, "vm restarts [new]": 34 } 2025/08/19 10:24:52 runner 5 connected 2025/08/19 10:25:07 runner 3 connected 2025/08/19 10:25:10 runner 0 connected 2025/08/19 10:26:01 patched crashed: WARNING in drv_unassign_vif_chanctx [need repro = false] 2025/08/19 10:28:39 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 10:28:50 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 10:28:59 new: boot error: can't ssh into the instance 2025/08/19 10:29:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:29:05 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:29:14 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 10:29:24 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:29:28 runner 5 connected 2025/08/19 10:29:33 STAT { "buffer too small": 0, "candidate triage jobs": 280, "candidates": 61703, "comps overflows": 0, "corpus": 18976, "corpus [files]": 1, "cover overflows": 13209, "coverage": 250459, "distributor delayed": 26225, "distributor undelayed": 25947, "distributor violated": 407, "exec candidate": 19447, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total 
[base]": 49814, "exec total [new]": 89186, "exec triage": 59619, "executor restarts": 311, "fault jobs": 0, "fuzzer jobs": 280, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 253161, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 19447, "no exec duration": 47955000000, "no exec requests": 356, "pending": 12, "prog exec time": 236, "reproducing": 0, "rpc recv": 3329463528, "rpc sent": 548504168, "signal": 246668, "smash jobs": 0, "triage jobs": 0, "vm output": 11638124, "vm restarts [base]": 16, "vm restarts [new]": 37 } 2025/08/19 10:29:38 runner 9 connected 2025/08/19 10:29:47 runner 0 connected 2025/08/19 10:29:50 runner 2 connected 2025/08/19 10:29:53 runner 1 connected 2025/08/19 10:29:56 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 10:30:04 runner 6 connected 2025/08/19 10:30:14 runner 3 connected 2025/08/19 10:30:16 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 10:30:25 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 10:30:45 runner 1 connected 2025/08/19 10:31:06 runner 8 connected 2025/08/19 10:31:21 runner 2 connected 2025/08/19 10:31:30 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:31:40 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:31:42 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:31:47 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:32:02 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:32:14 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:32:19 runner 2 connected 2025/08/19 10:32:22 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 10:32:25 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:32:26 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:32:29 runner 8 connected 2025/08/19 10:32:30 runner 1 connected 2025/08/19 10:32:35 runner 6 connected 2025/08/19 10:32:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:32:52 runner 5 connected 2025/08/19 10:32:56 base crash: possible deadlock in ocfs2_setattr 2025/08/19 10:33:03 runner 1 connected 2025/08/19 10:33:11 runner 3 connected 2025/08/19 10:33:14 runner 0 connected 2025/08/19 10:33:15 runner 0 connected 2025/08/19 10:33:28 runner 9 connected 2025/08/19 10:33:43 new: boot error: can't ssh into the instance 2025/08/19 10:33:45 runner 2 connected 2025/08/19 10:34:00 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:34:14 base: boot error: can't ssh into the instance 2025/08/19 10:34:15 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/19 10:34:15 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/19 10:34:24 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:34:33 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 58561, "comps overflows": 0, "corpus": 22321, "corpus [files]": 1, "cover overflows": 14879, "coverage": 261486, "distributor delayed": 30825, "distributor undelayed": 
30825, "distributor violated": 534, "exec candidate": 22589, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 55995, "exec total [new]": 104630, "exec triage": 69580, "executor restarts": 397, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 263576, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 22589, "no exec duration": 49111000000, "no exec requests": 360, "pending": 13, "prog exec time": 276, "reproducing": 0, "rpc recv": 4272062332, "rpc sent": 670981128, "signal": 257719, "smash jobs": 0, "triage jobs": 0, "vm output": 14719370, "vm restarts [base]": 21, "vm restarts [new]": 52 } 2025/08/19 10:34:39 runner 7 connected 2025/08/19 10:34:49 runner 0 connected 2025/08/19 10:35:03 runner 3 connected 2025/08/19 10:35:04 runner 2 connected 2025/08/19 10:35:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 10:35:17 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 10:35:28 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 10:35:28 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 10:35:32 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:35:46 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 10:35:46 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 10:36:07 new: boot error: can't ssh into the instance 2025/08/19 10:36:13 runner 3 connected 2025/08/19 10:36:16 runner 8 connected 2025/08/19 10:36:21 runner 1 connected 2025/08/19 10:36:31 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/19 10:36:43 runner 6 connected 2025/08/19 10:36:59 runner 4 connected 2025/08/19 10:37:19 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:37:27 runner 2 connected 2025/08/19 10:37:39 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/19 10:37:41 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/19 10:38:06 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/08/19 10:38:16 runner 1 connected 2025/08/19 10:38:29 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/19 10:38:29 runner 9 connected 2025/08/19 10:38:38 runner 0 connected 2025/08/19 10:38:56 runner 6 connected 2025/08/19 10:39:15 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:39:16 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 10:39:26 runner 2 connected 2025/08/19 10:39:33 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 55280, "comps overflows": 0, "corpus": 25553, "corpus [files]": 2, "cover overflows": 16911, "coverage": 270815, "distributor delayed": 35470, "distributor undelayed": 35470, "distributor violated": 549, "exec candidate": 25870, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 66699, "exec total [new]": 121146, "exec triage": 79578, "executor restarts": 477, "fault jobs": 
0, "fuzzer jobs": 41, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 272822, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 25870, "no exec duration": 49111000000, "no exec requests": 360, "pending": 16, "prog exec time": 390, "reproducing": 0, "rpc recv": 5015629296, "rpc sent": 813250480, "signal": 266897, "smash jobs": 0, "triage jobs": 0, "vm output": 17396775, "vm restarts [base]": 25, "vm restarts [new]": 63 } 2025/08/19 10:39:38 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:40:05 runner 1 connected 2025/08/19 10:40:06 runner 3 connected 2025/08/19 10:40:09 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:40:35 runner 6 connected 2025/08/19 10:40:59 runner 4 connected 2025/08/19 10:41:14 base crash: kernel BUG in jfs_evict_inode 2025/08/19 10:41:23 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 10:41:40 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 10:41:44 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:41:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:42:03 runner 0 connected 2025/08/19 10:42:20 runner 6 connected 2025/08/19 10:42:22 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:42:28 runner 8 connected 2025/08/19 10:42:33 runner 4 connected 2025/08/19 10:42:45 runner 3 connected 2025/08/19 10:42:50 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:43:11 runner 9 connected 2025/08/19 10:43:16 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:43:19 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 10:43:30 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:43:32 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 10:43:35 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/19 10:43:39 runner 1 connected 2025/08/19 10:43:47 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/19 10:44:06 runner 4 connected 2025/08/19 10:44:09 runner 7 connected 2025/08/19 10:44:19 runner 3 connected 2025/08/19 10:44:24 runner 1 connected 2025/08/19 10:44:25 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 10:44:30 new: boot error: can't ssh into the instance 2025/08/19 10:44:33 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 51745, "comps overflows": 0, "corpus": 29033, "corpus [files]": 2, "cover overflows": 19294, "coverage": 279459, "distributor delayed": 40443, "distributor undelayed": 40443, "distributor violated": 558, "exec candidate": 29405, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 78777, "exec total [new]": 140603, "exec triage": 90418, "executor restarts": 553, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 281541, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, 
"minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29405, "no exec duration": 49173000000, "no exec requests": 363, "pending": 16, "prog exec time": 221, "reproducing": 0, "rpc recv": 5753781000, "rpc sent": 949052288, "signal": 275258, "smash jobs": 0, "triage jobs": 0, "vm output": 19928390, "vm restarts [base]": 28, "vm restarts [new]": 75 } 2025/08/19 10:44:37 runner 3 connected 2025/08/19 10:44:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:45:19 runner 5 connected 2025/08/19 10:45:22 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:45:44 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:45:49 runner 0 connected 2025/08/19 10:45:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:45:56 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:46:11 runner 0 connected 2025/08/19 10:46:26 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:46:33 runner 4 connected 2025/08/19 10:46:45 runner 7 connected 2025/08/19 10:46:45 runner 9 connected 2025/08/19 10:47:15 runner 5 connected 2025/08/19 10:47:17 base crash: WARNING in xfrm_state_fini 2025/08/19 10:48:14 runner 3 connected 2025/08/19 10:48:19 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 10:48:19 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 10:48:29 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 10:48:29 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 10:48:40 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 10:48:40 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 10:48:47 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:49:08 runner 4 connected 2025/08/19 10:49:12 patched crashed: KASAN: slab-use-after-free Write in __xfrm_state_delete [need repro = true] 2025/08/19 10:49:12 scheduled a reproduction of 'KASAN: slab-use-after-free Write in __xfrm_state_delete' 2025/08/19 10:49:19 runner 7 connected 2025/08/19 10:49:30 runner 2 connected 2025/08/19 10:49:32 base crash: lost connection to test machine 2025/08/19 10:49:33 STAT { "buffer too small": 0, "candidate triage jobs": 27, "candidates": 47511, "comps overflows": 0, "corpus": 33198, "corpus [files]": 2, "cover overflows": 22775, "coverage": 288492, "distributor delayed": 45450, "distributor undelayed": 45449, "distributor violated": 582, "exec candidate": 33639, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 85511, "exec total [new]": 166805, "exec triage": 103710, "executor restarts": 615, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 290799, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, 
"new inputs": 33639, "no exec duration": 49796000000, "no exec requests": 367, "pending": 20, "prog exec time": 189, "reproducing": 0, "rpc recv": 6391613124, "rpc sent": 1077507440, "signal": 284017, "smash jobs": 0, "triage jobs": 0, "vm output": 22315009, "vm restarts [base]": 31, "vm restarts [new]": 84 } 2025/08/19 10:49:35 runner 0 connected 2025/08/19 10:49:53 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/08/19 10:50:01 runner 3 connected 2025/08/19 10:50:13 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 10:50:13 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 10:50:15 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 10:50:15 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 10:50:16 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 10:50:16 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 10:50:21 runner 3 connected 2025/08/19 10:50:43 runner 4 connected 2025/08/19 10:51:02 runner 7 connected 2025/08/19 10:51:05 runner 5 connected 2025/08/19 10:51:12 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:51:14 base crash: kernel BUG in jfs_evict_inode 2025/08/19 10:51:21 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:51:26 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:52:01 runner 1 connected 2025/08/19 10:52:03 runner 3 connected 2025/08/19 10:52:09 base crash: INFO: task hung in read_part_sector 2025/08/19 10:52:10 runner 4 connected 2025/08/19 10:52:16 runner 5 connected 2025/08/19 10:52:16 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:52:58 runner 1 connected 2025/08/19 10:53:05 runner 0 connected 2025/08/19 10:53:16 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 10:53:37 new: boot error: can't ssh into the instance 2025/08/19 10:54:12 patched crashed: INFO: rcu detected stall in worker_thread [need repro = false] 2025/08/19 10:54:31 base: boot error: can't ssh into the instance 2025/08/19 10:54:33 STAT { "buffer too small": 0, "candidate triage jobs": 31, "candidates": 44098, "comps overflows": 0, "corpus": 36561, "corpus [files]": 2, "cover overflows": 25406, "coverage": 295482, "distributor delayed": 50049, "distributor undelayed": 50049, "distributor violated": 585, "exec candidate": 37052, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 93836, "exec total [new]": 187676, "exec triage": 114235, "executor restarts": 655, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 297683, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 37052, "no exec duration": 49941000000, "no exec requests": 370, "pending": 23, "prog exec time": 203, "reproducing": 0, "rpc recv": 7061524100, "rpc sent": 1208239880, "signal": 290947, "smash jobs": 0, "triage jobs": 0, "vm output": 24830437, "vm restarts [base]": 34, "vm restarts [new]": 93 } 2025/08/19 10:54:34 runner 6 connected 2025/08/19 10:54:45 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true] 
2025/08/19 10:54:45 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/19 10:54:58 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 10:55:01 patched crashed: WARNING in rate_control_rate_init [need repro = true] 2025/08/19 10:55:01 scheduled a reproduction of 'WARNING in rate_control_rate_init' 2025/08/19 10:55:03 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:55:09 runner 8 connected 2025/08/19 10:55:14 base crash: WARNING in xfrm_state_fini 2025/08/19 10:55:15 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 10:55:22 runner 2 connected 2025/08/19 10:55:26 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 10:55:35 runner 4 connected 2025/08/19 10:55:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:55:47 runner 7 connected 2025/08/19 10:55:50 runner 9 connected 2025/08/19 10:55:54 runner 0 connected 2025/08/19 10:56:05 runner 5 connected 2025/08/19 10:56:16 runner 0 connected 2025/08/19 10:56:16 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:56:26 runner 6 connected 2025/08/19 10:56:29 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 10:57:07 runner 4 connected 2025/08/19 10:57:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:57:18 runner 1 connected 2025/08/19 10:57:30 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:57:41 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 10:57:41 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:57:52 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 10:57:59 runner 8 connected 2025/08/19 10:58:10 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 10:58:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:58:18 runner 1 connected 2025/08/19 10:58:18 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 10:58:30 runner 0 connected 2025/08/19 10:58:40 runner 4 connected 2025/08/19 10:58:59 runner 5 connected 2025/08/19 10:59:00 runner 1 connected 2025/08/19 10:59:06 runner 6 connected 2025/08/19 10:59:33 STAT { "buffer too small": 0, "candidate triage jobs": 36, "candidates": 41200, "comps overflows": 0, "corpus": 39418, "corpus [files]": 2, "cover overflows": 27278, "coverage": 301307, "distributor delayed": 54590, "distributor undelayed": 54590, "distributor violated": 687, "exec candidate": 39950, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100605, "exec total [new]": 204928, "exec triage": 123058, "executor restarts": 723, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 303595, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 39950, "no exec duration": 49971000000, "no exec requests": 371, "pending": 25, "prog exec time": 245, "reproducing": 0, "rpc recv": 7886452808, 
"rpc sent": 1346641672, "signal": 296638, "smash jobs": 0, "triage jobs": 0, "vm output": 27638109, "vm restarts [base]": 38, "vm restarts [new]": 108 } 2025/08/19 10:59:47 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:00:12 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:00:20 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:00:21 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 11:00:21 new: boot error: can't ssh into the instance 2025/08/19 11:00:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:00:30 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/19 11:00:30 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/19 11:00:37 runner 6 connected 2025/08/19 11:01:01 runner 9 connected 2025/08/19 11:01:08 base crash: WARNING in dbAdjTree 2025/08/19 11:01:08 runner 1 connected 2025/08/19 11:01:10 runner 2 connected 2025/08/19 11:01:10 runner 2 connected 2025/08/19 11:01:15 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:01:20 runner 8 connected 2025/08/19 11:01:56 runner 1 connected 2025/08/19 11:02:04 runner 4 connected 2025/08/19 11:02:42 patched crashed: KASAN: slab-use-after-free Write in txEnd [need repro = true] 2025/08/19 11:02:42 scheduled a reproduction of 'KASAN: slab-use-after-free Write in txEnd' 2025/08/19 11:03:22 new: boot error: can't ssh into the instance 2025/08/19 11:03:31 runner 9 connected 2025/08/19 11:03:48 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:03:55 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:04:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:04:01 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 11:04:01 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:04:06 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/19 11:04:06 scheduled a reproduction of 'unregister_netdevice: waiting for DEV to become free' 2025/08/19 11:04:07 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:04:11 runner 3 connected 2025/08/19 11:04:33 STAT { "buffer too small": 0, "candidate triage jobs": 63, "candidates": 39491, "comps overflows": 0, "corpus": 41063, "corpus [files]": 3, "cover overflows": 29492, "coverage": 305209, "distributor delayed": 56810, "distributor undelayed": 56751, "distributor violated": 727, "exec candidate": 41659, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 105076, "exec total [new]": 221092, "exec triage": 128316, "executor restarts": 783, "fault jobs": 0, "fuzzer jobs": 63, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 307802, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41659, "no exec duration": 49971000000, "no exec requests": 371, "pending": 28, "prog exec time": 223, "reproducing": 0, "rpc recv": 8381830468, "rpc 
sent": 1478708424, "signal": 300539, "smash jobs": 0, "triage jobs": 0, "vm output": 31027392, "vm restarts [base]": 40, "vm restarts [new]": 116 } 2025/08/19 11:04:37 runner 5 connected 2025/08/19 11:04:44 runner 6 connected 2025/08/19 11:04:48 runner 8 connected 2025/08/19 11:04:51 runner 2 connected 2025/08/19 11:04:54 runner 0 connected 2025/08/19 11:04:56 runner 4 connected 2025/08/19 11:05:19 base: boot error: can't ssh into the instance 2025/08/19 11:05:19 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:05:43 patched crashed: possible deadlock in ocfs2_reserve_local_alloc_bits [need repro = true] 2025/08/19 11:05:43 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/08/19 11:05:55 patched crashed: possible deadlock in ocfs2_reserve_local_alloc_bits [need repro = true] 2025/08/19 11:05:55 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/08/19 11:06:07 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:06:09 runner 2 connected 2025/08/19 11:06:09 runner 3 connected 2025/08/19 11:06:11 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 11:06:31 runner 5 connected 2025/08/19 11:06:38 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:06:44 runner 0 connected 2025/08/19 11:06:56 runner 8 connected 2025/08/19 11:06:59 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 11:07:00 runner 3 connected 2025/08/19 11:07:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:07:27 runner 2 connected 2025/08/19 11:07:30 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:07:46 base: boot error: can't ssh into the instance 2025/08/19 11:07:48 runner 6 connected 2025/08/19 11:08:21 runner 4 connected 2025/08/19 11:08:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:08:36 runner 0 connected 2025/08/19 11:08:45 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/19 11:08:45 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/19 11:08:48 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:09:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:09:11 runner 0 connected 2025/08/19 11:09:11 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:09:15 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:09:33 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 38487, "comps overflows": 0, "corpus": 42059, "corpus [files]": 3, "cover overflows": 32432, "coverage": 307360, "distributor delayed": 58277, "distributor undelayed": 58259, "distributor violated": 769, "exec candidate": 42663, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 112049, "exec total [new]": 237776, "exec triage": 131566, "executor restarts": 847, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 309761, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, 
"minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42663, "no exec duration": 49971000000, "no exec requests": 371, "pending": 31, "prog exec time": 199, "reproducing": 0, "rpc recv": 9006211016, "rpc sent": 1616630664, "signal": 302643, "smash jobs": 0, "triage jobs": 0, "vm output": 32900621, "vm restarts [base]": 44, "vm restarts [new]": 129 } 2025/08/19 11:09:34 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:09:36 runner 2 connected 2025/08/19 11:09:37 runner 1 connected 2025/08/19 11:09:46 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:09:50 runner 6 connected 2025/08/19 11:09:59 runner 4 connected 2025/08/19 11:10:04 runner 2 connected 2025/08/19 11:10:29 new: boot error: can't ssh into the instance 2025/08/19 11:10:37 runner 5 connected 2025/08/19 11:10:39 base crash: general protection fault in pcl818_ai_cancel 2025/08/19 11:10:51 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 11:11:05 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 11:11:05 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 11:11:09 base crash: WARNING: suspicious RCU usage in get_callchain_entry 2025/08/19 11:11:16 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false] 2025/08/19 11:11:18 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:11:29 runner 3 connected 2025/08/19 11:11:39 runner 0 connected 2025/08/19 11:11:54 runner 8 connected 2025/08/19 11:11:59 runner 2 connected 2025/08/19 11:12:05 runner 5 connected 2025/08/19 11:12:07 runner 2 connected 2025/08/19 11:12:19 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:12:33 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:12:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:12:46 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:13:07 runner 4 connected 2025/08/19 11:13:12 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:13:22 runner 0 connected 2025/08/19 11:13:24 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 11:13:26 runner 6 connected 2025/08/19 11:13:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:13:36 runner 3 connected 2025/08/19 11:14:00 runner 1 connected 2025/08/19 11:14:04 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 11:14:07 new: boot error: can't ssh into the instance 2025/08/19 11:14:12 runner 2 connected 2025/08/19 11:14:13 base crash: lost connection to test machine 2025/08/19 11:14:24 runner 2 connected 2025/08/19 11:14:33 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 37434, "comps overflows": 0, "corpus": 43088, "corpus [files]": 3, "cover overflows": 33956, "coverage": 309327, "distributor delayed": 60283, "distributor undelayed": 60281, "distributor violated": 852, "exec candidate": 43716, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 119367, "exec total [new]": 248343, "exec triage": 134792, "executor restarts": 898, "fault jobs": 0, "fuzzer jobs": 20, 
"fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 311730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43716, "no exec duration": 49974000000, "no exec requests": 372, "pending": 32, "prog exec time": 333, "reproducing": 0, "rpc recv": 9713313844, "rpc sent": 1742649080, "signal": 304616, "smash jobs": 0, "triage jobs": 0, "vm output": 35185552, "vm restarts [base]": 51, "vm restarts [new]": 141 } 2025/08/19 11:14:53 runner 0 connected 2025/08/19 11:14:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:14:56 runner 1 connected 2025/08/19 11:15:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:15:01 runner 3 connected 2025/08/19 11:15:46 runner 6 connected 2025/08/19 11:15:57 runner 5 connected 2025/08/19 11:16:03 patched crashed: INFO: trying to register non-static key in ocfs2_dlm_shutdown [need repro = true] 2025/08/19 11:16:03 scheduled a reproduction of 'INFO: trying to register non-static key in ocfs2_dlm_shutdown' 2025/08/19 11:16:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:16:41 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown 2025/08/19 11:16:42 base crash: WARNING in xfrm_state_fini 2025/08/19 11:16:51 runner 0 connected 2025/08/19 11:17:22 runner 8 connected 2025/08/19 11:17:30 runner 2 connected 2025/08/19 11:17:32 runner 1 connected 2025/08/19 11:17:32 new: boot error: can't ssh into the instance 2025/08/19 11:17:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:17:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:17:43 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/19 11:17:43 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 11:17:47 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:17:53 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 11:18:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:18:18 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:18:22 runner 3 connected 2025/08/19 11:18:24 runner 6 connected 2025/08/19 11:18:27 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:18:32 runner 4 connected 2025/08/19 11:18:36 runner 5 connected 2025/08/19 11:18:43 runner 3 connected 2025/08/19 11:18:50 runner 1 connected 2025/08/19 11:19:04 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 11:19:04 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 11:19:07 runner 1 connected 2025/08/19 11:19:07 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 11:19:07 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 11:19:08 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 11:19:08 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 11:19:16 runner 8 connected 2025/08/19 11:19:17 base crash: kernel BUG in txUnlock 2025/08/19 11:19:33 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 36602, "comps overflows": 0, "corpus": 43904, 
"corpus [files]": 3, "cover overflows": 35970, "coverage": 311090, "distributor delayed": 61665, "distributor undelayed": 61664, "distributor violated": 857, "exec candidate": 44548, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 127288, "exec total [new]": 260424, "exec triage": 137418, "executor restarts": 950, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 313685, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44548, "no exec duration": 49977000000, "no exec requests": 373, "pending": 37, "prog exec time": 292, "reproducing": 0, "rpc recv": 10338581572, "rpc sent": 1858288016, "signal": 306346, "smash jobs": 0, "triage jobs": 0, "vm output": 37181015, "vm restarts [base]": 57, "vm restarts [new]": 152 } 2025/08/19 11:19:40 new: boot error: can't ssh into the instance 2025/08/19 11:19:53 runner 5 connected 2025/08/19 11:19:55 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/19 11:19:55 runner 3 connected 2025/08/19 11:19:56 runner 6 connected 2025/08/19 11:20:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:20:06 runner 0 connected 2025/08/19 11:20:07 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:20:18 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:20:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:20:29 runner 9 connected 2025/08/19 11:20:34 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:20:35 new: boot error: can't ssh into the instance 2025/08/19 11:20:35 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 11:20:45 runner 2 connected 2025/08/19 11:20:52 runner 4 connected 2025/08/19 11:20:57 runner 3 connected 2025/08/19 11:21:09 runner 8 connected 2025/08/19 11:21:10 base crash: WARNING in dbAdjTree 2025/08/19 11:21:11 runner 6 connected 2025/08/19 11:21:23 runner 1 connected 2025/08/19 11:21:24 runner 7 connected 2025/08/19 11:21:47 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:21:53 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:21:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:21:59 runner 0 connected 2025/08/19 11:22:36 runner 9 connected 2025/08/19 11:22:43 runner 5 connected 2025/08/19 11:23:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:23:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:23:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:23:11 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:23:13 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:23:17 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:23:31 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:23:38 base crash: possible deadlock in 
ocfs2_init_acl 2025/08/19 11:23:46 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:23:50 runner 9 connected 2025/08/19 11:23:55 runner 3 connected 2025/08/19 11:23:57 runner 8 connected 2025/08/19 11:24:00 runner 2 connected 2025/08/19 11:24:06 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 11:24:06 runner 2 connected 2025/08/19 11:24:20 runner 5 connected 2025/08/19 11:24:27 runner 3 connected 2025/08/19 11:24:27 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:24:33 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 36024, "comps overflows": 0, "corpus": 44401, "corpus [files]": 3, "cover overflows": 38058, "coverage": 312065, "distributor delayed": 62498, "distributor undelayed": 62497, "distributor violated": 863, "exec candidate": 45126, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 132704, "exec total [new]": 273317, "exec triage": 139071, "executor restarts": 1025, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 314770, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45075, "no exec duration": 50261000000, "no exec requests": 375, "pending": 37, "prog exec time": 292, "reproducing": 0, "rpc recv": 11083493720, "rpc sent": 1981840184, "signal": 307319, "smash jobs": 0, "triage jobs": 0, "vm output": 39404204, "vm restarts [base]": 63, "vm restarts [new]": 168 } 2025/08/19 11:24:36 runner 6 connected 2025/08/19 11:24:37 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 11:24:55 runner 7 connected 2025/08/19 11:25:17 runner 0 connected 2025/08/19 11:25:26 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:25:26 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 11:25:27 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:25:28 runner 2 connected 2025/08/19 11:25:38 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:26:09 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 11:26:15 runner 8 connected 2025/08/19 11:26:15 runner 9 connected 2025/08/19 11:26:26 runner 2 connected 2025/08/19 11:26:29 base crash: lost connection to test machine 2025/08/19 11:26:51 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 11:26:59 runner 2 connected 2025/08/19 11:27:19 runner 3 connected 2025/08/19 11:27:20 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:27:41 runner 8 connected 2025/08/19 11:27:42 new: boot error: can't ssh into the instance 2025/08/19 11:27:52 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:28:05 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:28:10 runner 5 connected 2025/08/19 11:28:41 runner 7 connected 2025/08/19 11:29:01 runner 2 connected 2025/08/19 11:29:33 STAT { "buffer too small": 0, "candidate triage jobs": 21, "candidates": 35532, "comps overflows": 0, "corpus": 44817, "corpus [files]": 3, "cover overflows": 39552, "coverage": 312933, 
"distributor delayed": 63377, "distributor undelayed": 63375, "distributor violated": 886, "exec candidate": 45618, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 136493, "exec total [new]": 282136, "exec triage": 140478, "executor restarts": 1091, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 315682, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45539, "no exec duration": 50271000000, "no exec requests": 376, "pending": 37, "prog exec time": 672, "reproducing": 0, "rpc recv": 11541085016, "rpc sent": 2056663144, "signal": 308230, "smash jobs": 0, "triage jobs": 0, "vm output": 41361337, "vm restarts [base]": 67, "vm restarts [new]": 177 } 2025/08/19 11:30:14 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:30:37 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 11:30:41 base: boot error: can't ssh into the instance 2025/08/19 11:31:11 runner 9 connected 2025/08/19 11:31:30 runner 1 connected 2025/08/19 11:31:34 runner 6 connected 2025/08/19 11:31:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:31:43 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/19 11:32:04 new: boot error: can't ssh into the instance 2025/08/19 11:32:12 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:32:25 runner 2 connected 2025/08/19 11:32:33 runner 2 connected 2025/08/19 11:32:56 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 11:33:01 runner 1 connected 2025/08/19 11:33:10 runner 5 connected 2025/08/19 11:33:14 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 11:33:19 new: boot error: can't ssh into the instance 2025/08/19 11:33:38 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false] 2025/08/19 11:33:45 runner 1 connected 2025/08/19 11:33:49 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false] 2025/08/19 11:33:56 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:34:04 runner 2 connected 2025/08/19 11:34:08 runner 4 connected 2025/08/19 11:34:27 runner 8 connected 2025/08/19 11:34:33 timed out waiting for coprus triage 2025/08/19 11:34:33 starting bug reproductions 2025/08/19 11:34:33 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/19 11:34:33 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/19 11:34:33 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 35004, "comps overflows": 0, "corpus": 45307, "corpus [files]": 3, "cover overflows": 41267, "coverage": 313903, "distributor delayed": 64280, "distributor undelayed": 64280, "distributor violated": 903, "exec candidate": 46146, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 141789, "exec total [new]": 292547, "exec triage": 142038, "executor restarts": 1168, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs 
[base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 316652, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46034, "no exec duration": 50271000000, "no exec requests": 376, "pending": 37, "prog exec time": 310, "reproducing": 0, "rpc recv": 11911049300, "rpc sent": 2140969088, "signal": 309215, "smash jobs": 0, "triage jobs": 0, "vm output": 43474247, "vm restarts [base]": 70, "vm restarts [new]": 185 } 2025/08/19 11:34:33 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "KASAN: slab-use-after-free Read in __xfrm_state_lookup" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "KASAN: slab-use-after-free Read in __xfrm_state_lookup" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 11:34:33 start reproducing 'general protection fault in xfrm_alloc_spi' 2025/08/19 11:34:33 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "unregister_netdevice: waiting for DEV to become free" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "general protection fault in pcl818_ai_cancel" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "INFO: trying to register non-static key in ocfs2_dlm_shutdown" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in txUnlock" aborted: it's no 
longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 11:34:33 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 11:34:33 start reproducing 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/08/19 11:34:33 start reproducing 'possible deadlock in ntfs_fiemap' 2025/08/19 11:34:33 start reproducing 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/19 11:34:33 start reproducing 'KASAN: slab-use-after-free Write in txEnd' 2025/08/19 11:34:33 start reproducing 'WARNING in rate_control_rate_init' 2025/08/19 11:34:33 start reproducing 'KASAN: slab-use-after-free Write in __xfrm_state_delete' 2025/08/19 11:34:33 failed to recv *flatrpc.InfoRequestRawT: unexpected EOF 2025/08/19 11:34:44 runner 0 connected 2025/08/19 11:35:31 new: boot error: can't ssh into the instance 2025/08/19 11:36:16 base crash: WARNING: suspicious RCU usage in get_callchain_entry 2025/08/19 11:36:27 base crash: WARNING: suspicious RCU usage in get_callchain_entry 2025/08/19 11:37:03 reproducing crash 'KASAN: slab-use-after-free Write in txEnd': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:37:05 runner 2 connected 2025/08/19 11:37:43 reproducing crash 'KASAN: slab-use-after-free Write in txEnd': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:37:47 new: boot error: can't ssh into the instance 2025/08/19 11:39:33 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 35004, "comps overflows": 0, "corpus": 45307, "corpus [files]": 3, "cover overflows": 41267, "coverage": 313903, "distributor delayed": 64280, "distributor undelayed": 64280, "distributor violated": 903, "exec candidate": 46146, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 145901, "exec total [new]": 292547, "exec triage": 142038, "executor restarts": 1168, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316652, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46034, "no exec duration": 50656000000, "no exec requests": 377, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 11973909300, "rpc sent": 2157011040, "signal": 309215, "smash jobs": 0, "triage jobs": 0, "vm output": 45944291, "vm restarts [base]": 72, "vm restarts [new]": 185 } 2025/08/19 11:41:28 base crash: no output from test machine 2025/08/19 11:41:34 base crash: no output from test machine 2025/08/19 11:42:04 base crash: no output from test machine 2025/08/19 11:42:19 runner 3 connected 2025/08/19 11:42:24 runner 0 connected 2025/08/19 11:42:53 runner 2 connected 2025/08/19 11:44:33 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 35004, "comps overflows": 0, "corpus": 45307, "corpus [files]": 3, "cover overflows": 41267, "coverage": 313903, "distributor delayed": 64280, "distributor undelayed": 64280, 
"distributor violated": 903, "exec candidate": 46146, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 145901, "exec total [new]": 292547, "exec triage": 142038, "executor restarts": 1168, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316652, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46034, "no exec duration": 50656000000, "no exec requests": 377, "pending": 1, "prog exec time": 0, "reproducing": 7, "rpc recv": 12066597484, "rpc sent": 2157011880, "signal": 309215, "smash jobs": 0, "triage jobs": 0, "vm output": 48915059, "vm restarts [base]": 75, "vm restarts [new]": 185 } 2025/08/19 11:44:39 new: boot error: can't ssh into the instance 2025/08/19 11:45:58 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/super.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:46:33 base: boot error: can't ssh into the instance 2025/08/19 11:47:13 reproducing crash 'KASAN: slab-use-after-free Write in txEnd': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:47:19 base crash: no output from test machine 2025/08/19 11:47:21 runner 1 connected 2025/08/19 11:47:23 base crash: no output from test machine 2025/08/19 11:47:24 new: boot error: can't ssh into the instance 2025/08/19 11:47:45 repro finished 'KASAN: slab-use-after-free Read in xfrm_state_find', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 11:47:45 failed repro for "KASAN: slab-use-after-free Read in xfrm_state_find", err=%!s() 2025/08/19 11:47:45 "KASAN: slab-use-after-free Read in xfrm_state_find": saved crash log into 1755604065.crash.log 2025/08/19 11:47:45 "KASAN: slab-use-after-free Read in xfrm_state_find": saved repro log into 1755604065.repro.log 2025/08/19 11:47:48 repro finished 'general protection fault in xfrm_alloc_spi', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 11:47:48 failed repro for "general protection fault in xfrm_alloc_spi", err=%!s() 2025/08/19 11:47:48 "general protection fault in xfrm_alloc_spi": saved crash log into 1755604068.crash.log 2025/08/19 11:47:48 "general protection fault in xfrm_alloc_spi": saved repro log into 1755604068.repro.log 2025/08/19 11:47:49 new: boot error: can't ssh into the instance 2025/08/19 11:47:53 base crash: no output from test machine 2025/08/19 11:48:05 runner 0 connected 2025/08/19 11:48:07 runner 3 connected 2025/08/19 11:48:14 repro finished 'WARNING in rate_control_rate_init', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 11:48:14 failed repro for "WARNING in rate_control_rate_init", err=%!s() 2025/08/19 11:48:14 "WARNING in rate_control_rate_init": saved crash log into 1755604094.crash.log 2025/08/19 11:48:14 "WARNING in rate_control_rate_init": saved repro log into 1755604094.repro.log 2025/08/19 11:48:26 runner 2 connected 2025/08/19 11:48:30 new: boot error: can't ssh into 
the instance 2025/08/19 11:48:39 runner 1 connected 2025/08/19 11:48:41 runner 2 connected 2025/08/19 11:48:56 repro finished 'possible deadlock in ocfs2_reserve_local_alloc_bits', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 11:48:56 failed repro for "possible deadlock in ocfs2_reserve_local_alloc_bits", err=%!s() 2025/08/19 11:48:56 start reproducing 'possible deadlock in ocfs2_reserve_local_alloc_bits' 2025/08/19 11:48:56 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved crash log into 1755604136.crash.log 2025/08/19 11:48:56 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved repro log into 1755604136.repro.log 2025/08/19 11:48:57 reproducing crash 'KASAN: slab-use-after-free Write in txEnd': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:49:02 runner 0 connected 2025/08/19 11:49:20 runner 3 connected 2025/08/19 11:49:33 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 34983, "comps overflows": 0, "corpus": 45317, "corpus [files]": 3, "cover overflows": 41483, "coverage": 313924, "distributor delayed": 64309, "distributor undelayed": 64306, "distributor violated": 903, "exec candidate": 46167, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 147126, "exec total [new]": 293818, "exec triage": 142079, "executor restarts": 1181, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 316690, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46051, "no exec duration": 220580000000, "no exec requests": 972, "pending": 0, "prog exec time": 244, "reproducing": 4, "rpc recv": 12316374896, "rpc sent": 2177904296, "signal": 309237, "smash jobs": 0, "triage jobs": 0, "vm output": 52096355, "vm restarts [base]": 79, "vm restarts [new]": 189 } 2025/08/19 11:49:35 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 11:49:36 patched crashed: possible deadlock in ocfs2_setattr [need repro = false] 2025/08/19 11:49:40 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:49:41 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:50:29 runner 3 connected 2025/08/19 11:50:30 runner 2 connected 2025/08/19 11:50:32 runner 1 connected 2025/08/19 11:50:36 base crash: WARNING in xfrm_state_fini 2025/08/19 11:50:44 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 11:50:51 base crash: WARNING in xfrm_state_fini 2025/08/19 11:51:21 reproducing crash 'KASAN: slab-use-after-free Write in txEnd': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:51:21 repro finished 'KASAN: slab-use-after-free Write in txEnd', repro=true crepro=false desc='KASAN: slab-use-after-free Read in jfs_syncpt' hub=false from_dashboard=false 2025/08/19 11:51:21 found repro for "KASAN: slab-use-after-free Read in jfs_syncpt" (orig title: 
"KASAN: slab-use-after-free Write in txEnd", reliability: 1), took 16.79 minutes 2025/08/19 11:51:21 "KASAN: slab-use-after-free Read in jfs_syncpt": saved crash log into 1755604281.crash.log 2025/08/19 11:51:21 "KASAN: slab-use-after-free Read in jfs_syncpt": saved repro log into 1755604281.repro.log 2025/08/19 11:51:34 runner 3 connected 2025/08/19 11:51:54 new: boot error: can't ssh into the instance 2025/08/19 11:52:43 runner 4 connected 2025/08/19 11:52:46 attempt #0 to run "KASAN: slab-use-after-free Read in jfs_syncpt" on base: crashed with KASAN: slab-use-after-free Write in lmLogSync 2025/08/19 11:52:46 crashes both: KASAN: slab-use-after-free Read in jfs_syncpt / KASAN: slab-use-after-free Write in lmLogSync 2025/08/19 11:53:34 runner 0 connected 2025/08/19 11:53:36 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/19 11:53:39 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:53:55 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:54:12 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:54:22 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:54:25 runner 3 connected 2025/08/19 11:54:28 runner 0 connected 2025/08/19 11:54:33 STAT { "buffer too small": 0, "candidate triage jobs": 27, "candidates": 34822, "comps overflows": 0, "corpus": 45412, "corpus [files]": 3, "cover overflows": 42843, "coverage": 314189, "distributor delayed": 64513, "distributor undelayed": 64492, "distributor violated": 929, "exec candidate": 46328, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 153380, "exec total [new]": 300605, "exec triage": 142466, "executor restarts": 1199, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317030, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46184, "no exec duration": 230795000000, "no exec requests": 1009, "pending": 0, "prog exec time": 283, "reproducing": 3, "rpc recv": 12552385420, "rpc sent": 2236781480, "signal": 309501, "smash jobs": 0, "triage jobs": 0, "vm output": 53569894, "vm restarts [base]": 82, "vm restarts [new]": 194 } 2025/08/19 11:54:45 new: boot error: can't ssh into the instance 2025/08/19 11:54:45 runner 4 connected 2025/08/19 11:55:02 runner 1 connected 2025/08/19 11:55:12 runner 3 connected 2025/08/19 11:55:27 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 11:56:02 base crash: WARNING in xfrm_state_fini 2025/08/19 11:56:06 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 11:56:15 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:56:16 runner 2 connected 2025/08/19 11:56:55 runner 4 connected 2025/08/19 11:57:03 runner 3 connected 2025/08/19 11:57:16 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 11:58:20 patched crashed: possible deadlock in 
ocfs2_xattr_set [need repro = false] 2025/08/19 11:58:32 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 11:58:37 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 11:58:41 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:58:49 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 11:59:03 new: boot error: can't ssh into the instance 2025/08/19 11:59:10 runner 3 connected 2025/08/19 11:59:21 runner 0 connected 2025/08/19 11:59:25 runner 1 connected 2025/08/19 11:59:31 runner 4 connected 2025/08/19 11:59:33 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 29813, "comps overflows": 0, "corpus": 45517, "corpus [files]": 3, "cover overflows": 43980, "coverage": 314396, "distributor delayed": 64767, "distributor undelayed": 64761, "distributor violated": 941, "exec candidate": 51337, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 157912, "exec total [new]": 307831, "exec triage": 142863, "executor restarts": 1235, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317248, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46285, "no exec duration": 230795000000, "no exec requests": 1009, "pending": 0, "prog exec time": 213, "reproducing": 3, "rpc recv": 12882556436, "rpc sent": 2299629152, "signal": 309709, "smash jobs": 0, "triage jobs": 0, "vm output": 56178483, "vm restarts [base]": 86, "vm restarts [new]": 200 } 2025/08/19 11:59:38 runner 2 connected 2025/08/19 11:59:41 new: boot error: can't ssh into the instance 2025/08/19 11:59:54 runner 5 connected 2025/08/19 12:00:31 runner 1 connected 2025/08/19 12:00:57 base: boot error: can't ssh into the instance 2025/08/19 12:01:25 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 12:01:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 12:01:48 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:01:52 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 12:01:53 runner 2 connected 2025/08/19 12:02:25 runner 3 connected 2025/08/19 12:02:41 runner 2 connected 2025/08/19 12:03:11 repro finished 'possible deadlock in ocfs2_reserve_local_alloc_bits', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 12:03:11 failed repro for "possible deadlock in ocfs2_reserve_local_alloc_bits", err=%!s() 2025/08/19 12:03:11 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved crash log into 1755604991.crash.log 2025/08/19 12:03:11 "possible deadlock in ocfs2_reserve_local_alloc_bits": saved repro log into 1755604991.repro.log 2025/08/19 12:03:49 patched crashed: WARNING in drv_unassign_vif_chanctx [need repro = false] 2025/08/19 12:03:51 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 12:04:01 runner 6 
connected 2025/08/19 12:04:33 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 18245, "comps overflows": 0, "corpus": 45666, "corpus [files]": 3, "cover overflows": 46082, "coverage": 314606, "distributor delayed": 65040, "distributor undelayed": 65040, "distributor violated": 947, "exec candidate": 62905, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 166245, "exec total [new]": 319982, "exec triage": 143428, "executor restarts": 1284, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 317496, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46455, "no exec duration": 230843000000, "no exec requests": 1011, "pending": 0, "prog exec time": 256, "reproducing": 2, "rpc recv": 13156992836, "rpc sent": 2397474512, "signal": 309914, "smash jobs": 0, "triage jobs": 0, "vm output": 59694668, "vm restarts [base]": 87, "vm restarts [new]": 206 } 2025/08/19 12:04:38 runner 5 connected 2025/08/19 12:04:41 runner 1 connected 2025/08/19 12:05:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:06:07 base: boot error: can't ssh into the instance 2025/08/19 12:06:30 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 12:06:43 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 12:06:57 runner 0 connected 2025/08/19 12:07:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:07:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:07:19 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:07:25 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 12:07:56 runner 5 connected 2025/08/19 12:08:00 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 12:08:07 runner 3 connected 2025/08/19 12:08:08 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 12:08:32 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 12:08:48 runner 0 connected 2025/08/19 12:08:49 runner 3 connected 2025/08/19 12:09:12 base crash: WARNING in xfrm_state_fini 2025/08/19 12:09:19 runner 5 connected 2025/08/19 12:09:33 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 10631, "comps overflows": 0, "corpus": 45799, "corpus [files]": 3, "cover overflows": 47568, "coverage": 314902, "distributor delayed": 65288, "distributor undelayed": 65287, "distributor violated": 960, "exec candidate": 70519, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 174418, "exec total [new]": 328097, "exec triage": 143921, "executor restarts": 1314, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317853, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, 
"modules [base]": 1, "modules [new]": 1, "new inputs": 46604, "no exec duration": 230864000000, "no exec requests": 1013, "pending": 0, "prog exec time": 308, "reproducing": 2, "rpc recv": 13424776696, "rpc sent": 2466887096, "signal": 310199, "smash jobs": 0, "triage jobs": 0, "vm output": 62002786, "vm restarts [base]": 91, "vm restarts [new]": 210 } 2025/08/19 12:10:02 runner 1 connected 2025/08/19 12:10:21 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 12:10:52 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:11:10 runner 0 connected 2025/08/19 12:11:12 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 12:11:30 new: boot error: can't ssh into the instance 2025/08/19 12:12:02 runner 3 connected 2025/08/19 12:12:03 triaged 90.7% of the corpus 2025/08/19 12:12:20 runner 0 connected 2025/08/19 12:12:38 base crash: lost connection to test machine 2025/08/19 12:12:39 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 12:13:28 runner 3 connected 2025/08/19 12:14:33 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 3498, "comps overflows": 0, "corpus": 45833, "corpus [files]": 3, "cover overflows": 49057, "coverage": 314961, "distributor delayed": 65332, "distributor undelayed": 65332, "distributor violated": 967, "exec candidate": 77652, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 23, "exec seeds": 0, "exec smash": 0, "exec total [base]": 181513, "exec total [new]": 335382, "exec triage": 144065, "executor restarts": 1326, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 317915, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46644, "no exec duration": 230876000000, "no exec requests": 1015, "pending": 0, "prog exec time": 250, "reproducing": 2, "rpc recv": 13589589408, "rpc sent": 2518743480, "signal": 310260, "smash jobs": 0, "triage jobs": 0, "vm output": 63505900, "vm restarts [base]": 94, "vm restarts [new]": 212 } 2025/08/19 12:15:00 patched crashed: possible deadlock in ntfs_fiemap [need repro = true] 2025/08/19 12:15:00 scheduled a reproduction of 'possible deadlock in ntfs_fiemap' 2025/08/19 12:15:01 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/19 12:15:07 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = false] 2025/08/19 12:15:12 new: boot error: can't ssh into the instance 2025/08/19 12:15:50 runner 0 connected 2025/08/19 12:15:50 runner 5 connected 2025/08/19 12:15:57 runner 3 connected 2025/08/19 12:16:01 runner 4 connected 2025/08/19 12:16:36 new: boot error: can't ssh into the instance 2025/08/19 12:16:48 new: boot error: can't ssh into the instance 2025/08/19 12:17:22 new: boot error: can't ssh into the instance 2025/08/19 12:17:25 runner 6 connected 2025/08/19 12:17:31 base: boot error: can't ssh into the instance 2025/08/19 12:17:33 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 12:17:37 runner 1 connected 2025/08/19 12:17:45 new: boot error: 
can't ssh into the instance 2025/08/19 12:18:11 runner 2 connected 2025/08/19 12:18:20 runner 2 connected 2025/08/19 12:18:20 runner 4 connected 2025/08/19 12:19:02 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/19 12:19:33 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 0, "corpus": 45861, "corpus [files]": 3, "cover overflows": 49904, "coverage": 315019, "distributor delayed": 65439, "distributor undelayed": 65439, "distributor violated": 967, "exec candidate": 81150, "exec collide": 494, "exec fuzz": 941, "exec gen": 69, "exec hints": 40, "exec inject": 0, "exec minimize": 120, "exec retries": 23, "exec seeds": 18, "exec smash": 81, "exec total [base]": 186330, "exec total [new]": 340844, "exec triage": 144251, "executor restarts": 1353, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 4, "max signal": 318092, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 65, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46702, "no exec duration": 230876000000, "no exec requests": 1015, "pending": 1, "prog exec time": 1122, "reproducing": 2, "rpc recv": 13881929444, "rpc sent": 2587715656, "signal": 310318, "smash jobs": 5, "triage jobs": 7, "vm output": 65877517, "vm restarts [base]": 96, "vm restarts [new]": 219 } 2025/08/19 12:19:52 runner 1 connected 2025/08/19 12:21:02 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 12:21:21 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:21:41 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:21:51 runner 5 connected 2025/08/19 12:22:10 runner 3 connected 2025/08/19 12:22:44 base: boot error: can't ssh into the instance 2025/08/19 12:23:33 runner 3 connected 2025/08/19 12:24:33 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 0, "corpus": 45896, "corpus [files]": 3, "cover overflows": 50419, "coverage": 315148, "distributor delayed": 65561, "distributor undelayed": 65560, "distributor violated": 967, "exec candidate": 81150, "exec collide": 1449, "exec fuzz": 2727, "exec gen": 170, "exec hints": 622, "exec inject": 0, "exec minimize": 862, "exec retries": 24, "exec seeds": 103, "exec smash": 862, "exec total [base]": 188504, "exec total [new]": 346070, "exec triage": 144447, "executor restarts": 1368, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 7, "max signal": 318209, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 458, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46772, "no exec duration": 230876000000, "no exec requests": 1015, "pending": 1, "prog exec time": 913, "reproducing": 2, "rpc recv": 14032540676, "rpc sent": 2661667576, "signal": 310435, "smash jobs": 13, "triage jobs": 6, "vm output": 70509087, "vm restarts [base]": 98, "vm restarts [new]": 221 } 2025/08/19 12:26:04 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 12:26:05 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 12:26:16 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:26:35 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 12:26:52 runner 6 connected 2025/08/19 12:26:54 runner 3 connected 2025/08/19 12:27:20 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:27:25 runner 3 connected 2025/08/19 12:27:51 new: boot error: can't ssh into the instance 2025/08/19 12:28:09 runner 4 connected 2025/08/19 12:29:28 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 12:29:33 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 4, "corpus": 45922, "corpus [files]": 3, "cover overflows": 51182, "coverage": 315184, "distributor delayed": 65684, "distributor undelayed": 65684, "distributor violated": 967, "exec candidate": 81150, "exec collide": 2355, "exec fuzz": 4460, "exec gen": 275, "exec hints": 1652, "exec inject": 0, "exec minimize": 1428, "exec retries": 25, "exec seeds": 183, "exec smash": 1685, "exec total [base]": 191399, "exec total [new]": 351512, "exec triage": 144646, "executor restarts": 1390, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 8, "max signal": 318380, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 749, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46837, "no exec duration": 230876000000, "no exec requests": 1015, "pending": 1, "prog exec time": 913, "reproducing": 2, "rpc recv": 14185106264, "rpc sent": 2741516912, "signal": 310469, "smash jobs": 4, "triage jobs": 9, "vm output": 74250182, "vm restarts [base]": 99, "vm restarts [new]": 224 } 2025/08/19 12:30:04 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:30:17 runner 4 connected 2025/08/19 12:30:23 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/08/19 12:30:58 patched crashed: general protection fault in lock_sock_nested [need repro = true] 2025/08/19 12:30:58 scheduled a reproduction of 'general protection fault in lock_sock_nested' 2025/08/19 12:30:58 start reproducing 'general protection fault in lock_sock_nested' 2025/08/19 12:31:47 runner 3 connected 2025/08/19 12:31:47 new: boot error: can't ssh into the instance 2025/08/19 12:32:06 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 12:32:36 runner 2 connected 2025/08/19 12:32:56 runner 6 connected 2025/08/19 12:34:33 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 10, "corpus": 45937, "corpus [files]": 3, "cover overflows": 51861, "coverage": 315211, "distributor delayed": 65763, "distributor undelayed": 65763, "distributor violated": 967, "exec candidate": 81150, "exec collide": 3131, "exec fuzz": 5880, "exec gen": 362, "exec hints": 2673, "exec inject": 0, "exec minimize": 1792, "exec retries": 25, "exec seeds": 228, "exec smash": 2047, "exec total [base]": 194877, "exec total [new]": 355713, "exec triage": 144770, "executor restarts": 1435, 
"fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 7, "max signal": 318624, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 965, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46876, "no exec duration": 232399000000, "no exec requests": 1018, "pending": 1, "prog exec time": 1416, "reproducing": 3, "rpc recv": 14335145700, "rpc sent": 2819398640, "signal": 310491, "smash jobs": 1, "triage jobs": 8, "vm output": 77743422, "vm restarts [base]": 99, "vm restarts [new]": 228 } 2025/08/19 12:35:29 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 12:36:18 runner 1 connected 2025/08/19 12:36:19 base crash: INFO: task hung in __closure_sync 2025/08/19 12:36:25 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/08/19 12:36:25 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/19 12:36:25 start reproducing 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/19 12:37:10 runner 3 connected 2025/08/19 12:37:14 runner 3 connected 2025/08/19 12:38:36 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/08/19 12:38:36 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/19 12:38:59 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:39:26 runner 6 connected 2025/08/19 12:39:33 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 18, "corpus": 45943, "corpus [files]": 3, "cover overflows": 52528, "coverage": 315235, "distributor delayed": 65813, "distributor undelayed": 65810, "distributor violated": 967, "exec candidate": 81150, "exec collide": 3709, "exec fuzz": 6956, "exec gen": 416, "exec hints": 3258, "exec inject": 0, "exec minimize": 2143, "exec retries": 25, "exec seeds": 252, "exec smash": 2263, "exec total [base]": 197889, "exec total [new]": 358673, "exec triage": 144848, "executor restarts": 1447, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 6, "max signal": 318766, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1131, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46900, "no exec duration": 232399000000, "no exec requests": 1018, "pending": 2, "prog exec time": 1184, "reproducing": 4, "rpc recv": 14479894796, "rpc sent": 2894570320, "signal": 310511, "smash jobs": 1, "triage jobs": 8, "vm output": 81445287, "vm restarts [base]": 101, "vm restarts [new]": 230 } 2025/08/19 12:39:50 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 12:40:29 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/08/19 12:40:29 scheduled a reproduction of 'INFO: task hung in bch2_journal_reclaim_thread' 2025/08/19 12:40:40 runner 1 connected 2025/08/19 12:40:56 new: boot error: can't ssh into the instance 2025/08/19 12:41:04 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = true] 2025/08/19 12:41:04 scheduled a reproduction of 'INFO: task hung 
in bch2_journal_reclaim_thread' 2025/08/19 12:41:20 runner 5 connected 2025/08/19 12:41:31 base crash: lost connection to test machine 2025/08/19 12:41:53 runner 4 connected 2025/08/19 12:41:56 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 12:42:21 runner 0 connected 2025/08/19 12:42:44 runner 3 connected 2025/08/19 12:42:46 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:43:37 runner 2 connected 2025/08/19 12:44:04 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/08/19 12:44:33 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 36, "corpus": 45950, "corpus [files]": 3, "cover overflows": 53272, "coverage": 315389, "distributor delayed": 65868, "distributor undelayed": 65866, "distributor violated": 969, "exec candidate": 81150, "exec collide": 3997, "exec fuzz": 7523, "exec gen": 447, "exec hints": 3679, "exec inject": 0, "exec minimize": 2513, "exec retries": 26, "exec seeds": 276, "exec smash": 2408, "exec total [base]": 199960, "exec total [new]": 360611, "exec triage": 144935, "executor restarts": 1470, "fault jobs": 0, "fuzzer jobs": 23, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 5, "max signal": 318909, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1348, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46932, "no exec duration": 232399000000, "no exec requests": 1018, "pending": 4, "prog exec time": 1410, "reproducing": 4, "rpc recv": 14698912416, "rpc sent": 2962025792, "signal": 310582, "smash jobs": 4, "triage jobs": 14, "vm output": 85884919, "vm restarts [base]": 105, "vm restarts [new]": 232 } 2025/08/19 12:44:36 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/08/19 12:44:49 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/08/19 12:44:53 runner 6 connected 2025/08/19 12:44:56 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/08/19 12:45:25 runner 3 connected 2025/08/19 12:45:37 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:45:38 runner 4 connected 2025/08/19 12:45:45 runner 5 connected 2025/08/19 12:46:25 runner 0 connected 2025/08/19 12:47:01 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:47:34 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:47:48 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/08/19 12:48:01 reproducing crash 'INFO: task hung in bch2_journal_reclaim_thread': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:48:25 runner 2 connected 2025/08/19 12:48:37 runner 6 connected 2025/08/19 12:49:04 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:49:33 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 58, "corpus": 45965, "corpus [files]": 4, "cover overflows": 53823, "coverage": 
315445, "distributor delayed": 65901, "distributor undelayed": 65901, "distributor violated": 969, "exec candidate": 81150, "exec collide": 4295, "exec fuzz": 8019, "exec gen": 479, "exec hints": 4010, "exec inject": 0, "exec minimize": 2845, "exec retries": 26, "exec seeds": 309, "exec smash": 2668, "exec total [base]": 202096, "exec total [new]": 362460, "exec triage": 144995, "executor restarts": 1494, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 9, "max signal": 318949, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1543, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46954, "no exec duration": 232399000000, "no exec requests": 1018, "pending": 4, "prog exec time": 1219, "reproducing": 4, "rpc recv": 14943928156, "rpc sent": 3029343552, "signal": 310618, "smash jobs": 5, "triage jobs": 4, "vm output": 89610807, "vm restarts [base]": 107, "vm restarts [new]": 237 } 2025/08/19 12:49:54 runner 3 connected 2025/08/19 12:50:02 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:50:46 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:50:49 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/08/19 12:50:51 runner 0 connected 2025/08/19 12:51:34 runner 1 connected 2025/08/19 12:51:38 runner 3 connected 2025/08/19 12:51:53 new: boot error: can't ssh into the instance 2025/08/19 12:53:00 patched crashed: kernel BUG in may_open [need repro = true] 2025/08/19 12:53:00 scheduled a reproduction of 'kernel BUG in may_open' 2025/08/19 12:53:00 start reproducing 'kernel BUG in may_open' 2025/08/19 12:53:03 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:53:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 12:53:31 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:53:59 runner 2 connected 2025/08/19 12:54:10 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:54:14 runner 5 connected 2025/08/19 12:54:21 patched crashed: INFO: task hung in __closure_sync [need repro = false] 2025/08/19 12:54:33 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 66, "corpus": 45973, "corpus [files]": 4, "cover overflows": 54401, "coverage": 315456, "distributor delayed": 65946, "distributor undelayed": 65943, "distributor violated": 969, "exec candidate": 81150, "exec collide": 4789, "exec fuzz": 9004, "exec gen": 523, "exec hints": 4722, "exec inject": 0, "exec minimize": 3046, "exec retries": 27, "exec seeds": 336, "exec smash": 2930, "exec total [base]": 204238, "exec total [new]": 365236, "exec triage": 145050, "executor restarts": 1511, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 5, "max signal": 318994, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1669, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, 
"minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46973, "no exec duration": 232399000000, "no exec requests": 1018, "pending": 4, "prog exec time": 996, "reproducing": 5, "rpc recv": 15146799736, "rpc sent": 3098094448, "signal": 310627, "smash jobs": 0, "triage jobs": 6, "vm output": 93140524, "vm restarts [base]": 111, "vm restarts [new]": 239 } 2025/08/19 12:54:43 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:54:50 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:55:09 runner 4 connected 2025/08/19 12:55:31 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:55:39 runner 1 connected 2025/08/19 12:55:54 base crash: INFO: task hung in bch2_journal_reclaim_thread 2025/08/19 12:56:01 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:56:39 base crash: lost connection to test machine 2025/08/19 12:56:44 runner 3 connected 2025/08/19 12:56:46 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:57:30 runner 0 connected 2025/08/19 12:57:39 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:57:40 reproducing crash 'INFO: task hung in bch2_journal_reclaim_thread': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:57:44 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/19 12:57:46 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:58:07 new: boot error: can't ssh into the instance 2025/08/19 12:58:11 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:58:33 runner 1 connected 2025/08/19 12:59:05 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:59:31 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 12:59:33 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 69, "corpus": 45978, "corpus [files]": 4, "cover overflows": 54802, "coverage": 315480, "distributor delayed": 65961, "distributor undelayed": 65961, "distributor violated": 972, "exec candidate": 81150, "exec collide": 5093, "exec fuzz": 9671, "exec gen": 562, "exec hints": 5217, "exec inject": 0, "exec minimize": 3212, "exec retries": 28, "exec seeds": 351, "exec smash": 3055, "exec total [base]": 206523, "exec total [new]": 367083, "exec triage": 145082, "executor restarts": 1522, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 7, "max signal": 319022, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1789, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46984, "no exec duration": 232399000000, "no exec requests": 1018, "pending": 4, "prog exec time": 1369, "reproducing": 5, "rpc recv": 15310714716, "rpc sent": 3154276696, "signal": 310650, "smash jobs": 0, "triage jobs": 5, "vm output": 96461474, "vm restarts [base]": 115, "vm restarts [new]": 240 } 2025/08/19 13:00:03 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:00:09 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 13:00:32 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:00:59 runner 2 connected 2025/08/19 13:01:01 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:01:30 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:01:31 reproducing crash 'possible deadlock in ntfs_fiemap': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/batman-adv/hard-interface.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:01:57 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:02:10 base crash: kernel BUG in may_open 2025/08/19 13:02:49 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 13:02:59 runner 3 connected 2025/08/19 13:03:07 base crash: general protection fault in pcl818_ai_cancel 
2025/08/19 13:03:20 base crash: general protection fault in pcl818_ai_cancel
2025/08/19 13:03:32 reproducing crash 'INFO: task hung in bch2_journal_reclaim_thread': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/19 13:03:56 runner 1 connected
2025/08/19 13:03:59 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/19 13:04:29 status reporting terminated
2025/08/19 13:04:29 bug reporting terminated
2025/08/19 13:04:43 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/19 13:04:43 repro finished 'kernel BUG in may_open', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/19 13:06:41 reproducing crash 'INFO: task hung in bch2_journal_reclaim_thread': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/journal_reclaim.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/08/19 13:06:41 repro finished 'INFO: task hung in bch2_journal_reclaim_thread', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/19 13:07:23 repro finished 'general protection fault in lock_sock_nested', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/19 13:09:23 repro finished 'KASAN: slab-use-after-free Write in __xfrm_state_delete', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/19 13:09:24 reproducing crash 'possible deadlock in ntfs_fiemap': concatenation step failed with context deadline exceeded
2025/08/19 13:09:24 repro finished 'possible deadlock in ntfs_fiemap', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/08/19 13:09:24 syz-diff (new): kernel context loop terminated
2025/08/19 13:13:26 syz-diff (base): kernel context loop terminated
2025/08/19 13:13:26 diff fuzzing terminated
2025/08/19 13:13:26 fuzzing is finished
2025/08/19 13:13:26 status at the end: Title On-Base On-Patched
INFO: rcu detected stall in worker_thread 1 crashes
INFO: task hung in __closure_sync 1 crashes 1 crashes
INFO: task hung in bch2_journal_reclaim_thread 9 crashes 9 crashes
INFO: task hung in read_part_sector 1 crashes
INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes 1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 5 crashes 2 crashes
KASAN: slab-use-after-free Read in jfs_syncpt 1 crashes [reproduced]
KASAN: slab-use-after-free Read in xfrm_alloc_spi 4 crashes 5 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes
KASAN: slab-use-after-free Write in __xfrm_state_delete 1 crashes
KASAN: slab-use-after-free Write in txEnd 1 crashes
WARNING in dbAdjTree 2 crashes 1 crashes
WARNING in drv_unassign_vif_chanctx 1 crashes 2 crashes
WARNING in ext4_xattr_inode_lookup_create 2 crashes 3 crashes
WARNING in rate_control_rate_init 1 crashes
WARNING in xfrm6_tunnel_net_exit 2 crashes 3 crashes
WARNING in xfrm_state_fini 9 crashes 20 crashes
WARNING: suspicious RCU usage in get_callchain_entry 3 crashes 7 crashes
general protection fault in lock_sock_nested 1 crashes
general protection fault in pcl818_ai_cancel 3 crashes 1 crashes
general protection fault in xfrm_alloc_spi 1 crashes
kernel BUG in jfs_evict_inode 2 crashes 2 crashes
kernel BUG in may_open 1 crashes 1 crashes
kernel BUG in txUnlock 1 crashes 6 crashes
lost connection to test machine 6 crashes 14 crashes
no output from test machine 6 crashes
possible deadlock in attr_data_get_block 1 crashes
possible deadlock in ntfs_fiemap 2 crashes
possible deadlock in ocfs2_init_acl 11 crashes 42 crashes
possible deadlock in ocfs2_reserve_local_alloc_bits 2 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 23 crashes 41 crashes
possible deadlock in ocfs2_setattr 1 crashes 1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 13 crashes 33 crashes
possible deadlock in ocfs2_xattr_set 6 crashes 19 crashes
unregister_netdevice: waiting for DEV to become free 1 crashes 3 crashes
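The periodic "STAT { ... }" entries above are JSON snapshots of the fuzzer's counters, with every value a plain number. Below is a minimal sketch, not part of syzkaller, of pulling a few of those counters out of a saved copy of a log like this one; it assumes each snapshot sits on one line as the tool writes it (if your copy wraps or indents the JSON, join those lines first), and the program name, log path argument, and chosen fields are illustrative.

// statdump.go: hypothetical helper that extracts corpus/coverage counters
// from the STAT lines of a syz-diff log.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <syz-diff log>", os.Args[0])
	}
	f, err := os.Open(os.Args[1]) // path to the saved log (assumed)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // STAT lines run to several KB
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, "STAT {")
		if i < 0 {
			continue
		}
		ts := strings.TrimSpace(line[:i]) // e.g. "2025/08/19 11:24:33"
		var stat map[string]float64       // all STAT values are numeric
		if err := json.Unmarshal([]byte(line[i+len("STAT "):]), &stat); err != nil {
			continue // skip snapshots that were truncated or re-wrapped
		}
		fmt.Printf("%s corpus=%.0f coverage=%.0f reproducing=%.0f\n",
			ts, stat["corpus"], stat["coverage"], stat["reproducing"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}

Running it as "go run statdump.go syz-diff.log" would print one line per snapshot, giving a rough time series of corpus and coverage growth over the run.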