2025/08/18 03:07:47 extracted 303751 symbol hashes for base and 303751 for patched 2025/08/18 03:07:47 adding modified_functions to focus areas: ["nvmet_execute_disc_identify"] 2025/08/18 03:07:47 adding directly modified files to focus areas: ["MAINTAINERS" "tools/testing/selftests/mm/.gitignore" "tools/testing/selftests/mm/Makefile" "tools/testing/selftests/mm/ksm_functional_tests.c" "tools/testing/selftests/mm/rmap.c" "tools/testing/selftests/mm/run_vmtests.sh" "tools/testing/selftests/mm/vm_util.c" "tools/testing/selftests/mm/vm_util.h"] 2025/08/18 03:07:48 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db 2025/08/18 03:08:53 runner 1 connected 2025/08/18 03:08:53 runner 0 connected 2025/08/18 03:08:53 runner 3 connected 2025/08/18 03:08:54 runner 5 connected 2025/08/18 03:08:54 runner 4 connected 2025/08/18 03:08:54 runner 9 connected 2025/08/18 03:08:54 runner 0 connected 2025/08/18 03:08:54 runner 2 connected 2025/08/18 03:08:54 runner 8 connected 2025/08/18 03:08:54 runner 1 connected 2025/08/18 03:08:55 runner 7 connected 2025/08/18 03:08:55 runner 3 connected 2025/08/18 03:09:01 executor cover filter: 0 PCs 2025/08/18 03:09:02 initializing coverage information... 2025/08/18 03:09:07 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : 
failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open 
/sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : 
failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] 
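[Annotation] The per-syscall reasons in the machine-check listing above come down to a few simple environment probes: does a device node exist, does /proc/filesystems list a given filesystem, is an LSM such as SELinux or Smack active. The following Go sketch re-creates a few of those probes; it is a hypothetical helper for illustration, not syzkaller's actual checker code.

```go
// probe.go - hypothetical re-implementation of a few of the machine-check
// probes whose results are reported above ("failed to open ...: no such file
// or directory", "/proc/filesystems does not contain ...", "selinux is not
// enabled").
package main

import (
	"fmt"
	"os"
	"strings"
)

// devNodeExists mirrors the "failed to open /dev/..." checks: the syscall is
// only worth fuzzing if the node can actually be opened.
func devNodeExists(path string) bool {
	f, err := os.Open(path)
	if err != nil {
		return false
	}
	f.Close()
	return true
}

// filesystemSupported mirrors the "/proc/filesystems does not contain X"
// checks behind mount$... and syz_mount_image$... being disabled.
func filesystemSupported(name string) bool {
	data, err := os.ReadFile("/proc/filesystems")
	if err != nil {
		return false
	}
	return strings.Contains(string(data), "\t"+name+"\n")
}

// selinuxEnabled is one common way to test the "selinux is not enabled"
// condition: selinuxfs is mounted at /sys/fs/selinux on SELinux kernels.
func selinuxEnabled() bool {
	_, err := os.Stat("/sys/fs/selinux/enforce")
	return err == nil
}

func main() {
	fmt.Println("/dev/kvm present:", devNodeExists("/dev/kvm"))
	fmt.Println("ntfs supported:  ", filesystemSupported("ntfs"))
	fmt.Println("selinux enabled: ", selinuxEnabled())
}
```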
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost 
openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw 
[syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
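[Annotation] Each entry in the long "transitively disabled" listing (which finishes a few entries into the next log chunk) has the form `syscall : missing resource [syscalls that create that resource]`: the call itself would work, but every call that could produce its input resource is already disabled, so it can never be fed a valid argument. A fixed-point computation over such a dependency table might look like the Go sketch below; the data is illustrative and this is not syzkaller's actual implementation.

```go
// transitive.go - sketch of the "transitively disabled (missing resource
// [creating syscalls])" computation shown above: keep disabling syscalls
// whose required resource has no enabled creator until nothing changes.
package main

import "fmt"

type syscallInfo struct {
	needs   string   // resource consumed by this syscall ("" if none)
	creates []string // resources produced by this syscall
}

func transitivelyDisable(calls map[string]syscallInfo, disabled map[string]bool) {
	for changed := true; changed; {
		changed = false
		// A resource is available if at least one enabled syscall creates it.
		available := map[string]bool{}
		for name, info := range calls {
			if disabled[name] {
				continue
			}
			for _, r := range info.creates {
				available[r] = true
			}
		}
		for name, info := range calls {
			if !disabled[name] && info.needs != "" && !available[info.needs] {
				disabled[name] = true // e.g. write$capi20 : fd_capi20 [openat$capi20]
				changed = true
			}
		}
	}
}

func main() {
	calls := map[string]syscallInfo{
		"openat$capi20":        {creates: []string{"fd_capi20"}},
		"ioctl$CAPI_GET_FLAGS": {needs: "fd_capi20"},
		"write$capi20":         {needs: "fd_capi20"},
	}
	disabled := map[string]bool{"openat$capi20": true} // /dev/capi20 is missing
	transitivelyDisable(calls, disabled)
	fmt.Println(disabled)
}
```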
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048
2025/08/18 03:09:07 base: machine check complete
2025/08/18 03:09:08 discovered 7699 source files, 338620 symbols
2025/08/18 03:09:08 coverage filter: nvmet_execute_disc_identify: [nvmet_execute_disc_identify]
2025/08/18 03:09:08 coverage filter: MAINTAINERS: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/.gitignore: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/Makefile: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/ksm_functional_tests.c: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/rmap.c: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/run_vmtests.sh: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/vm_util.c: []
2025/08/18 03:09:08 coverage filter: tools/testing/selftests/mm/vm_util.h: []
2025/08/18 03:09:08 area "symbols": 15 PCs in the cover filter
2025/08/18 03:09:08 area "files": 0 PCs in the cover filter
2025/08/18 03:09:08 area "": 0 PCs in the cover filter
2025/08/18 03:09:08 executor cover filter: 0 PCs
2025/08/18 03:09:12 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open
/sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open 
/selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 
0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : 
fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] 
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt 
[ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf 
[perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/18 03:09:12 new: machine check complete 2025/08/18 03:09:13 new: adding 80339 seeds 2025/08/18 03:10:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/18 03:10:07 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/18 03:10:08 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/18 03:10:08 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/18 03:10:18 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/18 03:10:18 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/18 03:10:27 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/18 03:10:27 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/18 03:10:29 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/18 03:10:29 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/18 03:10:40 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/18 03:10:40 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/18 03:10:49 base crash: general protection fault in pcl818_ai_cancel 2025/08/18 03:10:54 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/18 03:10:54 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/18 03:11:05 runner 7 connected 2025/08/18 03:11:15 runner 0 connected 2025/08/18 03:11:25 runner 5 connected 2025/08/18 03:11:30 runner 8 connected 2025/08/18 03:11:39 runner 0 connected 2025/08/18 03:11:52 runner 1 connected 2025/08/18 03:12:34 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:12:50 STAT { "buffer too small": 0, "candidate triage jobs": 52, "candidates": 77871, "comps overflows": 0, "corpus": 2391, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 1512, "coverage": 136754, "distributor delayed": 3903, "distributor undelayed": 3897, "distributor violated": 392, "exec candidate": 2468, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 5805, "exec total [new]": 10908, "exec triage": 7662, "executor 
restarts": 112, "fault jobs": 0, "fuzzer jobs": 52, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 139740, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 2468, "no exec duration": 39839000000, "no exec requests": 291, "pending": 7, "prog exec time": 637, "reproducing": 0, "rpc recv": 843062808, "rpc sent": 62323144, "signal": 135221, "smash jobs": 0, "triage jobs": 0, "vm output": 1840396, "vm restarts [base]": 4, "vm restarts [new]": 14 } 2025/08/18 03:13:31 runner 1 connected 2025/08/18 03:13:36 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/18 03:13:36 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/18 03:14:16 base crash: KASAN: slab-use-after-free Read in l2cap_unregister_user 2025/08/18 03:14:20 patched crashed: general protection fault in xfrm_alloc_spi [need repro = true] 2025/08/18 03:14:20 scheduled a reproduction of 'general protection fault in xfrm_alloc_spi' 2025/08/18 03:14:30 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = true] 2025/08/18 03:14:30 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_alloc_spi' 2025/08/18 03:14:35 runner 1 connected 2025/08/18 03:14:52 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:15:05 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 03:15:10 runner 8 connected 2025/08/18 03:15:13 runner 3 connected 2025/08/18 03:15:25 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 03:15:27 runner 7 connected 2025/08/18 03:15:47 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:15:48 runner 0 connected 2025/08/18 03:15:54 runner 2 connected 2025/08/18 03:16:22 runner 0 connected 2025/08/18 03:16:44 runner 1 connected 2025/08/18 03:17:03 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/18 03:17:03 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/18 03:17:14 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/18 03:17:14 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/18 03:17:50 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 74221, "comps overflows": 0, "corpus": 6039, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 3987, "coverage": 185857, "distributor delayed": 9963, "distributor undelayed": 9959, "distributor violated": 448, "exec candidate": 6118, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 12473, "exec total [new]": 27019, "exec triage": 19072, "executor restarts": 152, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 187416, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 6118, "no exec duration": 39839000000, "no exec requests": 291, "pending": 12, "prog exec time": 325, "reproducing": 0, "rpc 
recv": 1458379524, "rpc sent": 156120152, "signal": 182805, "smash jobs": 0, "triage jobs": 0, "vm output": 3766220, "vm restarts [base]": 8, "vm restarts [new]": 19 } 2025/08/18 03:17:56 new: boot error: can't ssh into the instance 2025/08/18 03:17:56 base: boot error: can't ssh into the instance 2025/08/18 03:17:59 runner 1 connected 2025/08/18 03:18:05 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/18 03:18:05 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/18 03:18:11 runner 8 connected 2025/08/18 03:18:53 runner 2 connected 2025/08/18 03:18:54 runner 6 connected 2025/08/18 03:19:02 runner 7 connected 2025/08/18 03:19:29 base crash: WARNING in xfrm_state_fini 2025/08/18 03:19:32 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 03:20:13 new: boot error: can't ssh into the instance 2025/08/18 03:20:27 runner 1 connected 2025/08/18 03:20:29 runner 2 connected 2025/08/18 03:20:33 new: boot error: can't ssh into the instance 2025/08/18 03:21:18 runner 3 connected 2025/08/18 03:21:37 runner 9 connected 2025/08/18 03:21:57 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 03:21:59 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 03:22:17 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/18 03:22:17 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/18 03:22:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/18 03:22:28 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/18 03:22:50 STAT { "buffer too small": 0, "candidate triage jobs": 50, "candidates": 69803, "comps overflows": 0, "corpus": 10395, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 7009, "coverage": 214678, "distributor delayed": 15427, "distributor undelayed": 15426, "distributor violated": 450, "exec candidate": 10536, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24341, "exec total [new]": 47448, "exec triage": 32818, "executor restarts": 199, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 216542, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 10536, "no exec duration": 39878000000, "no exec requests": 293, "pending": 15, "prog exec time": 460, "reproducing": 0, "rpc recv": 2110927688, "rpc sent": 301427072, "signal": 210966, "smash jobs": 0, "triage jobs": 0, "vm output": 5991477, "vm restarts [base]": 10, "vm restarts [new]": 26 } 2025/08/18 03:23:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/18 03:23:01 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/18 03:23:01 runner 2 connected 2025/08/18 03:23:14 runner 6 connected 2025/08/18 03:23:26 runner 7 connected 2025/08/18 03:23:56 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:23:57 runner 1 connected 2025/08/18 03:24:13 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = true] 2025/08/18 03:24:13 scheduled a reproduction of 
'unregister_netdevice: waiting for DEV to become free' 2025/08/18 03:24:16 patched crashed: INFO: task hung in txBegin [need repro = true] 2025/08/18 03:24:16 scheduled a reproduction of 'INFO: task hung in txBegin' 2025/08/18 03:24:45 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:24:52 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/18 03:24:58 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:24:58 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:24:59 runner 2 connected 2025/08/18 03:25:10 runner 4 connected 2025/08/18 03:25:35 runner 3 connected 2025/08/18 03:25:46 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 03:25:54 runner 9 connected 2025/08/18 03:25:55 runner 3 connected 2025/08/18 03:26:03 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/18 03:26:03 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/18 03:26:13 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/18 03:26:13 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/18 03:26:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 03:26:43 runner 2 connected 2025/08/18 03:26:47 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:26:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:27:00 runner 4 connected 2025/08/18 03:27:10 runner 3 connected 2025/08/18 03:27:11 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:27:18 runner 8 connected 2025/08/18 03:27:28 base crash: WARNING in xfrm_state_fini 2025/08/18 03:27:37 runner 0 connected 2025/08/18 03:27:41 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 03:27:50 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 65989, "comps overflows": 0, "corpus": 14199, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 9295, "coverage": 233841, "distributor delayed": 20833, "distributor undelayed": 20832, "distributor violated": 502, "exec candidate": 14350, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 1, "exec seeds": 0, "exec smash": 0, "exec total [base]": 32490, "exec total [new]": 65324, "exec triage": 44579, "executor restarts": 251, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 235679, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14350, "no exec duration": 39924000000, "no exec requests": 295, "pending": 20, "prog exec time": 305, "reproducing": 0, "rpc recv": 2854349516, "rpc sent": 408041072, "signal": 229905, "smash jobs": 0, "triage jobs": 0, "vm output": 8448520, "vm restarts [base]": 14, "vm restarts [new]": 36 } 2025/08/18 03:27:50 runner 1 connected 2025/08/18 03:27:51 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:28:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:28:08 runner 6 connected 2025/08/18 03:28:18 runner 2 connected 
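The STAT entries that the log prints every few minutes are a single flat JSON object of counters: corpus size, coverage signal, executions and fuzzing VMs on the base and patched kernels, VM restarts, pending reproductions, and so on. Comparing consecutive snapshots shows the run's progress; between the 03:12:50 and 03:27:50 entries above, for example, the corpus grows from 2391 to 14199 programs while the candidate backlog shrinks from 77871 to 65989. The Go sketch below is only an illustration of pulling a few of those counters out of such a line; the field names and values are copied from the 03:27:50 entry, and the snippet is not part of syzkaller itself.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// A shortened STAT line; keys and values are taken from the 03:27:50 entry.
	line := `STAT { "corpus": 14199, "coverage": 233841, "pending": 20, ` +
		`"vm restarts [base]": 14, "vm restarts [new]": 36 }`

	// Everything after the "STAT " prefix is plain JSON with numeric values.
	var stats map[string]float64
	if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "STAT ")), &stats); err != nil {
		panic(err)
	}
	for _, key := range []string{"corpus", "coverage", "pending", "vm restarts [base]", "vm restarts [new]"} {
		fmt.Printf("%-20s %.0f\n", key, stats[key])
	}
}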
2025/08/18 03:28:38 runner 3 connected 2025/08/18 03:28:48 runner 7 connected 2025/08/18 03:28:59 runner 4 connected 2025/08/18 03:29:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:29:49 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:30:30 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:30:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:30:34 runner 2 connected 2025/08/18 03:30:44 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:30:46 runner 9 connected 2025/08/18 03:31:07 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:31:27 runner 0 connected 2025/08/18 03:31:31 runner 1 connected 2025/08/18 03:31:40 runner 3 connected 2025/08/18 03:32:04 runner 2 connected 2025/08/18 03:32:05 new: boot error: can't ssh into the instance 2025/08/18 03:32:50 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 62191, "comps overflows": 0, "corpus": 17926, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 11625, "coverage": 247766, "distributor delayed": 25650, "distributor undelayed": 25650, "distributor violated": 503, "exec candidate": 18148, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 1, "exec seeds": 0, "exec smash": 0, "exec total [base]": 39525, "exec total [new]": 83244, "exec triage": 56129, "executor restarts": 320, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 250143, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 18148, "no exec duration": 39924000000, "no exec requests": 295, "pending": 20, "prog exec time": 517, "reproducing": 0, "rpc recv": 3550784000, "rpc sent": 515419184, "signal": 243826, "smash jobs": 0, "triage jobs": 0, "vm output": 11167101, "vm restarts [base]": 18, "vm restarts [new]": 44 } 2025/08/18 03:32:56 patched crashed: KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings [need repro = true] 2025/08/18 03:32:56 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings' 2025/08/18 03:33:02 runner 0 connected 2025/08/18 03:33:52 runner 7 connected 2025/08/18 03:34:22 new: boot error: can't ssh into the instance 2025/08/18 03:34:58 base: boot error: can't ssh into the instance 2025/08/18 03:35:20 runner 5 connected 2025/08/18 03:35:21 base crash: possible deadlock in attr_data_get_block 2025/08/18 03:35:38 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:35:55 runner 1 connected 2025/08/18 03:36:19 runner 0 connected 2025/08/18 03:36:30 patched crashed: kernel BUG in may_open [need repro = true] 2025/08/18 03:36:30 scheduled a reproduction of 'kernel BUG in may_open' 2025/08/18 03:36:35 runner 3 connected 2025/08/18 03:36:58 base crash: KASAN: use-after-free Read in xfrm_alloc_spi 2025/08/18 03:37:29 runner 7 connected 2025/08/18 03:37:50 STAT { "buffer too small": 0, "candidate triage jobs": 58, "candidates": 57399, "comps overflows": 0, "corpus": 22619, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 14840, "coverage": 261636, "distributor delayed": 29875, "distributor undelayed": 29875, "distributor violated": 503, "exec 
candidate": 22940, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 48018, "exec total [new]": 107733, "exec triage": 70893, "executor restarts": 391, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 264307, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 22940, "no exec duration": 40091000000, "no exec requests": 298, "pending": 22, "prog exec time": 461, "reproducing": 0, "rpc recv": 4146682916, "rpc sent": 643522288, "signal": 257391, "smash jobs": 0, "triage jobs": 0, "vm output": 14174239, "vm restarts [base]": 21, "vm restarts [new]": 48 } 2025/08/18 03:38:03 runner 2 connected 2025/08/18 03:38:10 base crash: lost connection to test machine 2025/08/18 03:38:59 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_insert [need repro = true] 2025/08/18 03:38:59 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_insert' 2025/08/18 03:39:07 runner 1 connected 2025/08/18 03:39:11 patched crashed: general protection fault in __xfrm_state_insert [need repro = true] 2025/08/18 03:39:11 scheduled a reproduction of 'general protection fault in __xfrm_state_insert' 2025/08/18 03:39:19 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 03:39:20 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 03:39:51 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true] 2025/08/18 03:39:51 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/18 03:39:58 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 03:40:04 runner 3 connected 2025/08/18 03:40:08 runner 7 connected 2025/08/18 03:40:17 runner 9 connected 2025/08/18 03:40:18 runner 5 connected 2025/08/18 03:40:24 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/18 03:40:24 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/18 03:40:49 runner 1 connected 2025/08/18 03:40:49 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 03:40:55 runner 2 connected 2025/08/18 03:41:07 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 03:41:21 runner 2 connected 2025/08/18 03:41:28 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 03:41:46 runner 0 connected 2025/08/18 03:41:57 patched crashed: no output from test machine [need repro = false] 2025/08/18 03:42:04 runner 6 connected 2025/08/18 03:42:04 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/18 03:42:04 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/18 03:42:18 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:42:28 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/08/18 03:42:50 STAT { "buffer too small": 0, "candidate triage jobs": 31, "candidates": 54301, "comps overflows": 0, "corpus": 25673, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 17496, "coverage": 269926, "distributor delayed": 34417, "distributor 
undelayed": 34417, "distributor violated": 635, "exec candidate": 26038, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 57958, "exec total [new]": 125206, "exec triage": 80643, "executor restarts": 431, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 272902, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 26038, "no exec duration": 40675000000, "no exec requests": 304, "pending": 27, "prog exec time": 268, "reproducing": 0, "rpc recv": 4728876996, "rpc sent": 760695488, "signal": 265507, "smash jobs": 0, "triage jobs": 0, "vm output": 16060671, "vm restarts [base]": 24, "vm restarts [new]": 56 } 2025/08/18 03:42:54 runner 8 connected 2025/08/18 03:43:03 runner 7 connected 2025/08/18 03:43:10 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:43:15 runner 2 connected 2025/08/18 03:43:25 runner 1 connected 2025/08/18 03:43:41 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/18 03:43:41 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/18 03:43:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:43:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/18 03:43:52 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/18 03:43:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:43:59 base crash: WARNING in xfrm_state_fini 2025/08/18 03:44:07 runner 4 connected 2025/08/18 03:44:37 runner 8 connected 2025/08/18 03:44:42 runner 2 connected 2025/08/18 03:44:50 runner 6 connected 2025/08/18 03:44:52 runner 3 connected 2025/08/18 03:44:56 runner 2 connected 2025/08/18 03:45:50 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 03:45:58 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 03:46:05 base crash: KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb 2025/08/18 03:46:21 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/18 03:46:21 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/18 03:46:22 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/18 03:46:22 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/18 03:46:23 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/18 03:46:24 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/18 03:46:25 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/18 03:46:25 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/18 03:46:46 runner 4 connected 2025/08/18 03:46:55 runner 1 connected 2025/08/18 03:47:00 base crash: WARNING in ext4_xattr_inode_lookup_create 2025/08/18 03:47:12 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/18 03:47:12 scheduled a reproduction of 'possible deadlock 
in ocfs2_reserve_suballoc_bits' 2025/08/18 03:47:14 runner 7 connected 2025/08/18 03:47:18 runner 1 connected 2025/08/18 03:47:19 runner 0 connected 2025/08/18 03:47:20 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 03:47:22 runner 6 connected 2025/08/18 03:47:24 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/18 03:47:24 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/18 03:47:35 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 03:47:49 runner 0 connected 2025/08/18 03:47:50 STAT { "buffer too small": 0, "candidate triage jobs": 40, "candidates": 51046, "comps overflows": 0, "corpus": 28881, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 19372, "coverage": 278982, "distributor delayed": 38855, "distributor undelayed": 38850, "distributor violated": 640, "exec candidate": 29293, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 68221, "exec total [new]": 142187, "exec triage": 90522, "executor restarts": 509, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 281874, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29293, "no exec duration": 40758000000, "no exec requests": 305, "pending": 35, "prog exec time": 328, "reproducing": 0, "rpc recv": 5490287564, "rpc sent": 888569752, "signal": 274097, "smash jobs": 0, "triage jobs": 0, "vm output": 18774194, "vm restarts [base]": 29, "vm restarts [new]": 68 } 2025/08/18 03:47:52 base crash: WARNING in xfrm_state_fini 2025/08/18 03:48:10 runner 8 connected 2025/08/18 03:48:21 runner 2 connected 2025/08/18 03:48:23 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 03:48:32 runner 4 connected 2025/08/18 03:48:48 runner 3 connected 2025/08/18 03:49:21 runner 0 connected 2025/08/18 03:50:30 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:50:30 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:50:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:50:49 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:51:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:51:34 new: boot error: can't ssh into the instance 2025/08/18 03:51:38 runner 2 connected 2025/08/18 03:51:46 runner 0 connected 2025/08/18 03:51:48 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 03:52:03 runner 0 connected 2025/08/18 03:52:09 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:52:31 runner 9 connected 2025/08/18 03:52:45 runner 1 connected 2025/08/18 03:52:50 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 47575, "comps overflows": 0, "corpus": 32318, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 21869, "coverage": 286674, "distributor delayed": 43258, "distributor undelayed": 43257, "distributor violated": 654, "exec candidate": 32764, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 
0, "exec smash": 0, "exec total [base]": 73336, "exec total [new]": 161464, "exec triage": 101172, "executor restarts": 548, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 289616, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32764, "no exec duration": 41448000000, "no exec requests": 307, "pending": 35, "prog exec time": 227, "reproducing": 0, "rpc recv": 6082704728, "rpc sent": 991259536, "signal": 281747, "smash jobs": 0, "triage jobs": 0, "vm output": 21443883, "vm restarts [base]": 33, "vm restarts [new]": 74 } 2025/08/18 03:53:06 runner 3 connected 2025/08/18 03:53:35 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 03:53:46 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:53:53 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 03:54:23 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/18 03:54:23 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/18 03:54:32 runner 8 connected 2025/08/18 03:54:43 runner 2 connected 2025/08/18 03:54:44 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 03:54:49 runner 3 connected 2025/08/18 03:55:12 runner 1 connected 2025/08/18 03:55:41 runner 1 connected 2025/08/18 03:56:05 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:56:11 base: boot error: can't ssh into the instance 2025/08/18 03:56:17 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 03:56:37 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 03:56:55 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 03:57:02 runner 0 connected 2025/08/18 03:57:09 runner 2 connected 2025/08/18 03:57:15 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:57:15 runner 9 connected 2025/08/18 03:57:26 new: boot error: can't ssh into the instance 2025/08/18 03:57:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:57:34 runner 5 connected 2025/08/18 03:57:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 03:57:45 runner 2 connected 2025/08/18 03:57:50 STAT { "buffer too small": 0, "candidate triage jobs": 92, "candidates": 44704, "comps overflows": 0, "corpus": 35095, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 24176, "coverage": 292254, "distributor delayed": 47683, "distributor undelayed": 47612, "distributor violated": 769, "exec candidate": 35635, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 80800, "exec total [new]": 178735, "exec triage": 109997, "executor restarts": 591, "fault jobs": 0, "fuzzer jobs": 92, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 295406, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 35635, "no exec duration": 
42052000000, "no exec requests": 314, "pending": 36, "prog exec time": 343, "reproducing": 0, "rpc recv": 6636668764, "rpc sent": 1093835040, "signal": 287211, "smash jobs": 0, "triage jobs": 0, "vm output": 23401524, "vm restarts [base]": 37, "vm restarts [new]": 81 } 2025/08/18 03:58:06 base crash: INFO: task hung in v9fs_evict_inode 2025/08/18 03:58:11 runner 1 connected 2025/08/18 03:58:22 runner 3 connected 2025/08/18 03:58:25 runner 1 connected 2025/08/18 03:58:36 runner 8 connected 2025/08/18 03:58:55 base crash: possible deadlock in ocfs2_xattr_set 2025/08/18 03:58:57 base crash: lost connection to test machine 2025/08/18 03:59:03 runner 0 connected 2025/08/18 03:59:42 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 03:59:45 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 03:59:52 runner 2 connected 2025/08/18 03:59:54 runner 3 connected 2025/08/18 03:59:56 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:00:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:00:07 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:00:19 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:00:36 new: boot error: can't ssh into the instance 2025/08/18 04:00:36 new: boot error: can't ssh into the instance 2025/08/18 04:00:39 runner 5 connected 2025/08/18 04:00:42 runner 1 connected 2025/08/18 04:00:45 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 04:00:52 runner 3 connected 2025/08/18 04:00:57 runner 9 connected 2025/08/18 04:01:05 runner 1 connected 2025/08/18 04:01:06 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 04:01:13 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:01:16 runner 0 connected 2025/08/18 04:01:33 runner 6 connected 2025/08/18 04:01:34 runner 7 connected 2025/08/18 04:01:42 runner 8 connected 2025/08/18 04:02:02 runner 4 connected 2025/08/18 04:02:09 runner 0 connected 2025/08/18 04:02:10 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:02:28 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/18 04:02:28 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 04:02:40 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/18 04:02:40 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 04:02:50 STAT { "buffer too small": 0, "candidate triage jobs": 45, "candidates": 41678, "comps overflows": 0, "corpus": 38113, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 26229, "coverage": 298316, "distributor delayed": 51558, "distributor undelayed": 51558, "distributor violated": 850, "exec candidate": 38661, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 87779, "exec total [new]": 196092, "exec triage": 119236, "executor restarts": 654, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 301345, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: 
resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 38661, "no exec duration": 42218000000, "no exec requests": 316, "pending": 38, "prog exec time": 379, "reproducing": 0, "rpc recv": 7487946308, "rpc sent": 1225440632, "signal": 293301, "smash jobs": 0, "triage jobs": 0, "vm output": 26284221, "vm restarts [base]": 43, "vm restarts [new]": 93 } 2025/08/18 04:03:09 runner 1 connected 2025/08/18 04:03:31 runner 2 connected 2025/08/18 04:03:45 runner 9 connected 2025/08/18 04:03:58 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/18 04:03:58 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/18 04:04:09 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/18 04:04:09 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/18 04:04:12 base crash: general protection fault in pcl818_ai_cancel 2025/08/18 04:04:15 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/18 04:04:15 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/18 04:04:28 base crash: kernel BUG in txUnlock 2025/08/18 04:05:07 runner 6 connected 2025/08/18 04:05:09 runner 3 connected 2025/08/18 04:05:12 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false] 2025/08/18 04:05:25 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 04:05:26 runner 2 connected 2025/08/18 04:05:28 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:05:58 base crash: possible deadlock in ocfs2_xattr_set 2025/08/18 04:06:09 runner 0 connected 2025/08/18 04:06:21 runner 5 connected 2025/08/18 04:06:25 runner 2 connected 2025/08/18 04:06:50 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/18 04:06:50 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/18 04:06:51 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/18 04:06:51 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/18 04:06:55 runner 0 connected 2025/08/18 04:07:10 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:07:22 base crash: WARNING in dbAdjTree 2025/08/18 04:07:47 runner 5 connected 2025/08/18 04:07:49 base crash: WARNING in dbAdjTree 2025/08/18 04:07:50 runner 9 connected 2025/08/18 04:07:50 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 39368, "comps overflows": 0, "corpus": 40394, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 28684, "coverage": 302708, "distributor delayed": 54179, "distributor undelayed": 54179, "distributor violated": 857, "exec candidate": 40971, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96615, "exec total [new]": 214336, "exec triage": 126535, "executor restarts": 715, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 305805, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40971, "no exec duration": 42218000000, "no exec requests": 316, "pending": 43, "prog exec time": 284, "reproducing": 0, "rpc recv": 7992872460, "rpc sent": 1368552312, "signal": 297723, "smash jobs": 0, "triage jobs": 0, "vm output": 28952344, "vm restarts [base]": 47, "vm 
restarts [new]": 101 } 2025/08/18 04:08:07 runner 0 connected 2025/08/18 04:08:14 patched crashed: INFO: task hung in read_part_sector [need repro = true] 2025/08/18 04:08:14 scheduled a reproduction of 'INFO: task hung in read_part_sector' 2025/08/18 04:08:17 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:08:19 runner 0 connected 2025/08/18 04:08:47 runner 3 connected 2025/08/18 04:09:06 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 04:09:11 runner 3 connected 2025/08/18 04:09:14 runner 1 connected 2025/08/18 04:09:25 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:09:38 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:09:58 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:10:05 runner 9 connected 2025/08/18 04:10:13 patched crashed: possible deadlock in attr_data_get_block [need repro = false] 2025/08/18 04:10:15 base crash: kernel BUG in txUnlock 2025/08/18 04:10:22 runner 5 connected 2025/08/18 04:10:36 runner 6 connected 2025/08/18 04:10:44 base crash: possible deadlock in attr_data_get_block 2025/08/18 04:10:55 runner 3 connected 2025/08/18 04:11:04 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:11:09 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:11:10 runner 2 connected 2025/08/18 04:11:20 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:11:34 runner 2 connected 2025/08/18 04:12:01 runner 0 connected 2025/08/18 04:12:06 runner 5 connected 2025/08/18 04:12:13 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:12:17 runner 6 connected 2025/08/18 04:12:24 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:12:42 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:12:43 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 04:12:50 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:12:50 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 37964, "comps overflows": 0, "corpus": 41739, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 30694, "coverage": 305763, "distributor delayed": 56208, "distributor undelayed": 56178, "distributor violated": 860, "exec candidate": 42375, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 103623, "exec total [new]": 228291, "exec triage": 130819, "executor restarts": 776, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 308882, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42375, "no exec duration": 42300000000, "no exec requests": 317, "pending": 44, "prog exec time": 223, "reproducing": 0, "rpc recv": 8639684664, "rpc sent": 1510222256, "signal": 300761, "smash jobs": 0, "triage jobs": 0, "vm output": 31832809, "vm restarts [base]": 52, "vm restarts [new]": 110 } 2025/08/18 
04:12:52 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:13:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:13:14 runner 9 connected 2025/08/18 04:13:30 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 04:13:32 runner 2 connected 2025/08/18 04:13:33 runner 5 connected 2025/08/18 04:13:49 runner 0 connected 2025/08/18 04:13:51 runner 6 connected 2025/08/18 04:14:04 new: boot error: can't ssh into the instance 2025/08/18 04:14:19 runner 1 connected 2025/08/18 04:14:21 new: boot error: can't ssh into the instance 2025/08/18 04:14:42 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:14:50 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 04:14:55 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:15:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:15:06 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:15:07 base crash: WARNING in xfrm_state_fini 2025/08/18 04:15:14 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:15:17 runner 7 connected 2025/08/18 04:15:19 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 04:15:24 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:15:31 runner 0 connected 2025/08/18 04:15:40 runner 9 connected 2025/08/18 04:15:44 runner 2 connected 2025/08/18 04:15:53 runner 6 connected 2025/08/18 04:15:56 runner 0 connected 2025/08/18 04:15:57 runner 2 connected 2025/08/18 04:16:04 runner 1 connected 2025/08/18 04:16:08 runner 5 connected 2025/08/18 04:16:13 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:16:13 runner 3 connected 2025/08/18 04:16:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:16:39 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 04:17:11 runner 0 connected 2025/08/18 04:17:14 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:17:20 runner 7 connected 2025/08/18 04:17:35 runner 0 connected 2025/08/18 04:17:45 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:17:50 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 37327, "comps overflows": 0, "corpus": 42378, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 31738, "coverage": 307275, "distributor delayed": 57365, "distributor undelayed": 57365, "distributor violated": 862, "exec candidate": 43012, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 109418, "exec total [new]": 235788, "exec triage": 132803, "executor restarts": 846, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 310167, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43012, "no exec duration": 42300000000, "no exec requests": 317, "pending": 44, "prog exec time": 385, "reproducing": 0, "rpc recv": 
9309806208, "rpc sent": 1608701552, "signal": 302314, "smash jobs": 0, "triage jobs": 0, "vm output": 33755030, "vm restarts [base]": 56, "vm restarts [new]": 125 } 2025/08/18 04:17:53 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:18:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:18:31 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:18:42 runner 9 connected 2025/08/18 04:18:51 runner 2 connected 2025/08/18 04:18:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:19:06 runner 1 connected 2025/08/18 04:19:50 runner 0 connected 2025/08/18 04:19:52 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:20:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:20:14 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:20:21 base: boot error: can't ssh into the instance 2025/08/18 04:20:49 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:20:53 runner 2 connected 2025/08/18 04:21:04 runner 1 connected 2025/08/18 04:21:09 runner 1 connected 2025/08/18 04:21:38 runner 2 connected 2025/08/18 04:21:43 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:21:50 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:22:02 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:22:18 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/18 04:22:18 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/18 04:22:19 new: boot error: can't ssh into the instance 2025/08/18 04:22:25 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 04:22:26 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:22:32 runner 0 connected 2025/08/18 04:22:33 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:22:39 runner 5 connected 2025/08/18 04:22:50 STAT { "buffer too small": 0, "candidate triage jobs": 106, "candidates": 36603, "comps overflows": 0, "corpus": 42991, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 33164, "coverage": 308469, "distributor delayed": 58570, "distributor undelayed": 58470, "distributor violated": 922, "exec candidate": 43736, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 113123, "exec total [new]": 244463, "exec triage": 134799, "executor restarts": 885, "fault jobs": 0, "fuzzer jobs": 106, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311558, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43736, "no exec duration": 42300000000, "no exec requests": 317, "pending": 45, "prog exec time": 316, "reproducing": 0, "rpc recv": 9701018348, "rpc sent": 1679296592, "signal": 303535, "smash jobs": 0, "triage jobs": 0, "vm output": 35525943, "vm restarts [base]": 60, "vm restarts [new]": 131 } 2025/08/18 04:22:55 new: boot error: can't ssh into the instance 2025/08/18 04:23:06 runner 9 
connected 2025/08/18 04:23:09 runner 4 connected 2025/08/18 04:23:22 runner 1 connected 2025/08/18 04:23:25 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:23:36 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:23:45 runner 3 connected 2025/08/18 04:24:10 new: boot error: can't ssh into the instance 2025/08/18 04:24:13 runner 2 connected 2025/08/18 04:24:21 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/18 04:24:22 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/18 04:24:25 runner 5 connected 2025/08/18 04:24:25 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/18 04:24:46 base crash: kernel BUG in txUnlock 2025/08/18 04:24:49 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/18 04:24:50 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:24:58 runner 8 connected 2025/08/18 04:25:10 runner 9 connected 2025/08/18 04:25:12 runner 4 connected 2025/08/18 04:25:14 runner 3 connected 2025/08/18 04:25:38 runner 2 connected 2025/08/18 04:25:41 runner 1 connected 2025/08/18 04:26:29 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:26:54 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:27:11 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 04:27:18 runner 5 connected 2025/08/18 04:27:20 new: boot error: can't ssh into the instance 2025/08/18 04:27:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:27:44 runner 3 connected 2025/08/18 04:27:50 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 35934, "comps overflows": 0, "corpus": 43731, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 34256, "coverage": 310131, "distributor delayed": 60038, "distributor undelayed": 60038, "distributor violated": 924, "exec candidate": 44405, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 115176, "exec total [new]": 251765, "exec triage": 137077, "executor restarts": 950, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 313040, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44405, "no exec duration": 42300000000, "no exec requests": 317, "pending": 45, "prog exec time": 437, "reproducing": 0, "rpc recv": 10192282468, "rpc sent": 1751669392, "signal": 305219, "smash jobs": 0, "triage jobs": 0, "vm output": 37408232, "vm restarts [base]": 62, "vm restarts [new]": 143 } 2025/08/18 04:28:09 runner 6 connected 2025/08/18 04:28:26 runner 9 connected 2025/08/18 04:28:37 base: boot error: can't ssh into the instance 2025/08/18 04:28:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:29:15 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:29:26 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 04:29:42 runner 3 connected 2025/08/18 04:29:43 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need 
repro = true] 2025/08/18 04:29:43 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 04:29:58 new: boot error: can't ssh into the instance 2025/08/18 04:30:17 runner 9 connected 2025/08/18 04:30:33 runner 4 connected 2025/08/18 04:30:36 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 04:30:46 runner 0 connected 2025/08/18 04:30:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:31:21 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:31:25 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:31:26 runner 2 connected 2025/08/18 04:31:42 runner 5 connected 2025/08/18 04:32:08 new: boot error: can't ssh into the instance 2025/08/18 04:32:08 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/18 04:32:08 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/18 04:32:11 runner 0 connected 2025/08/18 04:32:15 runner 4 connected 2025/08/18 04:32:26 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false] 2025/08/18 04:32:31 base: boot error: can't ssh into the instance 2025/08/18 04:32:32 new: boot error: can't ssh into the instance 2025/08/18 04:32:50 STAT { "buffer too small": 0, "candidate triage jobs": 10, "candidates": 35182, "comps overflows": 0, "corpus": 44440, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 36335, "coverage": 311497, "distributor delayed": 61309, "distributor undelayed": 61308, "distributor violated": 936, "exec candidate": 45157, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 115176, "exec total [new]": 264242, "exec triage": 139375, "executor restarts": 1006, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 314507, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45148, "no exec duration": 45358000000, "no exec requests": 322, "pending": 47, "prog exec time": 287, "reproducing": 0, "rpc recv": 10610177944, "rpc sent": 1837247608, "signal": 306631, "smash jobs": 0, "triage jobs": 0, "vm output": 39391396, "vm restarts [base]": 62, "vm restarts [new]": 153 } 2025/08/18 04:33:05 runner 9 connected 2025/08/18 04:33:05 runner 7 connected 2025/08/18 04:33:17 runner 3 connected 2025/08/18 04:33:26 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:33:27 runner 2 connected 2025/08/18 04:33:28 runner 1 connected 2025/08/18 04:34:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:34:23 runner 5 connected 2025/08/18 04:34:23 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:34:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:34:52 base: boot error: can't ssh into the instance 2025/08/18 04:34:52 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 04:34:53 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:34:59 runner 9 connected 2025/08/18 04:35:20 runner 0 connected 2025/08/18 
04:35:33 runner 3 connected 2025/08/18 04:35:41 runner 0 connected 2025/08/18 04:35:49 runner 4 connected 2025/08/18 04:35:50 runner 2 connected 2025/08/18 04:37:01 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 04:37:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 04:37:17 base: boot error: can't ssh into the instance 2025/08/18 04:37:42 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 04:37:50 timed out waiting for corpus triage 2025/08/18 04:37:50 starting bug reproductions 2025/08/18 04:37:50 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/18 04:37:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 117916, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 47, "prog exec time": 245, "reproducing": 0, "rpc recv": 11042479356, "rpc sent": 1946120808, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 41612207, "vm restarts [base]": 65, "vm restarts [new]": 162 } 2025/08/18 04:37:50 reproduction of "general protection fault in pcl818_ai_cancel" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "general protection fault in pcl818_ai_cancel" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "general protection fault in pcl818_ai_cancel" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "KASAN: slab-use-after-free Read in xfrm_alloc_spi" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of 
"possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "unregister_netdevice: waiting for DEV to become free" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 04:37:50 reproduction of "KASAN: slab-use-after-free Read in __xfrm_state_lookup" aborted: it's no longer needed 2025/08/18 04:37:50 start reproducing 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/18 04:37:50 start reproducing 'KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings' 2025/08/18 04:37:50 start reproducing 'general protection fault in xfrm_alloc_spi' 2025/08/18 04:37:50 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_insert' 2025/08/18 04:37:50 start reproducing 'INFO: task hung in txBegin' 2025/08/18 04:37:50 start reproducing 'kernel BUG in may_open' 2025/08/18 04:37:50 start reproducing 'general protection fault in __xfrm_state_insert' 2025/08/18 04:38:14 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:38:14 runner 1 connected 2025/08/18 04:38:42 base: boot error: can't ssh into the instance 2025/08/18 04:38:52 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 04:39:03 runner 2 connected 2025/08/18 04:39:14 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:39:20 new: boot error: can't ssh into the instance 2025/08/18 04:39:26 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 04:39:32 runner 3 connected 2025/08/18 04:39:41 runner 0 connected 2025/08/18 04:40:23 runner 2 connected 2025/08/18 04:40:47 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:42:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 20, "prog exec time": 0, "reproducing": 7, "rpc recv": 11198022876, "rpc sent": 1960479088, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 43017746, "vm restarts [base]": 70, "vm restarts [new]": 162 } 2025/08/18 04:46:02 base 
crash: no output from test machine 2025/08/18 04:46:04 base crash: no output from test machine 2025/08/18 04:46:11 base crash: no output from test machine 2025/08/18 04:46:11 base crash: no output from test machine 2025/08/18 04:46:53 runner 1 connected 2025/08/18 04:47:00 runner 3 connected 2025/08/18 04:47:01 runner 0 connected 2025/08/18 04:47:08 new: boot error: can't ssh into the instance 2025/08/18 04:47:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 20, "prog exec time": 0, "reproducing": 7, "rpc recv": 11290711444, "rpc sent": 1960479928, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 44888099, "vm restarts [base]": 73, "vm restarts [new]": 162 } 2025/08/18 04:47:56 new: boot error: can't ssh into the instance 2025/08/18 04:47:56 new: boot error: can't ssh into the instance 2025/08/18 04:48:27 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:49:24 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:50:41 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:51:51 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:51:52 base crash: no output from test machine 2025/08/18 04:51:59 base crash: no output from test machine 2025/08/18 04:52:01 base crash: no output from test machine 2025/08/18 04:52:02 repro finished 'KASAN: slab-use-after-free Read in xfrm_state_find', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 04:52:02 failed repro for "KASAN: slab-use-after-free Read in xfrm_state_find", err=%!s() 2025/08/18 04:52:02 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 04:52:02 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 04:52:02 reproduction of 
"WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/18 04:52:02 start reproducing 'kernel BUG in jfs_evict_inode' 2025/08/18 04:52:02 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/18 04:52:02 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/18 04:52:02 reproduction of "WARNING in ext4_xattr_inode_lookup_create" aborted: it's no longer needed 2025/08/18 04:52:02 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 04:52:02 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/18 04:52:02 "KASAN: slab-use-after-free Read in xfrm_state_find": saved crash log into 1755492722.crash.log 2025/08/18 04:52:02 "KASAN: slab-use-after-free Read in xfrm_state_find": saved repro log into 1755492722.repro.log 2025/08/18 04:52:33 new: boot error: can't ssh into the instance 2025/08/18 04:52:42 runner 1 connected 2025/08/18 04:52:44 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_insert', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 04:52:44 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_insert", err=%!s() 2025/08/18 04:52:44 start reproducing 'WARNING in xfrm6_tunnel_net_exit' 2025/08/18 04:52:44 "KASAN: slab-use-after-free Read in __xfrm_state_insert": saved crash log into 1755492764.crash.log 2025/08/18 04:52:44 "KASAN: slab-use-after-free Read in __xfrm_state_insert": saved repro log into 1755492764.repro.log 2025/08/18 04:52:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 11321607636, "rpc sent": 1960480192, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 48752413, "vm restarts [base]": 74, "vm restarts [new]": 162 } 2025/08/18 04:52:56 runner 3 connected 2025/08/18 04:53:28 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:54:05 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:55:46 new: boot error: can't ssh into the instance 2025/08/18 04:56:08 base: boot 
error: can't ssh into the instance 2025/08/18 04:56:52 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:56:57 runner 2 connected 2025/08/18 04:57:41 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 04:57:41 base crash: no output from test machine 2025/08/18 04:57:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 14, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 11383400020, "rpc sent": 1960480768, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 51843978, "vm restarts [base]": 76, "vm restarts [new]": 162 } 2025/08/18 04:57:56 base crash: no output from test machine 2025/08/18 04:58:30 runner 1 connected 2025/08/18 04:58:52 runner 3 connected 2025/08/18 04:58:59 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:00:30 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:01:40 new: boot error: can't ssh into the instance 2025/08/18 05:01:50 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:01:57 base crash: no output from test machine 2025/08/18 05:02:07 base: boot error: can't ssh into the instance 2025/08/18 05:02:47 runner 2 connected 2025/08/18 05:02:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec 
seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 17, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 11445192568, "rpc sent": 1960481592, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 53659352, "vm restarts [base]": 79, "vm restarts [new]": 162 } 2025/08/18 05:02:55 runner 0 connected 2025/08/18 05:03:30 base crash: no output from test machine 2025/08/18 05:03:52 base crash: no output from test machine 2025/08/18 05:04:20 runner 1 connected 2025/08/18 05:04:35 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:04:41 runner 3 connected 2025/08/18 05:05:54 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:06:42 repro finished 'general protection fault in __xfrm_state_insert', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 05:06:42 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/18 05:06:42 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/18 05:06:42 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/18 05:06:42 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/18 05:06:42 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/18 05:06:42 failed repro for "general protection fault in __xfrm_state_insert", err=%!s() 2025/08/18 05:06:42 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 05:06:42 "general protection fault in __xfrm_state_insert": saved crash log into 1755493602.crash.log 2025/08/18 05:06:42 "general protection fault in __xfrm_state_insert": saved repro log into 1755493602.repro.log 2025/08/18 05:07:05 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:07:46 base crash: no output from test machine 2025/08/18 05:07:47 new: boot error: can't ssh into the instance 2025/08/18 05:07:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, 
"executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 21, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 4, "prog exec time": 0, "reproducing": 7, "rpc recv": 11568777164, "rpc sent": 1960482448, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 56034383, "vm restarts [base]": 82, "vm restarts [new]": 162 } 2025/08/18 05:07:55 base crash: no output from test machine 2025/08/18 05:07:57 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:08:36 runner 2 connected 2025/08/18 05:08:45 runner 0 connected 2025/08/18 05:08:49 repro finished 'INFO: task hung in txBegin', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 05:08:49 start reproducing 'INFO: task hung in read_part_sector' 2025/08/18 05:08:49 failed repro for "INFO: task hung in txBegin", err=%!s() 2025/08/18 05:08:49 "INFO: task hung in txBegin": saved crash log into 1755493729.crash.log 2025/08/18 05:08:49 "INFO: task hung in txBegin": saved repro log into 1755493729.repro.log 2025/08/18 05:09:19 base crash: no output from test machine 2025/08/18 05:09:41 base crash: no output from test machine 2025/08/18 05:10:08 runner 1 connected 2025/08/18 05:10:30 runner 3 connected 2025/08/18 05:10:38 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:11:09 new: boot error: can't ssh into the instance 2025/08/18 05:12:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 29, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 4, "prog exec time": 0, "reproducing": 7, "rpc recv": 11692361924, "rpc sent": 1960483568, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 58947602, "vm restarts [base]": 86, "vm restarts [new]": 162 } 2025/08/18 05:13:17 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:13:35 base crash: no output from test machine 2025/08/18 05:13:44 base crash: no output from test machine 2025/08/18 05:14:25 runner 2 connected 2025/08/18 05:14:33 runner 0 connected 2025/08/18 05:14:36 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:15:05 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:15:08 base crash: no output from test machine 2025/08/18 05:15:29 base crash: no output from test machine 2025/08/18 05:15:57 runner 1 connected 2025/08/18 05:16:07 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:16:07 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=true crepro=false desc='WARNING: suspicious RCU usage in get_callchain_entry' hub=false from_dashboard=false 2025/08/18 05:16:07 found repro for "WARNING: suspicious RCU usage in get_callchain_entry" (orig title: "-SAME-", reliability: 1), took 9.41 minutes 2025/08/18 05:16:07 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/18 05:16:07 "WARNING: suspicious RCU usage in get_callchain_entry": saved crash log into 1755494167.crash.log 2025/08/18 05:16:07 "WARNING: suspicious RCU usage in get_callchain_entry": saved repro log into 1755494167.repro.log 2025/08/18 05:16:18 runner 3 connected 2025/08/18 05:16:47 new: boot error: can't ssh into the instance 2025/08/18 05:16:52 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:17:26 attempt #0 to run "WARNING: suspicious RCU usage in get_callchain_entry" on base: crashed with WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 05:17:26 crashes both: WARNING: suspicious RCU usage in get_callchain_entry / WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 05:17:50 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 34622, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40343, "coverage": 312566, "distributor delayed": 61932, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45717, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122023, "exec total [new]": 285593, "exec triage": 141048, "executor restarts": 1057, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 
0, "max signal": 315730, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 30, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 45363000000, "no exec requests": 323, "pending": 3, "prog exec time": 0, "reproducing": 7, "rpc recv": 11815946684, "rpc sent": 1960484688, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 64628036, "vm restarts [base]": 90, "vm restarts [new]": 162 } 2025/08/18 05:18:18 runner 0 connected 2025/08/18 05:18:21 reproducing crash 'kernel BUG in may_open': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/namei.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:19:24 base crash: no output from test machine 2025/08/18 05:20:13 runner 2 connected 2025/08/18 05:20:56 base crash: no output from test machine 2025/08/18 05:21:08 repro finished 'INFO: task hung in read_part_sector', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 05:21:08 failed repro for "INFO: task hung in read_part_sector", err=%!s() 2025/08/18 05:21:08 "INFO: task hung in read_part_sector": saved crash log into 1755494468.crash.log 2025/08/18 05:21:08 "INFO: task hung in read_part_sector": saved repro log into 1755494468.repro.log 2025/08/18 05:21:18 base crash: no output from test machine 2025/08/18 05:21:28 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:21:29 runner 1 connected 2025/08/18 05:22:15 runner 3 connected 2025/08/18 05:22:18 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:22:21 new: boot error: can't ssh into the instance 2025/08/18 05:22:50 STAT { "buffer too small": 0, "candidate triage jobs": 26, "candidates": 34600, "comps overflows": 0, "corpus": 44891, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40486, "coverage": 312566, "distributor delayed": 61958, "distributor undelayed": 61932, "distributor violated": 940, "exec candidate": 45739, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122628, "exec total [new]": 286216, "exec triage": 141051, "executor restarts": 1060, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 315759, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 34, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45668, "no exec duration": 182983000000, "no exec requests": 740, "pending": 3, "prog exec time": 244, "reproducing": 6, "rpc recv": 11940037496, "rpc sent": 1967610904, "signal": 307605, "smash jobs": 0, "triage jobs": 0, "vm output": 70228066, "vm restarts [base]": 93, "vm restarts [new]": 163 } 2025/08/18 
05:23:02 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:23:17 runner 0 connected 2025/08/18 05:23:31 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 05:24:16 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:24:16 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=true crepro=false desc='WARNING: suspicious RCU usage in get_callchain_entry' hub=false from_dashboard=false 2025/08/18 05:24:16 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/18 05:24:16 found repro for "WARNING: suspicious RCU usage in get_callchain_entry" (orig title: "-SAME-", reliability: 1), took 8.14 minutes 2025/08/18 05:24:16 "WARNING: suspicious RCU usage in get_callchain_entry": saved crash log into 1755494656.crash.log 2025/08/18 05:24:16 "WARNING: suspicious RCU usage in get_callchain_entry": saved repro log into 1755494656.repro.log 2025/08/18 05:24:27 runner 1 connected 2025/08/18 05:24:29 reproducing crash 'kernel BUG in may_open': reproducer is too unreliable: 0.10 2025/08/18 05:24:29 repro finished 'kernel BUG in may_open', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 05:24:29 failed repro for "kernel BUG in may_open", err=%!s() 2025/08/18 05:24:29 "kernel BUG in may_open": saved crash log into 1755494669.crash.log 2025/08/18 05:24:29 "kernel BUG in may_open": saved repro log into 1755494669.repro.log 2025/08/18 05:24:58 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 05:24:58 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 05:25:06 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/18 05:25:06 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/18 05:25:28 runner 2 connected 2025/08/18 05:25:41 attempt #0 to run "WARNING: suspicious RCU usage in get_callchain_entry" on base: crashed with WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 05:25:41 crashes both: WARNING: suspicious RCU usage in get_callchain_entry / WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 05:25:47 runner 1 connected 2025/08/18 05:25:48 runner 3 connected 2025/08/18 05:25:55 runner 0 connected 2025/08/18 05:26:30 runner 0 connected 2025/08/18 05:26:31 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/18 05:26:34 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 05:26:35 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/18 05:27:28 runner 2 connected 2025/08/18 05:27:31 runner 3 connected 2025/08/18 05:27:31 runner 0 connected 2025/08/18 05:27:32 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 05:27:44 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 05:27:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 05:27:50 STAT { "buffer too small": 0, "candidate triage jobs": 6, 
"candidates": 34546, "comps overflows": 0, "corpus": 44936, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 41035, "coverage": 312643, "distributor delayed": 62024, "distributor undelayed": 62022, "distributor violated": 944, "exec candidate": 45793, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 124762, "exec total [new]": 289143, "exec triage": 141212, "executor restarts": 1084, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 315829, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 34, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45697, "no exec duration": 494087000000, "no exec requests": 1625, "pending": 3, "prog exec time": 438, "reproducing": 4, "rpc recv": 12257741348, "rpc sent": 2000447112, "signal": 307682, "smash jobs": 0, "triage jobs": 0, "vm output": 72486656, "vm restarts [base]": 97, "vm restarts [new]": 169 } 2025/08/18 05:28:29 runner 2 connected 2025/08/18 05:28:41 runner 0 connected 2025/08/18 05:28:41 runner 1 connected 2025/08/18 05:29:27 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/18 05:30:09 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 05:30:24 runner 3 connected 2025/08/18 05:30:26 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 05:30:31 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 05:31:02 base: boot error: can't ssh into the instance 2025/08/18 05:31:06 runner 2 connected 2025/08/18 05:31:23 runner 0 connected 2025/08/18 05:31:28 runner 1 connected 2025/08/18 05:31:46 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 05:32:01 runner 1 connected 2025/08/18 05:32:23 new: boot error: can't ssh into the instance 2025/08/18 05:32:45 runner 0 connected 2025/08/18 05:32:50 STAT { "buffer too small": 0, "candidate triage jobs": 13, "candidates": 34440, "comps overflows": 0, "corpus": 45005, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 42234, "coverage": 312773, "distributor delayed": 62146, "distributor undelayed": 62139, "distributor violated": 956, "exec candidate": 45899, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 130892, "exec total [new]": 295471, "exec triage": 141506, "executor restarts": 1100, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 315998, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 34, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45787, "no exec duration": 494116000000, "no exec requests": 1626, "pending": 3, "prog exec time": 130, "reproducing": 4, "rpc recv": 12518475584, "rpc sent": 2053609728, "signal": 307813, "smash jobs": 0, "triage jobs": 0, "vm output": 74732762, "vm restarts [base]": 102, "vm restarts [new]": 173 } 2025/08/18 05:33:00 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/18 05:33:08 new: boot error: can't ssh into the instance 2025/08/18 
05:33:20 runner 3 connected 2025/08/18 05:33:29 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = false] 2025/08/18 05:33:32 base crash: possible deadlock in ocfs2_xattr_set 2025/08/18 05:33:57 runner 1 connected 2025/08/18 05:34:20 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 05:34:26 runner 1 connected 2025/08/18 05:34:31 runner 3 connected 2025/08/18 05:34:40 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 05:35:05 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/18 05:35:07 base crash: WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 05:35:16 runner 0 connected 2025/08/18 05:35:18 base crash: WARNING: suspicious RCU usage in get_callchain_entry 2025/08/18 05:35:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 05:35:37 runner 0 connected 2025/08/18 05:36:02 runner 3 connected 2025/08/18 05:36:04 runner 2 connected 2025/08/18 05:36:17 runner 3 connected 2025/08/18 05:36:31 runner 2 connected 2025/08/18 05:37:06 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/08/18 05:37:06 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/08/18 05:37:50 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 31818, "comps overflows": 0, "corpus": 45226, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 43302, "coverage": 313147, "distributor delayed": 62693, "distributor undelayed": 62670, "distributor violated": 985, "exec candidate": 48521, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 137706, "exec total [new]": 302005, "exec triage": 142307, "executor restarts": 1136, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 316427, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 34, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46044, "no exec duration": 494116000000, "no exec requests": 1626, "pending": 4, "prog exec time": 354, "reproducing": 4, "rpc recv": 12893814344, "rpc sent": 2136383832, "signal": 308184, "smash jobs": 0, "triage jobs": 0, "vm output": 77307000, "vm restarts [base]": 107, "vm restarts [new]": 178 } 2025/08/18 05:38:03 runner 3 connected 2025/08/18 05:38:04 base crash: KASAN: slab-use-after-free Read in xfrm_state_find 2025/08/18 05:38:33 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:39:03 runner 0 connected 2025/08/18 05:39:19 repro finished 'general protection fault in xfrm_alloc_spi', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 05:39:19 failed repro for "general protection fault in xfrm_alloc_spi", err=%!s() 2025/08/18 05:39:19 "general protection fault in xfrm_alloc_spi": saved crash log into 1755495559.crash.log 2025/08/18 05:39:19 "general protection fault in xfrm_alloc_spi": saved repro log into 1755495559.repro.log 2025/08/18 05:39:27 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:39:41 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/18 05:40:10 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:40:17 runner 5 connected 2025/08/18 05:40:37 runner 1 connected 2025/08/18 05:41:01 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/18 05:41:06 runner 4 connected 2025/08/18 05:41:42 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 05:41:58 runner 3 connected 2025/08/18 05:42:05 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 05:42:41 runner 3 connected 2025/08/18 05:42:50 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 21562, "comps overflows": 0, "corpus": 45445, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45188, "coverage": 313535, "distributor delayed": 63032, "distributor undelayed": 63032, "distributor violated": 1018, "exec candidate": 58777, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 148417, "exec total [new]": 313015, "exec triage": 143049, "executor restarts": 1166, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 316814, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 34, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46261, "no exec duration": 494311000000, "no exec requests": 1627, "pending": 4, "prog exec time": 309, "reproducing": 3, "rpc recv": 13111811180, "rpc sent": 2224617544, "signal": 308576, "smash jobs": 0, "triage jobs": 0, "vm output": 79293106, "vm restarts [base]": 109, "vm restarts [new]": 183 } 2025/08/18 05:42:57 patched crashed: general protection fault in xfrm_state_find [need repro = true] 2025/08/18 05:42:57 scheduled a reproduction of 'general protection fault in xfrm_state_find' 2025/08/18 05:42:57 start reproducing 'general protection fault in xfrm_state_find' 2025/08/18 05:43:01 runner 4 connected 2025/08/18 05:43:32 base crash: WARNING in xfrm_state_fini 2025/08/18 05:43:36 new: boot error: can't ssh into the instance 2025/08/18 05:44:00 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false] 2025/08/18 05:44:31 runner 2 connected 2025/08/18 05:44:35 patched crashed: WARNING in rate_control_rate_init [need repro = true] 2025/08/18 05:44:35 scheduled a reproduction of 'WARNING in rate_control_rate_init' 2025/08/18 05:44:35 start reproducing 'WARNING in rate_control_rate_init' 2025/08/18 05:45:16 base crash: WARNING in xfrm_state_fini 2025/08/18 05:45:32 reproducing crash 'general protection fault in xfrm_state_find': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f net/xfrm/xfrm_state.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:45:34 runner 5 connected 2025/08/18 05:46:13 runner 1 connected 2025/08/18 
05:46:20 base crash: lost connection to test machine 2025/08/18 05:46:45 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 05:47:17 runner 3 connected 2025/08/18 05:47:50 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 14448, "comps overflows": 0, "corpus": 45493, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 46498, "coverage": 313649, "distributor delayed": 63149, "distributor undelayed": 63144, "distributor violated": 1033, "exec candidate": 65891, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 156725, "exec total [new]": 320373, "exec triage": 143288, "executor restarts": 1185, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 316993, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 35, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46330, "no exec duration": 871859000000, "no exec requests": 3041, "pending": 4, "prog exec time": 227, "reproducing": 5, "rpc recv": 13310487780, "rpc sent": 2280985600, "signal": 308678, "smash jobs": 0, "triage jobs": 0, "vm output": 80782408, "vm restarts [base]": 112, "vm restarts [new]": 185 } 2025/08/18 05:49:33 new: boot error: can't ssh into the instance 2025/08/18 05:52:42 base crash: general protection fault in __xfrm_state_lookup 2025/08/18 05:52:50 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 8145, "comps overflows": 0, "corpus": 45535, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 47850, "coverage": 313742, "distributor delayed": 63184, "distributor undelayed": 63184, "distributor violated": 1038, "exec candidate": 72194, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 163194, "exec total [new]": 326887, "exec triage": 143498, "executor restarts": 1192, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317105, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 36, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46382, "no exec duration": 2184575000000, "no exec requests": 7655, "pending": 4, "prog exec time": 225, "reproducing": 5, "rpc recv": 13320895884, "rpc sent": 2327893248, "signal": 308764, "smash jobs": 0, "triage jobs": 0, "vm output": 82458472, "vm restarts [base]": 112, "vm restarts [new]": 185 } 2025/08/18 05:53:20 triaged 90.7% of the corpus 2025/08/18 05:53:54 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 05:54:16 patched crashed: lost connection to test machine [need repro = false] 2025/08/18 05:54:44 runner 3 connected 2025/08/18 05:55:05 runner 5 connected 2025/08/18 05:56:04 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:56:51 new: boot error: can't ssh into the instance 2025/08/18 05:57:40 runner 4 
connected 2025/08/18 05:57:50 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 3884, "comps overflows": 0, "corpus": 45551, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48670, "coverage": 313795, "distributor delayed": 63202, "distributor undelayed": 63202, "distributor violated": 1038, "exec candidate": 76455, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 167608, "exec total [new]": 331265, "exec triage": 143610, "executor restarts": 1206, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 317229, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 37, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46410, "no exec duration": 2893478000000, "no exec requests": 10167, "pending": 4, "prog exec time": 300, "reproducing": 5, "rpc recv": 13420106020, "rpc sent": 2366354424, "signal": 308817, "smash jobs": 0, "triage jobs": 0, "vm output": 84276526, "vm restarts [base]": 112, "vm restarts [new]": 188 } 2025/08/18 05:58:08 reproducing crash 'kernel BUG in jfs_evict_inode': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/18 05:59:08 base crash: WARNING in xfrm_state_fini 2025/08/18 05:59:39 new: boot error: can't ssh into the instance 2025/08/18 05:59:56 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/18 06:00:05 runner 3 connected 2025/08/18 06:00:07 new: boot error: can't ssh into the instance 2025/08/18 06:00:54 runner 3 connected 2025/08/18 06:01:08 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63061: connect: connection refused 2025/08/18 06:01:08 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:63061: connect: connection refused 2025/08/18 06:01:18 base crash: lost connection to test machine 2025/08/18 06:01:20 base crash: kernel BUG in txUnlock 2025/08/18 06:01:21 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/18 06:02:14 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 06:02:16 runner 1 connected 2025/08/18 06:02:16 runner 2 connected 2025/08/18 06:02:17 runner 4 connected 2025/08/18 06:02:47 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/18 06:02:47 base: boot error: can't ssh into the instance 2025/08/18 06:02:50 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 0, "corpus": 45588, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 49611, "coverage": 313861, "distributor delayed": 63281, "distributor undelayed": 63279, "distributor violated": 1046, "exec candidate": 80339, "exec collide": 304, "exec fuzz": 583, "exec gen": 27, "exec hints": 3, "exec inject": 0, "exec minimize": 18, "exec retries": 17, "exec seeds": 3, "exec smash": 6, "exec total [base]": 172354, "exec total [new]": 336286, "exec triage": 143803, "executor restarts": 1223, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 1, "max signal": 317375, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 
51, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46468, "no exec duration": 3080843000000, "no exec requests": 10986, "pending": 4, "prog exec time": 727, "reproducing": 5, "rpc recv": 13587863868, "rpc sent": 2428015904, "signal": 308881, "smash jobs": 1, "triage jobs": 6, "vm output": 86265988, "vm restarts [base]": 115, "vm restarts [new]": 190 } 2025/08/18 06:02:50 base crash: possible deadlock in ocfs2_xattr_set 2025/08/18 06:03:10 runner 3 connected 2025/08/18 06:03:43 base crash: possible deadlock in ocfs2_xattr_set 2025/08/18 06:03:44 runner 5 connected 2025/08/18 06:03:44 runner 0 connected 2025/08/18 06:03:48 runner 1 connected 2025/08/18 06:03:57 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/18 06:04:19 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 06:04:40 runner 3 connected 2025/08/18 06:04:56 runner 4 connected 2025/08/18 06:05:10 runner 1 connected 2025/08/18 06:05:20 base crash: possible deadlock in ocfs2_init_acl 2025/08/18 06:05:46 patched crashed: INFO: trying to register non-static key in ocfs2_dlm_shutdown [need repro = true] 2025/08/18 06:05:46 scheduled a reproduction of 'INFO: trying to register non-static key in ocfs2_dlm_shutdown' 2025/08/18 06:05:46 start reproducing 'INFO: trying to register non-static key in ocfs2_dlm_shutdown' 2025/08/18 06:05:53 base crash: lost connection to test machine 2025/08/18 06:06:17 runner 0 connected 2025/08/18 06:06:50 runner 3 connected 2025/08/18 06:07:45 status reporting terminated 2025/08/18 06:07:45 bug reporting terminated 2025/08/18 06:07:45 repro finished 'INFO: trying to register non-static key in ocfs2_dlm_shutdown', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 06:07:45 repro finished 'KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 06:07:46 syz-diff (base): kernel context loop terminated 2025/08/18 06:08:43 repro finished 'general protection fault in xfrm_state_find', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 06:10:03 repro finished 'WARNING in rate_control_rate_init', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 06:10:08 repro finished 'WARNING in xfrm6_tunnel_net_exit', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 06:12:02 repro finished 'kernel BUG in jfs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/18 06:15:51 syz-diff (new): kernel context loop terminated 2025/08/18 06:15:51 diff fuzzing terminated 2025/08/18 06:15:51 fuzzing is finished 2025/08/18 06:15:51 status at the end: Title On-Base On-Patched INFO: task hung in read_part_sector 1 crashes INFO: task hung in txBegin 1 crashes INFO: task hung in v9fs_evict_inode 1 crashes INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings 1 crashes KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb 1 crashes KASAN: slab-use-after-free Read in __xfrm_state_insert 1 crashes KASAN: slab-use-after-free Read in __xfrm_state_lookup 1 crashes 3 crashes KASAN: slab-use-after-free Read in l2cap_unregister_user 1 crashes KASAN: slab-use-after-free Read in xfrm_alloc_spi 8 crashes 9 crashes KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes 2 crashes KASAN: 
use-after-free Read in xfrm_alloc_spi 1 crashes WARNING in dbAdjTree 2 crashes 2 crashes WARNING in ext4_xattr_inode_lookup_create 1 crashes 4 crashes WARNING in rate_control_rate_init 1 crashes WARNING in xfrm6_tunnel_net_exit 1 crashes 3 crashes WARNING in xfrm_state_fini 8 crashes 13 crashes WARNING: suspicious RCU usage in get_callchain_entry 4 crashes 4 crashes[reproduced] general protection fault in __xfrm_state_insert 1 crashes general protection fault in __xfrm_state_lookup 1 crashes general protection fault in pcl818_ai_cancel 2 crashes 3 crashes general protection fault in xfrm_alloc_spi 1 crashes general protection fault in xfrm_state_find 1 crashes kernel BUG in jfs_evict_inode 3 crashes kernel BUG in may_open 1 crashes kernel BUG in txUnlock 4 crashes 8 crashes lost connection to test machine 5 crashes 14 crashes no output from test machine 23 crashes 1 crashes possible deadlock in attr_data_get_block 2 crashes 1 crashes possible deadlock in ocfs2_init_acl 16 crashes 36 crashes possible deadlock in ocfs2_reserve_suballoc_bits 13 crashes 26 crashes possible deadlock in ocfs2_try_remove_refcount_tree 15 crashes 29 crashes possible deadlock in ocfs2_xattr_set 5 crashes 12 crashes unregister_netdevice: waiting for DEV to become free 1 crashes 1 crashes
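Note on the recurring symbolization failures: every "failed to symbolize report: failed to start scripts/get_maintainer.pl ... fork/exec scripts/get_maintainer.pl: no such file or directory" entry above means the reproduction worker could not reach scripts/get_maintainer.pl via the relative path it uses (apparently resolved against the kernel source tree), so maintainer lookup was silently dropped for each report. A small pre-flight check along the following lines could surface that before a run starts. This is only a sketch, not part of syzkaller; the KERNEL_SRC environment variable and the checkGetMaintainer helper are illustrative assumptions.

// Hypothetical pre-flight check: verify that the kernel source tree used for
// report symbolization actually contains an executable scripts/get_maintainer.pl,
// so a missing script is reported once up front instead of on every crash report.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func checkGetMaintainer(kernelSrc string) error {
	script := filepath.Join(kernelSrc, "scripts", "get_maintainer.pl")
	info, err := os.Stat(script)
	if err != nil {
		return fmt.Errorf("maintainer lookup will fail: %w", err)
	}
	if info.Mode()&0o111 == 0 {
		return fmt.Errorf("%s exists but is not executable", script)
	}
	return nil
}

func main() {
	// kernelSrc is an assumed input; the real value would come from the manager config.
	kernelSrc := os.Getenv("KERNEL_SRC")
	if err := checkGetMaintainer(kernelSrc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("scripts/get_maintainer.pl found; reports can be attributed to maintainers")
}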
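For post-mortem analysis, the periodic STAT { ... } snapshots in this log are plain JSON objects with integer values, so they can be extracted and trended (corpus growth, coverage, reproducing count, VM restarts on base vs. patched). A rough sketch under the assumption that each snapshot appears in the saved log as a single "STAT { ... }" blob; the manager.log filename is a placeholder, and a json.Decoder is used so any log text trailing the closing brace is ignored.

// Sketch: pull the JSON objects that follow "STAT " out of a saved syz-diff log
// and print a few counters per snapshot.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("manager.log") // placeholder path for the saved log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // STAT snapshots are several KB long
	for sc.Scan() {
		line := sc.Text()
		idx := strings.Index(line, "STAT {")
		if idx < 0 {
			continue
		}
		// All values in the snapshots above are integers, so map[string]int64 suffices.
		var stat map[string]int64
		dec := json.NewDecoder(strings.NewReader(line[idx+len("STAT "):]))
		if err := dec.Decode(&stat); err != nil {
			continue // tolerate snapshots that were wrapped or truncated
		}
		fmt.Printf("corpus=%d coverage=%d reproducing=%d vm restarts base=%d new=%d\n",
			stat["corpus"], stat["coverage"], stat["reproducing"],
			stat["vm restarts [base]"], stat["vm restarts [new]"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}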