2025/08/19 01:32:31 extracted 303749 symbol hashes for base and 303751 for patched
2025/08/19 01:32:32 adding modified_functions to focus areas: ["__pfx_tear_down_vmas" "__pte_alloc" "__pte_alloc_kernel" "__se_sys_brk" "__se_sys_remap_file_pages" "__split_vma" "clear_gigantic_page" "commit_merge" "copy_folio_from_user" "copy_page_range" "copy_pmd_range" "copy_user_gigantic_page" "copy_user_large_folio" "do_swap_page" "do_vmi_align_munmap" "dup_mmap" "exit_mmap" "folio_zero_user" "free_pgtables" "mmap_region" "tear_down_vmas" "unmap_page_range" "unmap_region" "unmap_vmas" "vma_complete" "vma_iter_store_overwrite" "vma_link" "vma_modify" "vma_shrink" "vmf_insert_pfn_prot" "vms_clear_ptes" "vms_complete_munmap_vmas" "vms_gather_munmap_vmas"]
2025/08/19 01:32:32 adding directly modified files to focus areas: ["mm/internal.h" "mm/memory.c" "mm/mmap.c" "mm/vma.c" "mm/vma.h"]
2025/08/19 01:32:33 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/08/19 01:33:31 runner 3 connected
2025/08/19 01:33:31 runner 6 connected
2025/08/19 01:33:31 runner 2 connected
2025/08/19 01:33:31 runner 1 connected
2025/08/19 01:33:31 runner 8 connected
2025/08/19 01:33:31 runner 7 connected
2025/08/19 01:33:32 runner 1 connected
2025/08/19 01:33:32 runner 0 connected
2025/08/19 01:33:37 runner 3 connected
2025/08/19 01:33:38 runner 9 connected
2025/08/19 01:33:38 runner 5 connected
2025/08/19 01:33:38 runner 0 connected
2025/08/19 01:33:39 executor cover filter: 0 PCs
2025/08/19 01:33:39 initializing coverage information...
2025/08/19 01:33:44 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open 
/sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open 
/sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : 
/proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost 
[openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] 
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/19 01:33:44 base: machine check complete 2025/08/19 01:33:44 discovered 7699 source files, 338620 symbols 2025/08/19 01:33:45 coverage filter: __pfx_tear_down_vmas: [] 2025/08/19 01:33:45 coverage filter: __pte_alloc: [__pte_alloc __pte_alloc_kernel] 2025/08/19 01:33:45 coverage filter: __pte_alloc_kernel: [] 2025/08/19 01:33:45 coverage filter: __se_sys_brk: [__se_sys_brk] 2025/08/19 01:33:45 coverage filter: __se_sys_remap_file_pages: [__se_sys_remap_file_pages] 2025/08/19 01:33:45 coverage filter: __split_vma: [__split_vma] 2025/08/19 01:33:45 coverage filter: clear_gigantic_page: [clear_gigantic_page] 2025/08/19 01:33:45 coverage filter: commit_merge: [commit_merge persistent_commit_merge] 2025/08/19 01:33:45 coverage filter: copy_folio_from_user: [copy_folio_from_user] 2025/08/19 01:33:45 coverage filter: copy_page_range: [copy_page_range] 2025/08/19 01:33:45 coverage filter: copy_pmd_range: [copy_pmd_range] 2025/08/19 01:33:45 coverage filter: copy_user_gigantic_page: [copy_user_gigantic_page] 2025/08/19 01:33:45 coverage filter: copy_user_large_folio: [copy_user_large_folio] 2025/08/19 01:33:45 coverage filter: do_swap_page: [do_swap_page] 2025/08/19 01:33:45 coverage filter: do_vmi_align_munmap: [do_vmi_align_munmap] 2025/08/19 01:33:45 coverage filter: dup_mmap: [dup_mmap uprobe_dup_mmap uprobe_end_dup_mmap uprobe_start_dup_mmap] 2025/08/19 01:33:45 coverage filter: exit_mmap: [__bpf_trace_exit_mmap __probestub_exit_mmap __traceiter_exit_mmap exit_mmap ldt_arch_exit_mmap perf_trace_exit_mmap trace_event_raw_event_exit_mmap trace_raw_output_exit_mmap] 2025/08/19 01:33:45 coverage filter: folio_zero_user: [folio_zero_user] 2025/08/19 01:33:45 coverage filter: free_pgtables: [free_pgtables] 2025/08/19 01:33:45 coverage filter: mmap_region: [mmap_region] 2025/08/19 01:33:45 coverage filter: tear_down_vmas: [tear_down_vmas] 2025/08/19 01:33:45 coverage filter: unmap_page_range: [unmap_page_range] 2025/08/19 01:33:45 coverage filter: unmap_region: [pcim_iounmap_region unmap_region] 2025/08/19 01:33:45 coverage filter: unmap_vmas: [unmap_vmas vms_complete_munmap_vmas vms_gather_munmap_vmas] 2025/08/19 01:33:45 coverage filter: vma_complete: [vma_complete] 2025/08/19 01:33:45 coverage filter: vma_iter_store_overwrite: [vma_iter_store_overwrite] 2025/08/19 01:33:45 coverage filter: vma_link: [vma_link vma_link_file] 
2025/08/19 01:33:45 coverage filter: vma_modify: [vma_modify vma_modify_flags vma_modify_flags_uffd vma_modify_name vma_modify_policy] 2025/08/19 01:33:45 coverage filter: vma_shrink: [vma_shrink] 2025/08/19 01:33:45 coverage filter: vmf_insert_pfn_prot: [vmf_insert_pfn_prot] 2025/08/19 01:33:45 coverage filter: vms_clear_ptes: [vms_clear_ptes] 2025/08/19 01:33:45 coverage filter: vms_complete_munmap_vmas: [] 2025/08/19 01:33:45 coverage filter: vms_gather_munmap_vmas: [] 2025/08/19 01:33:45 coverage filter: mm/internal.h: [] 2025/08/19 01:33:45 coverage filter: mm/memory.c: [mm/memory.c] 2025/08/19 01:33:45 coverage filter: mm/mmap.c: [arch/x86/mm/mmap.c mm/mmap.c] 2025/08/19 01:33:45 coverage filter: mm/vma.c: [mm/vma.c] 2025/08/19 01:33:45 coverage filter: mm/vma.h: [] 2025/08/19 01:33:45 area "symbols": 2891 PCs in the cover filter 2025/08/19 01:33:45 area "files": 8140 PCs in the cover filter 2025/08/19 01:33:45 area "": 0 PCs in the cover filter 2025/08/19 01:33:45 executor cover filter: 0 PCs 2025/08/19 01:33:49 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open 
/dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory 
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present 
write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] 
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3832/8048 2025/08/19 01:33:49 new: machine check complete 2025/08/19 01:33:50 new: adding 81150 seeds 2025/08/19 01:34:12 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/19 01:34:12 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/19 01:34:34 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/08/19 01:34:34 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/08/19 01:34:38 base crash: WARNING in xfrm_state_fini 2025/08/19 01:35:09 runner 3 connected 2025/08/19 01:35:31 runner 6 connected 2025/08/19 01:35:35 runner 3 connected 2025/08/19 01:35:38 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 01:35:59 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 01:36:35 runner 9 connected 2025/08/19 01:36:41 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 01:36:57 runner 0 connected 2025/08/19 01:37:34 STAT { "buffer too small": 0, "candidate triage jobs": 51, "candidates": 77385, "comps overflows": 0, "corpus": 3695, "corpus [files]": 5802, "corpus [symbols]": 521, "cover overflows": 2712, "coverage": 160607, "distributor delayed": 4370, "distributor undelayed": 4370, "distributor violated": 0, "exec candidate": 3765, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7758, "exec total [new]": 17129, "exec triage": 11860, "executor restarts": 99, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 162734, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3765, "no exec duration": 35499000000, "no exec requests": 297, "pending": 2, "prog exec time": 277, "reproducing": 0, "rpc recv": 911002996, "rpc sent": 97361752, "signal": 157927, "smash jobs": 0, "triage jobs": 0, "vm output": 2030288, "vm restarts [base]": 4, "vm restarts [new]": 13 } 2025/08/19 01:37:38 runner 5 connected 2025/08/19 01:37:49 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 01:37:49 scheduled a reproduction of 'possible 
deadlock in ocfs2_init_acl' 2025/08/19 01:38:45 runner 2 connected 2025/08/19 01:39:41 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 01:39:41 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 01:39:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 01:39:53 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 01:40:04 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 01:40:04 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 01:40:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 01:40:04 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 01:40:21 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 01:40:38 runner 2 connected 2025/08/19 01:40:40 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 01:40:49 runner 0 connected 2025/08/19 01:41:00 runner 6 connected 2025/08/19 01:41:01 runner 1 connected 2025/08/19 01:41:17 runner 1 connected 2025/08/19 01:41:26 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 01:41:26 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 01:41:36 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/19 01:41:36 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 01:41:37 runner 5 connected 2025/08/19 01:41:37 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/19 01:41:37 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 01:41:39 patched crashed: WARNING in ext4_xattr_inode_lookup_create [need repro = true] 2025/08/19 01:41:39 scheduled a reproduction of 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 01:42:24 runner 7 connected 2025/08/19 01:42:31 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 01:42:34 runner 3 connected 2025/08/19 01:42:34 runner 8 connected 2025/08/19 01:42:34 STAT { "buffer too small": 0, "candidate triage jobs": 56, "candidates": 72816, "comps overflows": 0, "corpus": 8229, "corpus [files]": 10904, "corpus [symbols]": 859, "cover overflows": 6083, "coverage": 198935, "distributor delayed": 10553, "distributor undelayed": 10532, "distributor violated": 28, "exec candidate": 8334, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 17294, "exec total [new]": 37812, "exec triage": 26142, "executor restarts": 146, "fault jobs": 0, "fuzzer jobs": 56, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 201555, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 8334, "no exec duration": 35561000000, "no exec requests": 301, "pending": 11, "prog exec time": 210, "reproducing": 0, "rpc recv": 1577334228, "rpc sent": 210835288, "signal": 195040, "smash jobs": 0, "triage jobs": 0, "vm output": 4318377, "vm restarts [base]": 5, "vm restarts [new]": 23 } 2025/08/19 
01:42:36 runner 1 connected 2025/08/19 01:42:40 new: boot error: can't ssh into the instance 2025/08/19 01:42:40 base: boot error: can't ssh into the instance 2025/08/19 01:43:05 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 01:43:05 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 01:43:19 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 01:43:19 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 01:43:28 runner 9 connected 2025/08/19 01:43:37 runner 2 connected 2025/08/19 01:43:37 base crash: WARNING in xfrm_state_fini 2025/08/19 01:43:37 runner 4 connected 2025/08/19 01:44:02 runner 0 connected 2025/08/19 01:44:34 runner 0 connected 2025/08/19 01:44:43 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/08/19 01:44:43 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/08/19 01:45:39 runner 1 connected 2025/08/19 01:46:29 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 01:46:30 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = true] 2025/08/19 01:46:30 scheduled a reproduction of 'KASAN: slab-use-after-free Read in xfrm_state_find' 2025/08/19 01:46:31 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 01:46:46 base crash: KASAN: slab-use-after-free Read in xfrm_state_find 2025/08/19 01:47:17 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/19 01:47:17 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/19 01:47:19 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/19 01:47:19 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/19 01:47:26 runner 5 connected 2025/08/19 01:47:27 runner 6 connected 2025/08/19 01:47:30 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/19 01:47:30 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/19 01:47:30 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 01:47:34 STAT { "buffer too small": 0, "candidate triage jobs": 98, "candidates": 67890, "comps overflows": 0, "corpus": 13069, "corpus [files]": 15824, "corpus [symbols]": 1206, "cover overflows": 9528, "coverage": 225033, "distributor delayed": 15927, "distributor undelayed": 15856, "distributor violated": 29, "exec candidate": 13260, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 27557, "exec total [new]": 61228, "exec triage": 41347, "executor restarts": 195, "fault jobs": 0, "fuzzer jobs": 98, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 227586, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 13260, "no exec duration": 35879000000, "no exec requests": 304, "pending": 18, "prog exec time": 255, "reproducing": 0, "rpc recv": 2288012996, "rpc sent": 341253456, "signal": 220738, "smash jobs": 0, "triage jobs": 0, "vm output": 6623877, "vm restarts [base]": 7, "vm restarts [new]": 30 } 2025/08/19 01:47:42 patched crashed: WARNING in dbAdjTree [need repro = true] 2025/08/19 01:47:42 scheduled a reproduction of 'WARNING in dbAdjTree' 2025/08/19 01:47:43 runner 2 
connected 2025/08/19 01:48:00 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/08/19 01:48:00 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/08/19 01:48:14 runner 3 connected 2025/08/19 01:48:15 runner 4 connected 2025/08/19 01:48:22 patched crashed: INFO: trying to register non-static key in ocfs2_dlm_shutdown [need repro = true] 2025/08/19 01:48:22 scheduled a reproduction of 'INFO: trying to register non-static key in ocfs2_dlm_shutdown' 2025/08/19 01:48:27 runner 2 connected 2025/08/19 01:48:28 runner 0 connected 2025/08/19 01:48:31 runner 1 connected 2025/08/19 01:48:50 runner 0 connected 2025/08/19 01:48:54 base crash: kernel BUG in jfs_evict_inode 2025/08/19 01:49:05 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 01:49:19 runner 9 connected 2025/08/19 01:49:37 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 01:49:52 runner 1 connected 2025/08/19 01:50:02 runner 2 connected 2025/08/19 01:50:17 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 01:50:19 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:50:31 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:50:33 runner 3 connected 2025/08/19 01:50:36 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 01:50:36 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 01:50:37 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/08/19 01:50:37 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/08/19 01:50:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:51:08 runner 9 connected 2025/08/19 01:51:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:51:16 runner 5 connected 2025/08/19 01:51:26 runner 2 connected 2025/08/19 01:51:27 runner 8 connected 2025/08/19 01:51:34 runner 6 connected 2025/08/19 01:51:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:51:54 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 01:52:02 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 01:52:05 runner 1 connected 2025/08/19 01:52:23 runner 0 connected 2025/08/19 01:52:34 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 64616, "comps overflows": 0, "corpus": 16376, "corpus [files]": 19049, "corpus [symbols]": 1363, "cover overflows": 11631, "coverage": 241165, "distributor delayed": 21204, "distributor undelayed": 21204, "distributor violated": 74, "exec candidate": 16534, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 33545, "exec total [new]": 76499, "exec triage": 51437, "executor restarts": 268, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 243217, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 16534, "no exec duration": 35879000000, "no exec requests": 304, "pending": 23, "prog exec time": 216, "reproducing": 0, "rpc recv": 3179244616, "rpc sent": 455355472, 
"signal": 236634, "smash jobs": 0, "triage jobs": 0, "vm output": 9231002, "vm restarts [base]": 11, "vm restarts [new]": 44 } 2025/08/19 01:52:38 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 01:52:51 runner 1 connected 2025/08/19 01:53:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:53:25 new: boot error: can't ssh into the instance 2025/08/19 01:53:28 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 01:53:35 runner 5 connected 2025/08/19 01:53:50 runner 6 connected 2025/08/19 01:54:05 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 01:54:22 runner 7 connected 2025/08/19 01:55:02 runner 0 connected 2025/08/19 01:56:07 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 01:56:26 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/08/19 01:56:37 base: boot error: can't ssh into the instance 2025/08/19 01:56:51 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 01:56:51 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 01:57:04 runner 6 connected 2025/08/19 01:57:13 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 01:57:13 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 01:57:23 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 01:57:23 runner 8 connected 2025/08/19 01:57:34 STAT { "buffer too small": 0, "candidate triage jobs": 32, "candidates": 59502, "comps overflows": 0, "corpus": 21386, "corpus [files]": 23740, "corpus [symbols]": 1624, "cover overflows": 15716, "coverage": 256946, "distributor delayed": 26899, "distributor undelayed": 26897, "distributor violated": 85, "exec candidate": 21648, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 37205, "exec total [new]": 103942, "exec triage": 67483, "executor restarts": 309, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 259140, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 21648, "no exec duration": 36780000000, "no exec requests": 311, "pending": 25, "prog exec time": 138, "reproducing": 0, "rpc recv": 3781270280, "rpc sent": 581783312, "signal": 252117, "smash jobs": 0, "triage jobs": 0, "vm output": 11530333, "vm restarts [base]": 12, "vm restarts [new]": 50 } 2025/08/19 01:57:42 runner 0 connected 2025/08/19 01:57:54 base crash: KASAN: slab-use-after-free Read in rose_timer_expiry 2025/08/19 01:58:04 runner 1 connected 2025/08/19 01:58:12 runner 3 connected 2025/08/19 01:58:51 runner 1 connected 2025/08/19 01:59:24 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 01:59:24 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 02:00:21 runner 7 connected 2025/08/19 02:00:37 new: boot error: can't ssh into the instance 2025/08/19 02:00:50 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 02:00:50 scheduled a 
reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/19 02:01:02 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/08/19 02:01:02 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/08/19 02:01:18 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:01:18 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:01:27 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 02:01:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:01:28 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:01:29 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:01:29 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:01:29 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:01:29 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:01:33 runner 4 connected 2025/08/19 02:01:39 runner 2 connected 2025/08/19 02:01:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:01:39 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:01:50 runner 8 connected 2025/08/19 02:02:08 base: boot error: can't ssh into the instance 2025/08/19 02:02:08 runner 3 connected 2025/08/19 02:02:16 runner 5 connected 2025/08/19 02:02:19 runner 1 connected 2025/08/19 02:02:25 runner 7 connected 2025/08/19 02:02:25 runner 6 connected 2025/08/19 02:02:34 STAT { "buffer too small": 0, "candidate triage jobs": 42, "candidates": 54873, "comps overflows": 0, "corpus": 25935, "corpus [files]": 27884, "corpus [symbols]": 1852, "cover overflows": 19128, "coverage": 270121, "distributor delayed": 32082, "distributor undelayed": 32082, "distributor violated": 120, "exec candidate": 26277, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 39967, "exec total [new]": 129431, "exec triage": 81841, "executor restarts": 374, "fault jobs": 0, "fuzzer jobs": 42, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 272329, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 26277, "no exec duration": 36852000000, "no exec requests": 314, "pending": 33, "prog exec time": 290, "reproducing": 0, "rpc recv": 4536083124, "rpc sent": 706155160, "signal": 264929, "smash jobs": 0, "triage jobs": 0, "vm output": 14195386, "vm restarts [base]": 13, "vm restarts [new]": 62 } 2025/08/19 02:02:36 runner 0 connected 2025/08/19 02:02:56 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 02:02:56 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 02:03:04 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:03:04 runner 0 connected 2025/08/19 02:03:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:03:15 patched crashed: possible 
deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:03:34 base: boot error: can't ssh into the instance 2025/08/19 02:03:53 runner 8 connected 2025/08/19 02:04:01 runner 6 connected 2025/08/19 02:04:03 runner 7 connected 2025/08/19 02:04:12 runner 3 connected 2025/08/19 02:04:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 02:04:27 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 02:04:30 runner 2 connected 2025/08/19 02:04:31 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 02:04:31 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 02:04:33 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:04:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:04:34 patched crashed: WARNING in io_ring_exit_work [need repro = true] 2025/08/19 02:04:34 scheduled a reproduction of 'WARNING in io_ring_exit_work' 2025/08/19 02:04:38 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 02:04:38 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 02:04:42 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 02:04:42 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 02:04:51 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 02:04:51 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 02:04:57 patched crashed: possible deadlock in ocfs2_del_inode_from_orphan [need repro = true] 2025/08/19 02:04:57 scheduled a reproduction of 'possible deadlock in ocfs2_del_inode_from_orphan' 2025/08/19 02:04:59 base crash: unregister_netdevice: waiting for DEV to become free 2025/08/19 02:05:02 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true] 2025/08/19 02:05:02 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits' 2025/08/19 02:05:09 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 02:05:23 runner 9 connected 2025/08/19 02:05:23 runner 6 connected 2025/08/19 02:05:26 runner 3 connected 2025/08/19 02:05:28 runner 2 connected 2025/08/19 02:05:31 runner 5 connected 2025/08/19 02:05:32 runner 4 connected 2025/08/19 02:05:40 runner 7 connected 2025/08/19 02:05:47 runner 8 connected 2025/08/19 02:05:48 runner 1 connected 2025/08/19 02:05:53 runner 1 connected 2025/08/19 02:06:32 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:06:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:06:43 base: boot error: can't ssh into the instance 2025/08/19 02:06:45 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:06:49 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:06:57 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:07:24 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 02:07:32 runner 8 connected 2025/08/19 02:07:34 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 52076, "comps overflows": 0, "corpus": 28698, "corpus [files]": 30344, "corpus [symbols]": 1947, 
"cover overflows": 21121, "coverage": 277393, "distributor delayed": 35492, "distributor undelayed": 35484, "distributor violated": 121, "exec candidate": 29074, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 3, "exec seeds": 0, "exec smash": 0, "exec total [base]": 44676, "exec total [new]": 144105, "exec triage": 90392, "executor restarts": 445, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 279930, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29074, "no exec duration": 36885000000, "no exec requests": 315, "pending": 42, "prog exec time": 314, "reproducing": 0, "rpc recv": 5313639132, "rpc sent": 820271600, "signal": 272305, "smash jobs": 0, "triage jobs": 0, "vm output": 16841369, "vm restarts [base]": 16, "vm restarts [new]": 77 } 2025/08/19 02:07:39 runner 3 connected 2025/08/19 02:07:42 runner 2 connected 2025/08/19 02:07:45 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:07:45 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:07:47 runner 1 connected 2025/08/19 02:07:47 runner 3 connected 2025/08/19 02:07:55 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:07:55 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:08:05 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:08:05 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:08:13 runner 0 connected 2025/08/19 02:08:41 runner 4 connected 2025/08/19 02:08:52 runner 7 connected 2025/08/19 02:08:57 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 02:09:02 runner 9 connected 2025/08/19 02:09:45 runner 0 connected 2025/08/19 02:09:58 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:10:12 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:10:15 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:10:23 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:10:33 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:10:48 runner 3 connected 2025/08/19 02:10:55 base crash: INFO: task hung in rfkill_global_led_trigger_worker 2025/08/19 02:11:01 runner 4 connected 2025/08/19 02:11:05 runner 1 connected 2025/08/19 02:11:12 runner 6 connected 2025/08/19 02:11:23 runner 8 connected 2025/08/19 02:11:23 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:11:23 patched crashed: INFO: task hung in corrupted [need repro = true] 2025/08/19 02:11:23 scheduled a reproduction of 'INFO: task hung in corrupted' 2025/08/19 02:11:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:11:52 runner 3 connected 2025/08/19 02:12:13 runner 7 connected 2025/08/19 02:12:22 runner 2 connected 2025/08/19 02:12:34 STAT { "buffer too small": 0, "candidate triage jobs": 28, "candidates": 49757, "comps overflows": 0, "corpus": 30993, "corpus [files]": 
32359, "corpus [symbols]": 2032, "cover overflows": 22401, "coverage": 282994, "distributor delayed": 39433, "distributor undelayed": 39433, "distributor violated": 294, "exec candidate": 31393, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 47978, "exec total [new]": 156164, "exec triage": 97418, "executor restarts": 517, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 285517, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 31393, "no exec duration": 36982000000, "no exec requests": 317, "pending": 46, "prog exec time": 416, "reproducing": 0, "rpc recv": 6079241340, "rpc sent": 908949928, "signal": 278034, "smash jobs": 0, "triage jobs": 0, "vm output": 19438461, "vm restarts [base]": 20, "vm restarts [new]": 90 } 2025/08/19 02:13:03 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 02:13:48 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 02:14:06 patched crashed: possible deadlock in attr_data_get_block [need repro = true] 2025/08/19 02:14:06 scheduled a reproduction of 'possible deadlock in attr_data_get_block' 2025/08/19 02:14:13 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/08/19 02:14:15 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = true] 2025/08/19 02:14:15 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 02:14:38 new: boot error: can't ssh into the instance 2025/08/19 02:14:45 runner 2 connected 2025/08/19 02:14:56 runner 8 connected 2025/08/19 02:15:03 runner 6 connected 2025/08/19 02:15:11 runner 7 connected 2025/08/19 02:15:15 base: boot error: can't ssh into the instance 2025/08/19 02:15:28 runner 0 connected 2025/08/19 02:15:44 base crash: WARNING in xfrm_state_fini 2025/08/19 02:16:38 new: boot error: can't ssh into the instance 2025/08/19 02:16:39 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:16:39 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:16:41 runner 1 connected 2025/08/19 02:16:50 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:16:50 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:17:23 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 02:17:30 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:17:30 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:17:34 runner 5 connected 2025/08/19 02:17:34 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 46770, "comps overflows": 0, "corpus": 33930, "corpus [files]": 34928, "corpus [symbols]": 2161, "cover overflows": 24616, "coverage": 289509, "distributor delayed": 43179, "distributor undelayed": 43169, "distributor violated": 499, "exec candidate": 34380, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec 
smash": 0, "exec total [base]": 49729, "exec total [new]": 173049, "exec triage": 106525, "executor restarts": 577, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 292124, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 34380, "no exec duration": 37049000000, "no exec requests": 319, "pending": 51, "prog exec time": 999, "reproducing": 0, "rpc recv": 6506143936, "rpc sent": 996604728, "signal": 284573, "smash jobs": 0, "triage jobs": 0, "vm output": 21732553, "vm restarts [base]": 21, "vm restarts [new]": 96 } 2025/08/19 02:17:37 runner 8 connected 2025/08/19 02:17:39 runner 2 connected 2025/08/19 02:17:44 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/08/19 02:17:44 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/08/19 02:18:20 runner 6 connected 2025/08/19 02:18:22 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 02:18:30 base crash: INFO: task hung in rtnetlink_rcv_msg 2025/08/19 02:18:40 runner 1 connected 2025/08/19 02:18:41 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:19:19 runner 1 connected 2025/08/19 02:19:27 runner 0 connected 2025/08/19 02:19:34 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:19:37 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:19:38 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:19:39 runner 4 connected 2025/08/19 02:19:47 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:19:53 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:19:59 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:20:29 runner 0 connected 2025/08/19 02:20:35 runner 6 connected 2025/08/19 02:20:47 runner 5 connected 2025/08/19 02:20:50 runner 1 connected 2025/08/19 02:20:52 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 02:21:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:21:35 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:21:39 new: boot error: can't ssh into the instance 2025/08/19 02:21:43 base crash: WARNING in xfrm_state_fini 2025/08/19 02:21:44 runner 1 connected 2025/08/19 02:21:46 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:22:18 runner 7 connected 2025/08/19 02:22:29 runner 9 connected 2025/08/19 02:22:32 runner 0 connected 2025/08/19 02:22:33 runner 0 connected 2025/08/19 02:22:34 STAT { "buffer too small": 0, "candidate triage jobs": 135, "candidates": 44782, "comps overflows": 0, "corpus": 35786, "corpus [files]": 36530, "corpus [symbols]": 2201, "cover overflows": 25749, "coverage": 293786, "distributor delayed": 46797, "distributor undelayed": 46692, "distributor violated": 703, "exec candidate": 36368, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 52191, "exec total [new]": 183552, "exec 
triage": 112415, "executor restarts": 655, "fault jobs": 0, "fuzzer jobs": 135, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 296538, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36368, "no exec duration": 37049000000, "no exec requests": 319, "pending": 52, "prog exec time": 311, "reproducing": 0, "rpc recv": 7123945712, "rpc sent": 1081165368, "signal": 288887, "smash jobs": 0, "triage jobs": 0, "vm output": 24143875, "vm restarts [base]": 25, "vm restarts [new]": 108 } 2025/08/19 02:22:35 runner 1 connected 2025/08/19 02:23:03 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 02:23:03 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 02:23:04 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 02:23:04 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 02:23:05 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 02:23:05 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 02:23:06 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/08/19 02:23:06 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/08/19 02:23:09 base: boot error: can't ssh into the instance 2025/08/19 02:23:10 base crash: kernel BUG in txUnlock 2025/08/19 02:23:13 base crash: kernel BUG in txUnlock 2025/08/19 02:23:52 runner 6 connected 2025/08/19 02:23:54 runner 5 connected 2025/08/19 02:23:54 runner 9 connected 2025/08/19 02:23:59 runner 3 connected 2025/08/19 02:24:10 runner 0 connected 2025/08/19 02:24:25 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:24:35 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:25:07 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:25:13 runner 1 connected 2025/08/19 02:25:20 base: boot error: can't ssh into the instance 2025/08/19 02:25:32 runner 9 connected 2025/08/19 02:25:56 runner 3 connected 2025/08/19 02:26:10 runner 2 connected 2025/08/19 02:26:15 patched crashed: KASAN: slab-use-after-free Read in l2cap_unregister_user [need repro = true] 2025/08/19 02:26:15 scheduled a reproduction of 'KASAN: slab-use-after-free Read in l2cap_unregister_user' 2025/08/19 02:27:05 runner 0 connected 2025/08/19 02:27:20 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:27:34 STAT { "buffer too small": 0, "candidate triage jobs": 26, "candidates": 42682, "comps overflows": 0, "corpus": 37964, "corpus [files]": 38405, "corpus [symbols]": 2269, "cover overflows": 27023, "coverage": 298834, "distributor delayed": 50385, "distributor undelayed": 50384, "distributor violated": 907, "exec candidate": 38468, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 56425, "exec total [new]": 195261, "exec triage": 118863, "executor restarts": 730, "fault jobs": 0, "fuzzer jobs": 26, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 301485, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new 
inputs": 38468, "no exec duration": 37049000000, "no exec requests": 319, "pending": 57, "prog exec time": 304, "reproducing": 0, "rpc recv": 7756079416, "rpc sent": 1172129072, "signal": 293816, "smash jobs": 0, "triage jobs": 0, "vm output": 26640457, "vm restarts [base]": 29, "vm restarts [new]": 115 } 2025/08/19 02:27:36 new: boot error: can't ssh into the instance 2025/08/19 02:27:41 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:28:12 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:28:17 runner 5 connected 2025/08/19 02:28:22 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:28:24 runner 3 connected 2025/08/19 02:28:31 runner 6 connected 2025/08/19 02:29:08 runner 0 connected 2025/08/19 02:29:19 runner 1 connected 2025/08/19 02:29:34 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/08/19 02:29:40 new: boot error: can't ssh into the instance 2025/08/19 02:29:43 new: boot error: can't ssh into the instance 2025/08/19 02:29:58 base crash: WARNING in xfrm_state_fini 2025/08/19 02:30:32 runner 0 connected 2025/08/19 02:30:38 runner 2 connected 2025/08/19 02:30:40 runner 8 connected 2025/08/19 02:30:56 runner 2 connected 2025/08/19 02:31:01 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/19 02:31:04 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/19 02:31:04 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/08/19 02:31:11 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/08/19 02:31:11 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/08/19 02:31:58 runner 3 connected 2025/08/19 02:32:01 runner 2 connected 2025/08/19 02:32:01 runner 5 connected 2025/08/19 02:32:08 runner 6 connected 2025/08/19 02:32:34 STAT { "buffer too small": 0, "candidate triage jobs": 32, "candidates": 40190, "comps overflows": 0, "corpus": 40395, "corpus [files]": 40458, "corpus [symbols]": 2397, "cover overflows": 29597, "coverage": 303708, "distributor delayed": 53743, "distributor undelayed": 53743, "distributor violated": 939, "exec candidate": 40960, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 64314, "exec total [new]": 214216, "exec triage": 126656, "executor restarts": 777, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 306426, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40960, "no exec duration": 37049000000, "no exec requests": 319, "pending": 58, "prog exec time": 286, "reproducing": 0, "rpc recv": 8363946412, "rpc sent": 1329122072, "signal": 298495, "smash jobs": 0, "triage jobs": 0, "vm output": 29149424, "vm restarts [base]": 31, "vm restarts [new]": 126 } 2025/08/19 02:33:09 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:33:11 new: boot error: can't ssh into the instance 2025/08/19 02:33:14 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:33:16 base: boot error: can't ssh into the instance 2025/08/19 02:33:18 base crash: possible deadlock in ocfs2_xattr_set 
2025/08/19 02:33:20 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:34:08 runner 4 connected 2025/08/19 02:34:13 runner 1 connected 2025/08/19 02:34:14 runner 0 connected 2025/08/19 02:34:17 runner 2 connected 2025/08/19 02:34:33 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:34:46 base crash: possible deadlock in ocfs2_xattr_set 2025/08/19 02:35:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:35:15 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:35:18 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:35:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:35:29 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:35:29 runner 1 connected 2025/08/19 02:35:29 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:35:37 runner 2 connected 2025/08/19 02:35:56 runner 2 connected 2025/08/19 02:36:06 runner 9 connected 2025/08/19 02:36:14 runner 4 connected 2025/08/19 02:36:18 runner 7 connected 2025/08/19 02:36:19 runner 5 connected 2025/08/19 02:36:53 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/19 02:37:00 base crash: kernel BUG in txUnlock 2025/08/19 02:37:15 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:37:19 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:37:30 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:37:34 STAT { "buffer too small": 0, "candidate triage jobs": 16, "candidates": 38738, "comps overflows": 0, "corpus": 41818, "corpus [files]": 41581, "corpus [symbols]": 2477, "cover overflows": 31933, "coverage": 306766, "distributor delayed": 55565, "distributor undelayed": 55565, "distributor violated": 940, "exec candidate": 42412, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 74206, "exec total [new]": 229783, "exec triage": 131197, "executor restarts": 837, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 309577, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42412, "no exec duration": 37065000000, "no exec requests": 320, "pending": 58, "prog exec time": 386, "reproducing": 0, "rpc recv": 8855962264, "rpc sent": 1477420520, "signal": 301501, "smash jobs": 0, "triage jobs": 0, "vm output": 31871231, "vm restarts [base]": 34, "vm restarts [new]": 134 } 2025/08/19 02:37:57 runner 2 connected 2025/08/19 02:38:08 runner 7 connected 2025/08/19 02:38:12 runner 0 connected 2025/08/19 02:38:12 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown 2025/08/19 02:38:19 runner 5 connected 2025/08/19 02:38:22 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:39:09 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:39:16 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:39:19 runner 4 connected 2025/08/19 02:39:30 
patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:39:41 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:39:59 runner 0 connected 2025/08/19 02:40:13 runner 2 connected 2025/08/19 02:40:19 runner 1 connected 2025/08/19 02:40:21 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:40:25 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:40:31 runner 5 connected 2025/08/19 02:40:32 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:40:34 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 02:40:38 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:41:16 runner 9 connected 2025/08/19 02:41:21 runner 0 connected 2025/08/19 02:41:24 runner 4 connected 2025/08/19 02:41:33 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:41:34 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:42:07 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:42:07 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:42:17 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:42:17 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:42:24 runner 5 connected 2025/08/19 02:42:27 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:42:27 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:42:30 runner 1 connected 2025/08/19 02:42:34 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 37895, "comps overflows": 0, "corpus": 42635, "corpus [files]": 42248, "corpus [symbols]": 2521, "cover overflows": 33787, "coverage": 308590, "distributor delayed": 57013, "distributor undelayed": 57010, "distributor violated": 953, "exec candidate": 43255, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 76713, "exec total [new]": 240647, "exec triage": 133811, "executor restarts": 895, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311581, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43255, "no exec duration": 37220000000, "no exec requests": 323, "pending": 61, "prog exec time": 263, "reproducing": 0, "rpc recv": 9350148020, "rpc sent": 1558995280, "signal": 303368, "smash jobs": 0, "triage jobs": 0, "vm output": 34035882, "vm restarts [base]": 38, "vm restarts [new]": 144 } 2025/08/19 02:42:39 base crash: WARNING in xfrm_state_fini 2025/08/19 02:42:56 runner 9 connected 2025/08/19 02:43:15 new: boot error: can't ssh into the instance 2025/08/19 02:43:16 runner 2 connected 2025/08/19 02:43:20 new: boot error: can't ssh into the instance 2025/08/19 02:43:27 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:43:28 runner 0 connected 2025/08/19 02:44:04 runner 0 connected 2025/08/19 02:44:09 runner 
8 connected 2025/08/19 02:44:32 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:44:42 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:45:22 runner 1 connected 2025/08/19 02:45:31 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:45:32 runner 4 connected 2025/08/19 02:45:35 new: boot error: can't ssh into the instance 2025/08/19 02:45:42 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:46:15 patched crashed: general protection fault in pcl818_ai_cancel [need repro = true] 2025/08/19 02:46:15 scheduled a reproduction of 'general protection fault in pcl818_ai_cancel' 2025/08/19 02:46:19 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:46:20 runner 8 connected 2025/08/19 02:46:24 runner 3 connected 2025/08/19 02:46:31 runner 0 connected 2025/08/19 02:46:33 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:46:59 base: boot error: can't ssh into the instance 2025/08/19 02:47:07 runner 2 connected 2025/08/19 02:47:16 runner 9 connected 2025/08/19 02:47:24 runner 1 connected 2025/08/19 02:47:34 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 37253, "comps overflows": 0, "corpus": 43260, "corpus [files]": 42750, "corpus [symbols]": 2543, "cover overflows": 35504, "coverage": 309824, "distributor delayed": 58234, "distributor undelayed": 58232, "distributor violated": 967, "exec candidate": 43897, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 79473, "exec total [new]": 250813, "exec triage": 135778, "executor restarts": 955, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 312864, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43897, "no exec duration": 37220000000, "no exec requests": 323, "pending": 62, "prog exec time": 310, "reproducing": 0, "rpc recv": 9863897380, "rpc sent": 1650790328, "signal": 304831, "smash jobs": 0, "triage jobs": 0, "vm output": 35988880, "vm restarts [base]": 39, "vm restarts [new]": 156 } 2025/08/19 02:47:39 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:47:59 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:48:18 base: boot error: can't ssh into the instance 2025/08/19 02:48:35 runner 3 connected 2025/08/19 02:48:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:48:48 runner 8 connected 2025/08/19 02:49:07 runner 1 connected 2025/08/19 02:49:33 runner 2 connected 2025/08/19 02:49:40 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:50:02 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:50:25 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:50:26 new: boot error: can't ssh into the instance 2025/08/19 02:50:29 runner 4 connected 2025/08/19 02:50:44 base: boot error: can't ssh into the instance 2025/08/19 02:50:52 runner 3 connected 
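A pattern worth noting in the crash stream: a patched-kernel crash is tagged [need repro = true] and followed by "scheduled a reproduction of ..." only while the same crash title has not also been observed on the base kernel; once base hits the same title (as happened above for WARNING in xfrm_state_fini, kernel BUG in txUnlock and the ocfs2 deadlocks), later patched occurrences are tagged [need repro = false], and infrastructure failures such as "lost connection to test machine" never request a repro in this log. The sketch below is a simplified reconstruction of that bookkeeping as observed here, not syzkaller's actual triage code.

// reproplan.go: a simplified model of the need-repro decision visible in the
// log above. Illustrative only.
package main

import "fmt"

type triage struct {
	seenOnBase map[string]bool // crash titles already observed on the base kernel
}

// baseCrash records a crash on the unpatched kernel; such titles are treated
// as pre-existing bugs rather than regressions introduced by the patch.
func (t *triage) baseCrash(title string) {
	t.seenOnBase[title] = true
	fmt.Printf("base crash: %s\n", title)
}

// patchedCrash asks for a reproduction only while the title is patched-only.
func (t *triage) patchedCrash(title string) {
	need := !t.seenOnBase[title]
	fmt.Printf("patched crashed: %s [need repro = %v]\n", title, need)
	if need {
		fmt.Printf("scheduled a reproduction of '%s'\n", title)
	}
}

func main() {
	t := &triage{seenOnBase: map[string]bool{}}
	t.patchedCrash("WARNING in xfrm_state_fini") // not yet seen on base: repro scheduled
	t.baseCrash("WARNING in xfrm_state_fini")    // the same warning then fires on base
	t.patchedCrash("WARNING in xfrm_state_fini") // now known on base: no repro needed
}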
2025/08/19 02:51:00 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:51:05 base crash: lost connection to test machine 2025/08/19 02:51:10 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:51:22 runner 9 connected 2025/08/19 02:51:23 runner 7 connected 2025/08/19 02:51:32 runner 2 connected 2025/08/19 02:51:49 runner 2 connected 2025/08/19 02:51:55 runner 1 connected 2025/08/19 02:52:01 runner 1 connected 2025/08/19 02:52:23 new: boot error: can't ssh into the instance 2025/08/19 02:52:27 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:52:34 STAT { "buffer too small": 0, "candidate triage jobs": 12, "candidates": 36354, "comps overflows": 0, "corpus": 44114, "corpus [files]": 43441, "corpus [symbols]": 2597, "cover overflows": 38032, "coverage": 311610, "distributor delayed": 59481, "distributor undelayed": 59481, "distributor violated": 976, "exec candidate": 44796, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 84836, "exec total [new]": 265782, "exec triage": 138590, "executor restarts": 1014, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 314761, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44792, "no exec duration": 37220000000, "no exec requests": 323, "pending": 62, "prog exec time": 375, "reproducing": 0, "rpc recv": 10332471236, "rpc sent": 1764352376, "signal": 306584, "smash jobs": 0, "triage jobs": 0, "vm output": 38490418, "vm restarts [base]": 42, "vm restarts [new]": 165 } 2025/08/19 02:53:01 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:53:24 runner 3 connected 2025/08/19 02:53:26 runner 6 connected 2025/08/19 02:53:27 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 02:53:33 new: boot error: can't ssh into the instance 2025/08/19 02:53:36 base crash: WARNING in xfrm_state_fini 2025/08/19 02:53:39 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:53:58 runner 1 connected 2025/08/19 02:54:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:54:11 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:54:24 runner 4 connected 2025/08/19 02:54:32 runner 5 connected 2025/08/19 02:54:33 runner 1 connected 2025/08/19 02:54:37 runner 2 connected 2025/08/19 02:54:56 runner 6 connected 2025/08/19 02:55:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:55:01 runner 7 connected 2025/08/19 02:55:12 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:55:18 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:55:19 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:55:20 base crash: general protection fault in pcl818_ai_cancel 2025/08/19 02:55:40 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:56:06 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 02:56:09 runner 4 connected 2025/08/19 02:56:15 
patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/08/19 02:56:15 runner 1 connected 2025/08/19 02:56:15 runner 0 connected 2025/08/19 02:56:17 runner 0 connected 2025/08/19 02:56:42 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 02:57:03 runner 2 connected 2025/08/19 02:57:05 base: boot error: can't ssh into the instance 2025/08/19 02:57:11 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:57:12 runner 8 connected 2025/08/19 02:57:19 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:57:34 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 35597, "comps overflows": 0, "corpus": 44791, "corpus [files]": 44003, "corpus [symbols]": 2649, "cover overflows": 41003, "coverage": 313044, "distributor delayed": 60378, "distributor undelayed": 60376, "distributor violated": 976, "exec candidate": 45553, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 88862, "exec total [new]": 282344, "exec triage": 140835, "executor restarts": 1086, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 316233, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45499, "no exec duration": 40670000000, "no exec requests": 330, "pending": 62, "prog exec time": 315, "reproducing": 0, "rpc recv": 10878985052, "rpc sent": 1886551832, "signal": 308036, "smash jobs": 0, "triage jobs": 0, "vm output": 41042534, "vm restarts [base]": 46, "vm restarts [new]": 176 } 2025/08/19 02:57:34 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:57:39 runner 5 connected 2025/08/19 02:57:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:58:15 runner 4 connected 2025/08/19 02:58:17 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 02:58:22 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:58:23 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 02:58:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:58:31 runner 7 connected 2025/08/19 02:58:31 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 02:58:36 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:58:42 runner 0 connected 2025/08/19 02:58:52 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 02:59:06 runner 2 connected 2025/08/19 02:59:11 runner 5 connected 2025/08/19 02:59:12 runner 0 connected 2025/08/19 02:59:13 runner 1 connected 2025/08/19 02:59:15 patched crashed: WARNING: suspicious RCU usage in get_callchain_entry [need repro = true] 2025/08/19 02:59:15 scheduled a reproduction of 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 02:59:22 runner 3 connected 2025/08/19 02:59:25 runner 8 connected 2025/08/19 02:59:42 runner 4 connected 2025/08/19 02:59:53 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = 
false] 2025/08/19 03:00:03 runner 9 connected 2025/08/19 03:00:37 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 03:00:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/08/19 03:00:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 03:00:50 runner 7 connected 2025/08/19 03:00:53 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 03:01:17 base crash: possible deadlock in attr_data_get_block 2025/08/19 03:01:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 03:01:26 runner 3 connected 2025/08/19 03:01:27 runner 5 connected 2025/08/19 03:01:31 runner 9 connected 2025/08/19 03:01:41 runner 4 connected 2025/08/19 03:01:47 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 03:01:58 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 03:02:05 patched crashed: lost connection to test machine [need repro = false] 2025/08/19 03:02:05 runner 0 connected 2025/08/19 03:02:07 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/08/19 03:02:13 runner 8 connected 2025/08/19 03:02:28 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/08/19 03:02:34 timed out waiting for coprus triage 2025/08/19 03:02:34 starting bug reproductions 2025/08/19 03:02:34 starting bug reproductions (max 10 VMs, 7 repros) 2025/08/19 03:02:34 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 92234, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 62, "prog exec time": 292, "reproducing": 0, "rpc recv": 11490136348, "rpc sent": 1977894008, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 42559074, "vm restarts [base]": 49, "vm restarts [new]": 192 } 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no 
longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 03:02:34 start reproducing 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "KASAN: slab-use-after-free Read in xfrm_state_find" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "INFO: trying to register non-static key in ocfs2_dlm_shutdown" aborted: it's no longer needed 2025/08/19 03:02:34 start reproducing 'WARNING in dbAdjTree' 2025/08/19 03:02:34 start reproducing 'INFO: task hung in __iterate_supers' 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_xattr_set" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2025/08/19 03:02:34 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 03:02:34 start reproducing 'WARNING in io_ring_exit_work' 2025/08/19 03:02:34 start reproducing 'possible deadlock in ocfs2_del_inode_from_orphan' 2025/08/19 03:02:49 runner 2 connected 2025/08/19 03:03:17 runner 0 connected 2025/08/19 03:03:55 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:04:37 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 03:04:46 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 03:05:06 new: boot error: can't ssh into the instance 2025/08/19 03:05:18 reproducing crash 'WARNING in ext4_xattr_inode_lookup_create': failed to symbolize report: failed to 
start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:05:26 runner 2 connected 2025/08/19 03:05:35 runner 0 connected 2025/08/19 03:05:46 base: boot error: can't ssh into the instance 2025/08/19 03:06:36 runner 1 connected 2025/08/19 03:06:52 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 03:07:11 base: boot error: can't ssh into the instance 2025/08/19 03:07:17 new: boot error: can't ssh into the instance 2025/08/19 03:07:18 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 03:07:29 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 03:07:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 95634, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 29, "prog exec time": 0, "reproducing": 7, "rpc recv": 11645491808, "rpc sent": 1991020216, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 43609622, "vm restarts [base]": 54, "vm restarts [new]": 192 } 2025/08/19 03:07:41 runner 2 connected 2025/08/19 03:08:01 runner 3 connected 2025/08/19 03:08:06 runner 0 connected 2025/08/19 03:08:16 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:08:17 runner 1 connected 2025/08/19 03:08:31 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 03:09:20 runner 2 connected 2025/08/19 03:12:11 new: boot error: can't ssh into the instance 2025/08/19 03:12:19 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:12:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 4, "fuzzing 
VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 29, "prog exec time": 0, "reproducing": 7, "rpc recv": 11800155292, "rpc sent": 1993190896, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 44908215, "vm restarts [base]": 59, "vm restarts [new]": 192 } 2025/08/19 03:12:40 reproducing crash 'WARNING in ext4_xattr_inode_lookup_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:13:33 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:13:36 base crash: no output from test machine 2025/08/19 03:13:37 base crash: no output from test machine 2025/08/19 03:13:40 base crash: no output from test machine 2025/08/19 03:14:01 new: boot error: can't ssh into the instance 2025/08/19 03:14:06 new: boot error: can't ssh into the instance 2025/08/19 03:14:20 base crash: no output from test machine 2025/08/19 03:14:25 runner 0 connected 2025/08/19 03:14:29 runner 3 connected 2025/08/19 03:14:52 reproducing crash 'WARNING in ext4_xattr_inode_lookup_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:15:08 runner 2 connected 2025/08/19 03:15:19 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:15:22 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:16:08 reproducing crash 'WARNING in ext4_xattr_inode_lookup_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:16:08 repro finished 'WARNING in ext4_xattr_inode_lookup_create', repro=true crepro=false desc='WARNING in ext4_xattr_inode_lookup_create' hub=false from_dashboard=false 2025/08/19 03:16:08 found repro for "WARNING in ext4_xattr_inode_lookup_create" (orig title: "-SAME-", reliability: 1), took 13.56 minutes 2025/08/19 03:16:08 reproduction of "possible deadlock in attr_data_get_block" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:16:08 start reproducing 'INFO: task hung in 
corrupted' 2025/08/19 03:16:08 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 03:16:08 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/08/19 03:16:08 "WARNING in ext4_xattr_inode_lookup_create": saved crash log into 1755573368.crash.log 2025/08/19 03:16:08 "WARNING in ext4_xattr_inode_lookup_create": saved repro log into 1755573368.repro.log 2025/08/19 03:16:32 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:16:36 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:16:54 repro finished 'possible deadlock in ocfs2_del_inode_from_orphan', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 03:16:54 reproduction of "WARNING in xfrm6_tunnel_net_exit" aborted: it's no longer needed 2025/08/19 03:16:54 reproduction of "general protection fault in pcl818_ai_cancel" aborted: it's no longer needed 2025/08/19 03:16:54 failed repro for "possible deadlock in ocfs2_del_inode_from_orphan", err=%!s() 2025/08/19 03:16:54 start reproducing 'KASAN: slab-use-after-free Read in l2cap_unregister_user' 2025/08/19 03:16:54 "possible deadlock in ocfs2_del_inode_from_orphan": saved crash log into 1755573414.crash.log 2025/08/19 03:16:54 "possible deadlock in ocfs2_del_inode_from_orphan": saved repro log into 1755573414.repro.log 2025/08/19 03:17:22 new: boot error: can't ssh into the instance 2025/08/19 03:17:28 attempt #0 to run "WARNING in ext4_xattr_inode_lookup_create" on base: crashed with WARNING in ext4_xattr_inode_lookup_create 2025/08/19 03:17:28 crashes both: WARNING in ext4_xattr_inode_lookup_create / WARNING in ext4_xattr_inode_lookup_create 2025/08/19 03:17:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 13, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no 
exec requests": 333, "pending": 16, "prog exec time": 0, "reproducing": 7, "rpc recv": 11892843476, "rpc sent": 1993191736, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 46510327, "vm restarts [base]": 62, "vm restarts [new]": 192 } 2025/08/19 03:18:08 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:18:08 repro finished 'WARNING in dbAdjTree', repro=true crepro=false desc='WARNING in dbAdjTree' hub=false from_dashboard=false 2025/08/19 03:18:08 start reproducing 'WARNING in ext4_xattr_inode_lookup_create' 2025/08/19 03:18:08 found repro for "WARNING in dbAdjTree" (orig title: "-SAME-", reliability: 1), took 15.55 minutes 2025/08/19 03:18:08 "WARNING in dbAdjTree": saved crash log into 1755573488.crash.log 2025/08/19 03:18:08 "WARNING in dbAdjTree": saved repro log into 1755573488.repro.log 2025/08/19 03:18:09 reproducing crash 'WARNING: suspicious RCU usage in get_callchain_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f kernel/events/callchain.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:18:09 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=true crepro=false desc='WARNING: suspicious RCU usage in get_callchain_entry' hub=false from_dashboard=false 2025/08/19 03:18:09 found repro for "WARNING: suspicious RCU usage in get_callchain_entry" (orig title: "-SAME-", reliability: 1), took 15.57 minutes 2025/08/19 03:18:09 start reproducing 'WARNING in dbAdjTree' 2025/08/19 03:18:09 "WARNING: suspicious RCU usage in get_callchain_entry": saved crash log into 1755573489.crash.log 2025/08/19 03:18:09 "WARNING: suspicious RCU usage in get_callchain_entry": saved repro log into 1755573489.repro.log 2025/08/19 03:18:12 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 03:18:12 start reproducing 'WARNING: suspicious RCU usage in get_callchain_entry' 2025/08/19 03:18:12 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/19 03:18:12 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755573492.crash.log 2025/08/19 03:18:12 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 1755573492.repro.log 2025/08/19 03:18:32 new: boot error: can't ssh into the instance 2025/08/19 03:19:28 base crash: no output from test machine 2025/08/19 03:19:33 attempt #0 to run "WARNING in dbAdjTree" on base: crashed with WARNING in dbAdjTree 2025/08/19 03:19:33 crashes both: WARNING in dbAdjTree / WARNING in dbAdjTree 2025/08/19 03:20:08 base crash: no output from test machine 2025/08/19 03:20:17 runner 3 connected 2025/08/19 03:20:22 runner 0 connected 2025/08/19 03:20:30 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:20:56 runner 2 connected 2025/08/19 03:21:47 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f 
fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:22:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 14, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 13, "prog exec time": 0, "reproducing": 7, "rpc recv": 11985531660, "rpc sent": 1993192576, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 47211159, "vm restarts [base]": 65, "vm restarts [new]": 192 } 2025/08/19 03:22:35 new: boot error: can't ssh into the instance 2025/08/19 03:22:43 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:23:43 base: boot error: can't ssh into the instance 2025/08/19 03:24:10 reproducing crash 'INFO: task hung in __iterate_supers': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/gpu/drm/drm_modeset_lock.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:24:10 repro finished 'INFO: task hung in __iterate_supers', repro=true crepro=false desc='WARNING in __ww_mutex_wound' hub=false from_dashboard=false 2025/08/19 03:24:10 found repro for "WARNING in __ww_mutex_wound" (orig title: "INFO: task hung in __iterate_supers", reliability: 1), took 21.54 minutes 2025/08/19 03:24:10 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 03:24:10 "WARNING in __ww_mutex_wound": saved crash log into 1755573850.crash.log 2025/08/19 03:24:10 "WARNING in __ww_mutex_wound": saved repro log into 1755573850.repro.log 2025/08/19 03:24:58 attempt #0 to run "WARNING: suspicious RCU usage in get_callchain_entry" on base: crashed with WARNING: suspicious RCU usage in get_callchain_entry 2025/08/19 03:24:58 crashes both: WARNING: suspicious RCU usage in get_callchain_entry / WARNING: suspicious RCU usage in get_callchain_entry 2025/08/19 03:25:16 base crash: no output from test machine 2025/08/19 03:25:56 base crash: no output from test machine 2025/08/19 03:25:57 attempt #0 to run "WARNING in __ww_mutex_wound" on base: crashed with WARNING in __ww_mutex_wound 2025/08/19 03:25:57 crashes both: WARNING in __ww_mutex_wound / WARNING in __ww_mutex_wound 2025/08/19 03:25:58 runner 1 connected 2025/08/19 03:26:05 runner 3 connected 2025/08/19 03:26:14 new: boot error: can't ssh into the instance 2025/08/19 03:26:41 new: boot error: can't ssh into the instance 2025/08/19 03:26:45 
runner 0 connected 2025/08/19 03:26:47 runner 2 connected 2025/08/19 03:27:13 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:27:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 12, "prog exec time": 0, "reproducing": 7, "rpc recv": 12109115908, "rpc sent": 1993193696, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 48076253, "vm restarts [base]": 69, "vm restarts [new]": 192 } 2025/08/19 03:28:13 new: boot error: can't ssh into the instance 2025/08/19 03:28:20 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:28:38 new: boot error: can't ssh into the instance 2025/08/19 03:29:33 reproducing crash 'WARNING in dbAdjTree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/08/19 03:29:33 repro finished 'WARNING in dbAdjTree', repro=true crepro=false desc='WARNING in dbAdjTree' hub=false from_dashboard=false 2025/08/19 03:29:33 found repro for "WARNING in dbAdjTree" (orig title: "-SAME-", reliability: 1), took 11.40 minutes 2025/08/19 03:29:33 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/19 03:29:33 reproduction of "WARNING in dbAdjTree" aborted: it's no longer needed 2025/08/19 03:29:33 start reproducing 'INFO: task hung in __iterate_supers' 2025/08/19 03:29:33 "WARNING in dbAdjTree": saved crash log into 1755574173.crash.log 2025/08/19 03:29:33 "WARNING in dbAdjTree": saved repro log into 1755574173.repro.log 2025/08/19 03:30:50 attempt #0 to run "WARNING in dbAdjTree" on base: crashed with WARNING in dbAdjTree 2025/08/19 03:30:50 crashes both: WARNING in dbAdjTree / WARNING in dbAdjTree 2025/08/19 03:30:57 base crash: no output from test machine 2025/08/19 03:31:05 base crash: no output from test machine 2025/08/19 03:31:40 runner 0 connected 2025/08/19 03:31:46 base crash: no output from test machine 2025/08/19 03:31:48 runner 1 connected 2025/08/19 03:31:54 runner 3 connected 2025/08/19 03:32:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, 
"corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 12201804092, "rpc sent": 1993194536, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 49928936, "vm restarts [base]": 72, "vm restarts [new]": 192 } 2025/08/19 03:32:36 runner 2 connected 2025/08/19 03:36:40 base crash: no output from test machine 2025/08/19 03:36:48 base crash: no output from test machine 2025/08/19 03:36:54 base crash: no output from test machine 2025/08/19 03:37:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 10, "prog exec time": 0, "reproducing": 7, "rpc recv": 12232700156, "rpc sent": 1993194816, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 51707489, "vm restarts [base]": 73, "vm restarts [new]": 192 } 2025/08/19 03:37:36 base crash: no output from test machine 2025/08/19 03:37:37 runner 1 connected 2025/08/19 03:37:43 runner 3 connected 2025/08/19 03:38:10 new: boot error: can't ssh into the instance 2025/08/19 03:38:23 runner 2 connected 2025/08/19 03:38:26 new: boot error: can't ssh into the instance 2025/08/19 03:38:37 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 03:38:37 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/19 03:38:37 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 03:38:37 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755574717.crash.log 2025/08/19 03:38:37 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 
1755574717.repro.log 2025/08/19 03:38:43 new: boot error: can't ssh into the instance 2025/08/19 03:40:01 new: boot error: can't ssh into the instance 2025/08/19 03:41:55 new: boot error: can't ssh into the instance 2025/08/19 03:42:12 new: boot error: can't ssh into the instance 2025/08/19 03:42:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 9, "prog exec time": 0, "reproducing": 7, "rpc recv": 12325388348, "rpc sent": 1993195656, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 53604968, "vm restarts [base]": 76, "vm restarts [new]": 192 } 2025/08/19 03:42:37 base crash: no output from test machine 2025/08/19 03:42:43 base crash: no output from test machine 2025/08/19 03:43:23 base crash: no output from test machine 2025/08/19 03:43:28 runner 1 connected 2025/08/19 03:43:31 runner 3 connected 2025/08/19 03:44:13 runner 2 connected 2025/08/19 03:46:45 base: boot error: can't ssh into the instance 2025/08/19 03:47:34 runner 0 connected 2025/08/19 03:47:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 9, "prog exec time": 0, "reproducing": 7, "rpc recv": 12418076696, "rpc sent": 1993196760, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 55952178, "vm restarts [base]": 80, "vm restarts [new]": 192 } 2025/08/19 03:48:27 base crash: no output from test machine 2025/08/19 03:48:31 base crash: no output from test machine 2025/08/19 03:49:12 base crash: no output from test machine 2025/08/19 03:49:16 runner 1 connected 2025/08/19 03:49:21 runner 3 connected 2025/08/19 03:50:02 runner 2 connected 2025/08/19 03:50:07 
new: boot error: can't ssh into the instance 2025/08/19 03:51:15 new: boot error: can't ssh into the instance 2025/08/19 03:51:48 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 03:51:48 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/19 03:51:48 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 03:51:48 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755575508.crash.log 2025/08/19 03:51:48 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 1755575508.repro.log 2025/08/19 03:52:34 base crash: no output from test machine 2025/08/19 03:52:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 8, "prog exec time": 0, "reproducing": 7, "rpc recv": 12541660788, "rpc sent": 1993197616, "signal": 308262, "smash jobs": 0, "triage jobs": 0, "vm output": 59739260, "vm restarts [base]": 83, "vm restarts [new]": 192 } 2025/08/19 03:53:23 runner 0 connected 2025/08/19 03:54:16 base crash: no output from test machine 2025/08/19 03:54:20 base crash: no output from test machine 2025/08/19 03:55:01 base crash: no output from test machine 2025/08/19 03:55:05 runner 1 connected 2025/08/19 03:55:10 runner 3 connected 2025/08/19 03:55:51 runner 2 connected 2025/08/19 03:57:34 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35376, "comps overflows": 0, "corpus": 44916, "corpus [files]": 44100, "corpus [symbols]": 2662, "cover overflows": 43152, "coverage": 313272, "distributor delayed": 60696, "distributor undelayed": 60692, "distributor violated": 979, "exec candidate": 45774, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96336, "exec total [new]": 293052, "exec triage": 141346, "executor restarts": 1147, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 316477, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45646, "no exec duration": 41282000000, "no exec requests": 333, "pending": 8, "prog exec time": 0, "reproducing": 7, "rpc recv": 12665245036, "rpc sent": 1993198736, "signal": 308262, "smash 
jobs": 0, "triage jobs": 0, "vm output": 62595092, "vm restarts [base]": 87, "vm restarts [new]": 192 } 2025/08/19 03:58:23 base crash: no output from test machine 2025/08/19 03:59:11 runner 0 connected 2025/08/19 04:00:05 base crash: no output from test machine 2025/08/19 04:00:09 base crash: no output from test machine 2025/08/19 04:00:51 base crash: no output from test machine 2025/08/19 04:00:53 new: boot error: can't ssh into the instance 2025/08/19 04:00:54 runner 1 connected 2025/08/19 04:00:59 runner 3 connected 2025/08/19 04:01:22 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:01:22 failed repro for "INFO: task hung in corrupted", err=%!s() 2025/08/19 04:01:22 "INFO: task hung in corrupted": saved crash log into 1755576082.crash.log 2025/08/19 04:01:22 "INFO: task hung in corrupted": saved repro log into 1755576082.repro.log 2025/08/19 04:01:23 runner 0 connected 2025/08/19 04:02:34 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 35371, "comps overflows": 0, "corpus": 44918, "corpus [files]": 44101, "corpus [symbols]": 2662, "cover overflows": 43247, "coverage": 313274, "distributor delayed": 60698, "distributor undelayed": 60696, "distributor violated": 979, "exec candidate": 45779, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 96924, "exec total [new]": 293681, "exec triage": 141356, "executor restarts": 1152, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 316480, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45648, "no exec duration": 226707000000, "no exec requests": 842, "pending": 8, "prog exec time": 256, "reproducing": 6, "rpc recv": 12789566760, "rpc sent": 1999684608, "signal": 308267, "smash jobs": 0, "triage jobs": 0, "vm output": 64988240, "vm restarts [base]": 90, "vm restarts [new]": 193 } 2025/08/19 04:02:39 runner 1 connected 2025/08/19 04:03:27 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 04:03:36 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:03:36 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/19 04:03:36 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 04:03:36 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755576216.crash.log 2025/08/19 04:03:36 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 1755576216.repro.log 2025/08/19 04:03:39 new: boot error: can't ssh into the instance 2025/08/19 04:04:23 runner 0 connected 2025/08/19 04:04:28 base crash: lost connection to test machine 2025/08/19 04:05:24 runner 3 connected 2025/08/19 04:06:01 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 04:06:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 04:06:37 base crash: WARNING in xfrm_state_fini 2025/08/19 04:06:50 runner 3 connected 2025/08/19 04:07:34 runner 0 connected 2025/08/19 04:07:34 STAT { "buffer too small": 0, "candidate triage jobs": 8, 
"candidates": 35329, "comps overflows": 0, "corpus": 44927, "corpus [files]": 44110, "corpus [symbols]": 2662, "cover overflows": 43938, "coverage": 313300, "distributor delayed": 60720, "distributor undelayed": 60712, "distributor violated": 982, "exec candidate": 45821, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100567, "exec total [new]": 297316, "exec triage": 141432, "executor restarts": 1166, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 316514, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45672, "no exec duration": 824067000000, "no exec requests": 2814, "pending": 7, "prog exec time": 267, "reproducing": 6, "rpc recv": 12918693152, "rpc sent": 2028732968, "signal": 308293, "smash jobs": 0, "triage jobs": 0, "vm output": 67192718, "vm restarts [base]": 93, "vm restarts [new]": 195 } 2025/08/19 04:08:36 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 04:09:26 runner 1 connected 2025/08/19 04:10:56 base: boot error: can't ssh into the instance 2025/08/19 04:11:31 new: boot error: can't ssh into the instance 2025/08/19 04:12:05 base crash: possible deadlock in ocfs2_init_acl 2025/08/19 04:12:34 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 35272, "comps overflows": 0, "corpus": 44980, "corpus [files]": 44153, "corpus [symbols]": 2665, "cover overflows": 44394, "coverage": 313396, "distributor delayed": 60720, "distributor undelayed": 60720, "distributor violated": 990, "exec candidate": 45878, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 102941, "exec total [new]": 299699, "exec triage": 141626, "executor restarts": 1173, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 316612, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45726, "no exec duration": 1583396000000, "no exec requests": 4912, "pending": 7, "prog exec time": 382, "reproducing": 6, "rpc recv": 12987659380, "rpc sent": 2046958792, "signal": 308393, "smash jobs": 0, "triage jobs": 0, "vm output": 69830066, "vm restarts [base]": 93, "vm restarts [new]": 196 } 2025/08/19 04:13:03 runner 0 connected 2025/08/19 04:15:55 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/08/19 04:16:08 new: boot error: can't ssh into the instance 2025/08/19 04:16:45 runner 0 connected 2025/08/19 04:16:57 runner 0 connected 2025/08/19 04:17:18 base crash: WARNING in xfrm_state_fini 2025/08/19 04:17:34 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 32416, "comps overflows": 0, "corpus": 45009, "corpus [files]": 44179, "corpus [symbols]": 2667, "cover overflows": 45401, "coverage": 313448, "distributor delayed": 60722, "distributor undelayed": 60722, "distributor violated": 990, "exec candidate": 48734, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, 
"exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 107917, "exec total [new]": 304697, "exec triage": 141760, "executor restarts": 1178, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 316864, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45762, "no exec duration": 2393837000000, "no exec requests": 7707, "pending": 7, "prog exec time": 189, "reproducing": 6, "rpc recv": 13087752028, "rpc sent": 2094378752, "signal": 308441, "smash jobs": 0, "triage jobs": 0, "vm output": 71796119, "vm restarts [base]": 95, "vm restarts [new]": 197 } 2025/08/19 04:18:08 runner 0 connected 2025/08/19 04:18:44 base crash: WARNING in xfrm6_tunnel_net_exit 2025/08/19 04:19:08 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 04:19:53 base crash: WARNING in xfrm_state_fini 2025/08/19 04:19:58 runner 1 connected 2025/08/19 04:20:28 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/08/19 04:20:41 runner 3 connected 2025/08/19 04:21:02 base: boot error: can't ssh into the instance 2025/08/19 04:21:17 runner 0 connected 2025/08/19 04:21:59 runner 2 connected 2025/08/19 04:22:34 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 27503, "comps overflows": 0, "corpus": 45088, "corpus [files]": 44238, "corpus [symbols]": 2673, "cover overflows": 46262, "coverage": 313600, "distributor delayed": 60804, "distributor undelayed": 60804, "distributor violated": 990, "exec candidate": 53647, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 113193, "exec total [new]": 309950, "exec triage": 142096, "executor restarts": 1189, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 317054, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45861, "no exec duration": 2711669000000, "no exec requests": 8780, "pending": 7, "prog exec time": 269, "reproducing": 6, "rpc recv": 13254679512, "rpc sent": 2136857488, "signal": 308592, "smash jobs": 0, "triage jobs": 0, "vm output": 73095520, "vm restarts [base]": 98, "vm restarts [new]": 199 } 2025/08/19 04:26:37 patched crashed: INFO: task hung in v9fs_evict_inode [need repro = true] 2025/08/19 04:26:37 scheduled a reproduction of 'INFO: task hung in v9fs_evict_inode' 2025/08/19 04:26:37 start reproducing 'INFO: task hung in v9fs_evict_inode' 2025/08/19 04:26:37 base crash: INFO: task hung in v9fs_evict_inode 2025/08/19 04:27:05 new: boot error: can't ssh into the instance 2025/08/19 04:27:17 repro finished 'WARNING: suspicious RCU usage in get_callchain_entry', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:27:17 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 04:27:17 failed repro for "WARNING: suspicious RCU usage in get_callchain_entry", err=%!s() 2025/08/19 04:27:17 reproduction of "WARNING: suspicious RCU usage in 
get_callchain_entry" aborted: it's no longer needed 2025/08/19 04:27:17 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 04:27:17 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 04:27:17 reproduction of "WARNING: suspicious RCU usage in get_callchain_entry" aborted: it's no longer needed 2025/08/19 04:27:17 "WARNING: suspicious RCU usage in get_callchain_entry": saved crash log into 1755577637.crash.log 2025/08/19 04:27:17 "WARNING: suspicious RCU usage in get_callchain_entry": saved repro log into 1755577637.repro.log 2025/08/19 04:27:25 runner 0 connected 2025/08/19 04:27:34 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 22766, "comps overflows": 0, "corpus": 45163, "corpus [files]": 44299, "corpus [symbols]": 2681, "cover overflows": 47222, "coverage": 313742, "distributor delayed": 60865, "distributor undelayed": 60865, "distributor violated": 990, "exec candidate": 58384, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 118243, "exec total [new]": 314999, "exec triage": 142408, "executor restarts": 1198, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 317218, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 15, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45949, "no exec duration": 3322510000000, "no exec requests": 11161, "pending": 2, "prog exec time": 0, "reproducing": 6, "rpc recv": 13297512560, "rpc sent": 2180433328, "signal": 308725, "smash jobs": 0, "triage jobs": 0, "vm output": 78637625, "vm restarts [base]": 98, "vm restarts [new]": 200 } 2025/08/19 04:27:37 runner 1 connected 2025/08/19 04:28:01 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:28:01 start reproducing 'KASAN: slab-use-after-free Read in __xfrm_state_lookup' 2025/08/19 04:28:01 failed repro for "KASAN: slab-use-after-free Read in __xfrm_state_lookup", err=%!s() 2025/08/19 04:28:01 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved crash log into 1755577681.crash.log 2025/08/19 04:28:01 "KASAN: slab-use-after-free Read in __xfrm_state_lookup": saved repro log into 1755577681.repro.log 2025/08/19 04:28:50 base: boot error: can't ssh into the instance 2025/08/19 04:29:46 runner 1 connected 2025/08/19 04:30:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/08/19 04:31:06 repro finished 'WARNING in io_ring_exit_work', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:31:06 failed repro for "WARNING in io_ring_exit_work", err=%!s() 2025/08/19 04:31:06 "WARNING in io_ring_exit_work": saved crash log into 1755577866.crash.log 2025/08/19 04:31:06 "WARNING in io_ring_exit_work": saved repro log into 1755577866.repro.log 2025/08/19 04:31:13 runner 0 connected 2025/08/19 04:32:04 runner 2 connected 2025/08/19 04:32:30 status reporting terminated 2025/08/19 04:32:30 bug reporting terminated 2025/08/19 04:32:30 repro finished 'INFO: task hung in __iterate_supers', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:32:40 repro finished 
'INFO: task hung in v9fs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:33:00 repro finished 'KASAN: slab-use-after-free Read in __xfrm_state_lookup', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:35:17 repro finished 'WARNING in ext4_xattr_inode_lookup_create', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:35:24 repro finished 'KASAN: slab-use-after-free Read in l2cap_unregister_user', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/08/19 04:36:43 syz-diff (base): kernel context loop terminated 2025/08/19 04:41:50 syz-diff (new): kernel context loop terminated 2025/08/19 04:41:50 diff fuzzing terminated 2025/08/19 04:41:50 fuzzing is finished 2025/08/19 04:41:50 status at the end:
Title On-Base On-Patched
INFO: task hung in __iterate_supers 2 crashes
INFO: task hung in corrupted 1 crashes
INFO: task hung in rfkill_global_led_trigger_worker 1 crashes
INFO: task hung in rtnetlink_rcv_msg 1 crashes
INFO: task hung in v9fs_evict_inode 1 crashes 1 crashes
INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes 1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup 6 crashes
KASAN: slab-use-after-free Read in l2cap_unregister_user 1 crashes
KASAN: slab-use-after-free Read in rose_timer_expiry 1 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi 4 crashes 3 crashes
KASAN: slab-use-after-free Read in xfrm_state_find 1 crashes 1 crashes
WARNING in __ww_mutex_wound 1 crashes [reproduced]
WARNING in dbAdjTree 2 crashes 4 crashes [reproduced]
WARNING in ext4_xattr_inode_lookup_create 1 crashes 3 crashes [reproduced]
WARNING in io_ring_exit_work 1 crashes
WARNING in xfrm6_tunnel_net_exit 2 crashes 1 crashes
WARNING in xfrm_state_fini 10 crashes 21 crashes
WARNING: suspicious RCU usage in get_callchain_entry 1 crashes 7 crashes [reproduced]
general protection fault in pcl818_ai_cancel 1 crashes 1 crashes
kernel BUG in jfs_evict_inode 1 crashes 1 crashes
kernel BUG in txUnlock 3 crashes 7 crashes
lost connection to test machine 2 crashes 9 crashes
no output from test machine 29 crashes
possible deadlock in attr_data_get_block 1 crashes 1 crashes
possible deadlock in ocfs2_del_inode_from_orphan 1 crashes
possible deadlock in ocfs2_init_acl 18 crashes 39 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 5 crashes 40 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 7 crashes 29 crashes
possible deadlock in ocfs2_xattr_set 3 crashes 13 crashes
unregister_netdevice: waiting for DEV to become free 1 crashes
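
For anyone post-processing a run like the one above, the periodic STAT entries are plain JSON following the timestamp and can be extracted mechanically. The sketch below (not part of syzkaller) is a minimal Go example under the following assumptions: the log is saved verbatim as run.log with one entry per line, and the file name and the choice of fields ("corpus", "coverage", "vm restarts [base]"/"[new]") are illustrative only.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Assumed input: this fuzzing log saved verbatim as run.log.
	f, err := os.Open("run.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // STAT entries are a few KB long
	for sc.Scan() {
		line := sc.Text()
		// Snapshots look like: 2025/08/19 03:02:34 STAT { "corpus": 44916, ... }
		idx := strings.Index(line, " STAT {")
		if idx < 0 {
			continue
		}
		ts := line[:idx]
		var stat map[string]json.Number
		if err := json.Unmarshal([]byte(line[idx+len(" STAT "):]), &stat); err != nil {
			continue // skip snapshots that got wrapped across lines
		}
		fmt.Printf("%s corpus=%s coverage=%s restarts base/new=%s/%s\n",
			ts, stat["corpus"], stat["coverage"],
			stat["vm restarts [base]"], stat["vm restarts [new]"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}

Run against the snapshots above, the 03:02:34 entry would come out as corpus=44916 coverage=313272 restarts base/new=49/192.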