2025/11/10 12:11:46 extracted 322917 text symbol hashes for base and 322917 for patched
2025/11/10 12:11:46 binaries are different, continuing fuzzing
2025/11/10 12:11:46 adding modified_functions to focus areas: ["__hugetlb_zap_begin" "move_hugetlb_page_tables" "remove_inode_hugepages"]
2025/11/10 12:11:46 adding directly modified files to focus areas: ["fs/hugetlbfs/inode.c" "mm/hugetlb.c"]
2025/11/10 12:11:46 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2025/11/10 12:12:45 runner 7 connected
2025/11/10 12:12:45 runner 0 connected
2025/11/10 12:12:45 runner 3 connected
2025/11/10 12:12:45 runner 1 connected
2025/11/10 12:12:45 runner 1 connected
2025/11/10 12:12:45 runner 5 connected
2025/11/10 12:12:45 runner 2 connected
2025/11/10 12:12:46 runner 2 connected
2025/11/10 12:12:46 runner 6 connected
2025/11/10 12:12:46 runner 4 connected
2025/11/10 12:12:46 runner 8 connected
2025/11/10 12:12:47 runner 0 connected
2025/11/10 12:12:52 initializing coverage information...
2025/11/10 12:12:52 executor cover filter: 0 PCs
2025/11/10 12:12:56 discovered 7611 source files, 333869 symbols
2025/11/10 12:12:56 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost
openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] 
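The fd_kvm -> fd_kvmvm -> fd_kvmcpu annotations in the KVM rows above encode KVM's real file-descriptor hierarchy: /dev/kvm yields the system fd, KVM_CREATE_VM yields a VM fd, and KVM_CREATE_VCPU on the VM fd yields a vCPU fd. A minimal Python sketch of that chain (illustrative only, not part of syzkaller; the ioctl numbers are the standard _IO(0xAE, nr) encodings from <linux/kvm.h>):

```python
import fcntl
import os

# _IO(KVMIO, nr) with KVMIO = 0xAE, per <linux/kvm.h>.
KVM_GET_API_VERSION = 0xAE00
KVM_CREATE_VM = 0xAE01
KVM_CREATE_VCPU = 0xAE41

def kvm_fd_chain():
    """Walk fd_kvm -> fd_kvmvm -> fd_kvmcpu.

    Returns (api_version, vm_fd, vcpu_fd), or None when /dev/kvm is
    absent or unusable (as on the machine in this log, where openat$kvm
    was disabled)."""
    try:
        fd_kvm = os.open("/dev/kvm", os.O_RDWR)
    except OSError:
        return None
    try:
        api_version = fcntl.ioctl(fd_kvm, KVM_GET_API_VERSION)  # 12 on stable KVM
        fd_kvmvm = fcntl.ioctl(fd_kvm, KVM_CREATE_VM, 0)        # VM fd
        fd_kvmcpu = fcntl.ioctl(fd_kvmvm, KVM_CREATE_VCPU, 0)   # vCPU 0 fd
    except OSError:
        os.close(fd_kvm)
        return None
    return api_version, fd_kvmvm, fd_kvmcpu
```

Because every vCPU-level ioctl needs a vCPU fd, disabling openat$kvm transitively disables the whole KVM table above.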
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
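The rows above all share one shape: "syscall : resource [creator creator ...]", meaning the syscall was transitively disabled because no enabled creator can produce the resource it consumes. A small parser for that row format (a hypothetical helper for post-processing such logs, not part of syzkaller):

```python
import re

# One row of the transitively-disabled table:
#   "syscall : resource [creator creator ...]"
ROW = re.compile(r"^(\S+)\s*:\s*(\S+)\s*\[([^\]]*)\]$")

def parse_row(line):
    """Split a row into (syscall, resource, [creators]); None if it
    does not match the table shape."""
    m = ROW.match(line.strip())
    if m is None:
        return None
    syscall, resource, creators = m.groups()
    return syscall, resource, creators.split()
```

For example, parse_row("write$trusty_avb : fd_trusty_avb [openat$trusty_avb]") yields the syscall, its input resource, and the single creator whose failure (here, /dev/trusty-ipc-dev0 missing) cascaded into this row.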
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/11/10 12:12:56 base: machine check complete
2025/11/10 12:12:56 coverage filter: __hugetlb_zap_begin: [__hugetlb_zap_begin]
2025/11/10 12:12:56 coverage filter: move_hugetlb_page_tables: [move_hugetlb_page_tables]
2025/11/10 12:12:56 coverage filter: remove_inode_hugepages: [remove_inode_hugepages]
2025/11/10 12:12:56 coverage filter: fs/hugetlbfs/inode.c: [fs/hugetlbfs/inode.c]
2025/11/10 12:12:56 coverage filter: mm/hugetlb.c: [mm/hugetlb.c mm/hugetlb_cgroup.c mm/hugetlb_cma.c]
2025/11/10 12:12:56 area "symbols": 224 PCs in the cover filter
2025/11/10 12:12:56 area "files": 4967 PCs in the cover filter
2025/11/10 12:12:56 area "": 0 PCs in the cover filter
2025/11/10 12:12:56 executor cover filter: 0 PCs
2025/11/10 12:13:00 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/11/10 12:13:00 new: machine check complete
2025/11/10 12:13:01 new: adding 81717 seeds
2025/11/10 12:16:49 STAT { "buffer too small": 0, "candidate triage jobs": 58, "candidates": 77002, "comps overflows": 0, "corpus": 4610, "corpus [files]": 79, "corpus [symbols]": 12, "cover overflows": 3514, "coverage": 164266, "distributor delayed": 4448, "distributor undelayed": 4447, "distributor violated": 1, "exec candidate": 4715, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7989, "exec total [new]": 21047, "exec triage": 14759, "executor restarts [base]": 59, "executor restarts [new]": 121, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 166433, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 4715, "no exec duration": 44257000000, "no exec requests": 348, "pending": 0, "prog exec time": 223, "reproducing": 0, "rpc recv": 1093752532, "rpc sent": 108690080, "signal": 161307, "smash jobs": 0, "triage jobs": 0, "vm output": 3606110, "vm restarts [base]": 3, "vm restarts [new]": 9 }
2025/11/10 12:17:19 crash "WARNING in folio_memcg" is already known
2025/11/10 12:17:19 base crash "WARNING in folio_memcg" is to be ignored
2025/11/10 12:17:19 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 12:17:30 crash "WARNING in folio_memcg" is already known
2025/11/10 12:17:30 base crash "WARNING in folio_memcg" is to be ignored
2025/11/10 12:17:30 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 12:17:38 base crash: WARNING in folio_memcg
2025/11/10 12:17:40 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 12:17:48 base crash: WARNING in folio_memcg
2025/11/10 12:18:08 runner 2 connected
2025/11/10 12:18:18 runner 1 connected
2025/11/10 12:18:26 runner 0 connected
2025/11/10 12:18:30 runner 5 connected
2025/11/10 12:18:32 patched crashed: INFO: task hung in reg_check_chans_work [need repro = true]
2025/11/10 12:18:32 scheduled a reproduction of 'INFO: task hung in reg_check_chans_work'
2025/11/10 12:18:38 runner 2 connected
2025/11/10 12:19:01 base crash: possible deadlock in run_unpack_ex
2025/11/10 12:19:24 runner 0 connected
2025/11/10 12:19:51 runner 1 connected
2025/11/10 12:19:56 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 12:20:23 crash "WARNING in io_ring_exit_work" is already known
2025/11/10 12:20:23 base crash "WARNING in io_ring_exit_work" is to be ignored
2025/11/10 12:20:23 patched crashed: WARNING in io_ring_exit_work [need repro = false]
2025/11/10 12:20:24 crash "WARNING in io_ring_exit_work" is already known
2025/11/10 12:20:24 base crash "WARNING in io_ring_exit_work" is to be ignored
2025/11/10 12:20:24 patched crashed: WARNING in io_ring_exit_work [need repro = false]
2025/11/10 12:20:44 base crash: kernel BUG in txUnlock
2025/11/10 12:20:46 runner 0 connected
2025/11/10 12:21:13 runner 4 connected
2025/11/10 12:21:13 runner 7 connected
2025/11/10 12:21:28 base crash: lost connection to test machine
2025/11/10 12:21:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:21:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:21:33 runner 0 connected
2025/11/10 12:21:42 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 12:21:42 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 12:21:44 crash "WARNING in xfrm6_tunnel_net_exit" is already known
2025/11/10 12:21:44 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored
2025/11/10 12:21:44 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/11/10 12:21:45 crash "WARNING in xfrm6_tunnel_net_exit" is already known
2025/11/10 12:21:45 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored
2025/11/10 12:21:45 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/11/10 12:21:49 STAT { "buffer too small": 0, "candidate triage jobs": 61, "candidates": 72304, "comps overflows": 0, "corpus": 9241, "corpus [files]": 130, "corpus [symbols]": 19, "cover overflows": 6932, "coverage": 201261, "distributor delayed": 10626, "distributor undelayed": 10590, "distributor violated": 5, "exec candidate": 9413, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 13725, "exec total [new]": 42627, "exec triage": 29461, "executor restarts [base]": 99, "executor restarts [new]": 196, "fault jobs": 0, "fuzzer jobs": 61, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 203208, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 9413, "no exec duration": 44305000000, "no exec requests": 349, "pending": 3, "prog exec time": 379, "reproducing": 0, "rpc recv": 2030755448, "rpc sent": 224056784, "signal": 196649, "smash jobs": 0, "triage jobs": 0, "vm output": 6366685, "vm restarts [base]": 7, "vm restarts [new]": 16 }
2025/11/10 12:21:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:21:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:22:03 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:22:03 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:22:14 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 12:22:14 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 12:22:17 runner 1 connected
2025/11/10 12:22:22 runner 6 connected
2025/11/10 12:22:32 runner 5 connected
2025/11/10 12:22:33 runner 2 connected
2025/11/10 12:22:35 runner 4 connected
2025/11/10 12:22:41 runner 3 connected
2025/11/10 12:22:53 runner 0 connected
2025/11/10 12:23:04 runner 7 connected
2025/11/10 12:23:39 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 12:23:39 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 12:23:39 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 12:23:40 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 12:23:40 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 12:23:40 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 12:23:41 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 12:23:41 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 12:23:41 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 12:24:29 runner 3 connected
2025/11/10 12:24:29 runner 2 connected
2025/11/10 12:24:30 runner 7 connected
2025/11/10 12:24:31 base crash: BUG: sleeping function called from invalid context in hook_sb_delete
2025/11/10 12:25:28 runner 0 connected
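The repeated "scheduled a reproduction of '…'" entries above are what the pending-reproduction queue is built from. A minimal post-processing sketch (a hypothetical helper, not part of syzkaller; it only assumes the log format shown in this transcript) that tallies how often each crash title was scheduled:

```python
import re
from collections import Counter

# Matches the quoted crash title in lines like:
#   2025/11/10 12:21:31 scheduled a reproduction of 'possible deadlock in unmap_vmas'
REPRO_RE = re.compile(r"scheduled a reproduction of '([^']+)'")

def repro_counts(log_text: str) -> Counter:
    """Count scheduled reproductions per crash title in raw log text."""
    return Counter(REPRO_RE.findall(log_text))

sample = (
    "2025/11/10 12:21:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'\n"
    "2025/11/10 12:21:42 scheduled a reproduction of 'possible deadlock in unmap_vmas'\n"
    "2025/11/10 12:21:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'\n"
)

print(repro_counts(sample))
# Counter({'possible deadlock in hugetlb_change_protection': 2,
#          'possible deadlock in unmap_vmas': 1})
```

Run over the full transcript, this makes the pattern in the log obvious: 'possible deadlock in hugetlb_change_protection' dominates the queue, consistent with the hugetlb-focused patch under test.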
2025/11/10 12:26:43 base crash: BUG: sleeping function called from invalid context in hook_sb_delete
2025/11/10 12:26:49 STAT { "buffer too small": 0, "candidate triage jobs": 56, "candidates": 67564, "comps overflows": 0, "corpus": 13934, "corpus [files]": 169, "corpus [symbols]": 24, "cover overflows": 10270, "coverage": 225289, "distributor delayed": 16566, "distributor undelayed": 16566, "distributor violated": 145, "exec candidate": 14153, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24672, "exec total [new]": 65175, "exec triage": 44068, "executor restarts [base]": 118, "executor restarts [new]": 257, "fault jobs": 0, "fuzzer jobs": 56, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 227074, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14153, "no exec duration": 44316000000, "no exec requests": 350, "pending": 6, "prog exec time": 241, "reproducing": 0, "rpc recv": 3098244200, "rpc sent": 355533864, "signal": 220296, "smash jobs": 0, "triage jobs": 0, "vm output": 8572425, "vm restarts [base]": 9, "vm restarts [new]": 26 }
2025/11/10 12:27:23 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:27:23 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:27:33 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:27:33 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:27:33 runner 1 connected
2025/11/10 12:27:44 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:27:44 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:27:54 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:27:54 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:28:05 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:28:05 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:28:13 runner 2 connected
2025/11/10 12:28:22 runner 8 connected
2025/11/10 12:28:33 crash "INFO: task hung in corrupted" is already known
2025/11/10 12:28:33 base crash "INFO: task hung in corrupted" is to be ignored
2025/11/10 12:28:33 patched crashed: INFO: task hung in corrupted [need repro = false]
2025/11/10 12:28:35 runner 6 connected
2025/11/10 12:28:43 runner 3 connected
2025/11/10 12:28:55 runner 7 connected
2025/11/10 12:29:23 runner 0 connected
2025/11/10 12:30:59 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 12:31:12 base crash: WARNING in xfrm6_tunnel_net_exit
2025/11/10 12:31:49 runner 3 connected
2025/11/10 12:31:49 STAT { "buffer too small": 0, "candidate triage jobs": 50, "candidates": 62749, "comps overflows": 0, "corpus": 18699, "corpus [files]": 204, "corpus [symbols]": 29, "cover overflows": 13244, "coverage": 242409, "distributor delayed": 22013, "distributor undelayed": 22013, "distributor violated": 268, "exec candidate": 18968, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 34458, "exec total [new]": 87606, "exec triage": 58772, "executor restarts [base]": 128, "executor restarts [new]": 318, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 244443, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 18968, "no exec duration": 44554000000, "no exec requests": 355, "pending": 11, "prog exec time": 291, "reproducing": 0, "rpc recv": 4062020660, "rpc sent": 490172704, "signal": 237951, "smash jobs": 0, "triage jobs": 0, "vm output": 12000850, "vm restarts [base]": 10, "vm restarts [new]": 33 }
2025/11/10 12:31:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:31:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:32:02 runner 2 connected
2025/11/10 12:32:11 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:32:11 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:32:21 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:32:21 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:32:26 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 12:32:26 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 12:32:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:32:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:32:48 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:32:48 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:32:51 runner 2 connected
2025/11/10 12:32:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:32:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:33:01 runner 5 connected
2025/11/10 12:33:10 runner 4 connected
2025/11/10 12:33:15 runner 3 connected
2025/11/10 12:33:27 runner 0 connected
2025/11/10 12:33:38 runner 6 connected
2025/11/10 12:33:48 runner 8 connected
2025/11/10 12:35:31 crash "WARNING in xfrm_state_fini" is already known
2025/11/10 12:35:31 base crash "WARNING in xfrm_state_fini" is to be ignored
2025/11/10 12:35:31 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 12:36:29 runner 3 connected
2025/11/10 12:36:49 STAT { "buffer too small": 0, "candidate triage jobs": 53, "candidates": 57738, "comps overflows": 0, "corpus": 23635, "corpus [files]": 241, "corpus [symbols]": 34, "cover overflows": 16552, "coverage": 257215, "distributor delayed": 27394, "distributor undelayed": 27392, "distributor violated": 334, "exec candidate": 23979, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 44736, "exec total [new]": 112629, "exec triage": 74102, "executor restarts [base]": 149, "executor restarts [new]": 396, "fault jobs": 0, "fuzzer jobs": 53, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 259320, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 23979, "no exec duration": 44681000000, "no exec requests": 360, "pending": 18, "prog exec time": 309, "reproducing": 0, "rpc recv": 5124774020, "rpc sent": 638395192, "signal": 252738, "smash jobs": 0, "triage jobs": 0, "vm output": 15346951, "vm restarts [base]": 11, "vm restarts [new]": 41 }
2025/11/10 12:38:05 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:38:05 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:38:16 patched crashed: possible deadlock in hugetlb_change_protection
[need repro = true] 2025/11/10 12:38:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:38:26 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:38:26 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:38:55 runner 5 connected 2025/11/10 12:39:05 runner 1 connected 2025/11/10 12:39:16 runner 4 connected 2025/11/10 12:39:49 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:39:49 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:39:59 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:39:59 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:40:38 runner 8 connected 2025/11/10 12:40:44 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 12:40:50 runner 4 connected 2025/11/10 12:41:35 runner 2 connected 2025/11/10 12:41:49 STAT { "buffer too small": 0, "candidate triage jobs": 47, "candidates": 52104, "comps overflows": 0, "corpus": 29193, "corpus [files]": 270, "corpus [symbols]": 39, "cover overflows": 20291, "coverage": 271236, "distributor delayed": 33114, "distributor undelayed": 33114, "distributor violated": 334, "exec candidate": 29613, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 56199, "exec total [new]": 143088, "exec triage": 91509, "executor restarts [base]": 159, "executor restarts [new]": 447, "fault jobs": 0, "fuzzer jobs": 47, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 273378, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, 
"modules [new]": 1, "new inputs": 29613, "no exec duration": 44838000000, "no exec requests": 368, "pending": 23, "prog exec time": 233, "reproducing": 0, "rpc recv": 6096392732, "rpc sent": 811897832, "signal": 266483, "smash jobs": 0, "triage jobs": 0, "vm output": 18904049, "vm restarts [base]": 11, "vm restarts [new]": 47 } 2025/11/10 12:42:21 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/10 12:43:13 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 12:43:14 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/10 12:43:19 runner 4 connected 2025/11/10 12:43:24 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 12:43:34 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 12:43:44 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 12:43:55 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 12:44:02 runner 1 connected 2025/11/10 12:44:03 runner 0 connected 2025/11/10 12:44:06 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 12:44:08 base crash: WARNING in folio_memcg 2025/11/10 12:44:13 runner 5 connected 2025/11/10 12:44:25 runner 2 connected 2025/11/10 12:44:34 runner 7 connected 2025/11/10 12:44:44 runner 3 connected 2025/11/10 12:44:55 runner 4 connected 2025/11/10 12:44:56 runner 2 connected 2025/11/10 12:45:04 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:45:04 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:45:14 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:45:14 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:45:54 runner 7 connected 2025/11/10 12:46:04 runner 2 connected 2025/11/10 12:46:25 base crash: WARNING in xfrm_state_fini 2025/11/10 12:46:49 STAT { "buffer too small": 0, "candidate triage jobs": 52, "candidates": 47664, "comps 
overflows": 0, "corpus": 33551, "corpus [files]": 284, "corpus [symbols]": 40, "cover overflows": 23300, "coverage": 281690, "distributor delayed": 38372, "distributor undelayed": 38372, "distributor violated": 422, "exec candidate": 34053, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 68105, "exec total [new]": 167665, "exec triage": 105110, "executor restarts [base]": 167, "executor restarts [new]": 516, "fault jobs": 0, "fuzzer jobs": 52, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 283978, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 34053, "no exec duration": 50272000000, "no exec requests": 380, "pending": 25, "prog exec time": 256, "reproducing": 0, "rpc recv": 7120839596, "rpc sent": 972130560, "signal": 276771, "smash jobs": 0, "triage jobs": 0, "vm output": 22063760, "vm restarts [base]": 12, "vm restarts [new]": 57 } 2025/11/10 12:46:56 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:46:56 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:47:06 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:47:06 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:47:10 base crash: lost connection to test machine 2025/11/10 12:47:15 runner 2 connected 2025/11/10 12:47:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:47:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:47:28 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:47:28 scheduled 
a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:47:31 base crash: lost connection to test machine 2025/11/10 12:47:39 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:47:39 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:47:46 runner 5 connected 2025/11/10 12:47:56 runner 6 connected 2025/11/10 12:47:59 runner 1 connected 2025/11/10 12:48:08 runner 0 connected 2025/11/10 12:48:18 runner 2 connected 2025/11/10 12:48:19 runner 0 connected 2025/11/10 12:48:29 runner 3 connected 2025/11/10 12:48:32 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 12:49:21 runner 8 connected 2025/11/10 12:49:23 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:49:23 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:49:32 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 12:49:34 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:49:34 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:50:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:50:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:50:13 runner 0 connected 2025/11/10 12:50:19 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:50:19 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:50:23 runner 3 connected 2025/11/10 12:50:24 runner 1 connected 2025/11/10 12:50:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:50:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 12:50:53 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 12:50:53 scheduled a reproduction of 'possible 
deadlock in hugetlb_change_protection' 2025/11/10 12:50:59 runner 6 connected 2025/11/10 12:51:09 runner 5 connected 2025/11/10 12:51:18 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:51:18 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:51:28 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 12:51:28 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 12:51:32 runner 4 connected 2025/11/10 12:51:36 base crash: WARNING in xfrm_state_fini 2025/11/10 12:51:42 runner 7 connected 2025/11/10 12:51:49 STAT { "buffer too small": 0, "candidate triage jobs": 31, "candidates": 43896, "comps overflows": 0, "corpus": 37302, "corpus [files]": 301, "corpus [symbols]": 42, "cover overflows": 25827, "coverage": 289558, "distributor delayed": 43349, "distributor undelayed": 43349, "distributor violated": 427, "exec candidate": 37821, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 77114, "exec total [new]": 189097, "exec triage": 116614, "executor restarts [base]": 182, "executor restarts [new]": 588, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 291867, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 37821, "no exec duration": 50468000000, "no exec requests": 384, "pending": 38, "prog exec time": 198, "reproducing": 0, "rpc recv": 8182167168, "rpc sent": 1105696288, "signal": 284558, "smash jobs": 0, "triage jobs": 0, "vm output": 25085346, "vm restarts [base]": 15, "vm restarts [new]": 70 } 2025/11/10 12:51:54 patched crashed: lost connection to test machine [need repro = false] 
2025/11/10 12:52:08 runner 0 connected
2025/11/10 12:52:17 runner 1 connected
2025/11/10 12:52:17 base crash: lost connection to test machine
2025/11/10 12:52:26 runner 0 connected
2025/11/10 12:52:45 runner 2 connected
2025/11/10 12:53:06 runner 1 connected
2025/11/10 12:53:58 crash "INFO: task hung in __iterate_supers" is already known
2025/11/10 12:53:58 base crash "INFO: task hung in __iterate_supers" is to be ignored
2025/11/10 12:53:58 patched crashed: INFO: task hung in __iterate_supers [need repro = false]
2025/11/10 12:54:55 runner 8 connected
2025/11/10 12:55:08 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 12:55:20 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:55:20 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:55:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:55:30 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:55:31 crash "general protection fault in pcl818_ai_cancel" is already known
2025/11/10 12:55:31 base crash "general protection fault in pcl818_ai_cancel" is to be ignored
2025/11/10 12:55:31 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/10 12:55:40 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:55:40 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:55:42 crash "general protection fault in pcl818_ai_cancel" is already known
2025/11/10 12:55:42 base crash "general protection fault in pcl818_ai_cancel" is to be ignored
2025/11/10 12:55:42 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/10 12:55:53 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 12:55:53 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 12:55:55 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 12:55:57 base crash: WARNING in folio_memcg
2025/11/10 12:55:59 runner 7 connected
2025/11/10 12:56:09 runner 0 connected
2025/11/10 12:56:19 runner 2 connected
2025/11/10 12:56:20 runner 6 connected
2025/11/10 12:56:28 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 12:56:29 runner 1 connected
2025/11/10 12:56:30 runner 4 connected
2025/11/10 12:56:32 runner 8 connected
2025/11/10 12:56:42 runner 3 connected
2025/11/10 12:56:45 runner 5 connected
2025/11/10 12:56:47 runner 0 connected
2025/11/10 12:56:49 STAT { "buffer too small": 0, "candidate triage jobs": 35, "candidates": 40222, "comps overflows": 0, "corpus": 40911, "corpus [files]": 323, "corpus [symbols]": 44, "cover overflows": 28008, "coverage": 296595, "distributor delayed": 46881, "distributor undelayed": 46881, "distributor violated": 427, "exec candidate": 41495, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 85112, "exec total [new]": 210029, "exec triage": 127771, "executor restarts [base]": 206, "executor restarts [new]": 665, "fault jobs": 0, "fuzzer jobs": 35, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 298978, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41495, "no exec duration": 50596000000, "no exec requests": 385, "pending": 42, "prog exec time": 328, "reproducing": 0, "rpc recv": 9129013756, "rpc sent": 1241021424, "signal": 291516, "smash jobs": 0, "triage jobs": 0, "vm output": 28602984, "vm restarts [base]": 18, "vm restarts [new]": 83 }
2025/11/10 12:57:03 base crash: INFO: task hung in __iterate_supers
2025/11/10 12:57:19 runner 7 connected
2025/11/10 12:57:43 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 12:57:47 patched crashed: possible deadlock in run_unpack_ex [need repro = false]
2025/11/10 12:57:52 runner 1 connected
2025/11/10 12:57:58 patched crashed: possible deadlock in run_unpack_ex [need repro = false]
2025/11/10 12:58:33 crash "possible deadlock in ext4_destroy_inline_data" is already known
2025/11/10 12:58:33 base crash "possible deadlock in ext4_destroy_inline_data" is to be ignored
2025/11/10 12:58:33 patched crashed: possible deadlock in ext4_destroy_inline_data [need repro = false]
2025/11/10 12:58:33 runner 8 connected
2025/11/10 12:58:33 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 12:58:33 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 12:58:35 runner 5 connected
2025/11/10 12:58:42 base crash: INFO: task hung in __iterate_supers
2025/11/10 12:58:48 runner 3 connected
2025/11/10 12:58:53 base crash: WARNING in xfrm_state_fini
2025/11/10 12:59:07 crash "possible deadlock in ext4_evict_inode" is already known
2025/11/10 12:59:07 base crash "possible deadlock in ext4_evict_inode" is to be ignored
2025/11/10 12:59:07 patched crashed: possible deadlock in ext4_evict_inode [need repro = false]
2025/11/10 12:59:21 runner 1 connected
2025/11/10 12:59:22 runner 6 connected
2025/11/10 12:59:25 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 12:59:32 runner 2 connected
2025/11/10 12:59:44 runner 0 connected
2025/11/10 12:59:52 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 12:59:52 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 12:59:55 runner 7 connected
2025/11/10 13:00:15 runner 3 connected
2025/11/10 13:00:28 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:00:28 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:00:32 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:00:32 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:00:38 base crash: possible deadlock in ext4_evict_inode
2025/11/10 13:00:42 runner 1 connected
2025/11/10 13:01:17 runner 4 connected
2025/11/10 13:01:21 runner 7 connected
2025/11/10 13:01:27 runner 1 connected
2025/11/10 13:01:49 STAT { "buffer too small": 0, "candidate triage jobs": 22, "candidates": 38258, "comps overflows": 0, "corpus": 42840, "corpus [files]": 326, "corpus [symbols]": 45, "cover overflows": 30762, "coverage": 300765, "distributor delayed": 49204, "distributor undelayed": 49203, "distributor violated": 430, "exec candidate": 43459, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 14, "exec seeds": 0, "exec smash": 0, "exec total [base]": 91999, "exec total [new]": 230666, "exec triage": 133888, "executor restarts [base]": 232, "executor restarts [new]": 742, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 303155, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43459, "no exec duration": 50596000000, "no exec requests": 385, "pending": 46, "prog exec time": 305, "reproducing": 0, "rpc recv": 10087355124, "rpc sent": 1426336728, "signal": 295706, "smash jobs": 0, "triage jobs": 0, "vm output": 32326590, "vm restarts [base]": 22, "vm restarts [new]": 94 }
2025/11/10 13:02:24 base crash: possible deadlock in mark_as_free_ex
2025/11/10 13:02:33 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:02:33 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:03:12 base crash: possible deadlock in run_unpack_ex
2025/11/10 13:03:13 runner 2 connected
2025/11/10 13:03:20 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:03:20 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:03:22 runner 2 connected
2025/11/10 13:03:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:03:30 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:03:40 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:03:40 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:04:02 runner 1 connected
2025/11/10 13:04:11 runner 6 connected
2025/11/10 13:04:19 runner 4 connected
2025/11/10 13:04:30 runner 5 connected
2025/11/10 13:04:59 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:05:04 base crash: WARNING in folio_memcg
2025/11/10 13:05:19 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:05:19 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:05:49 runner 4 connected
2025/11/10 13:05:54 runner 0 connected
2025/11/10 13:06:07 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:06:07 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:06:15 runner 1 connected
2025/11/10 13:06:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:06:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:06:49 STAT { "buffer too small": 0, "candidate triage jobs": 17, "candidates": 36815, "comps overflows": 0, "corpus": 44223, "corpus [files]": 335, "corpus [symbols]": 45, "cover overflows": 34372, "coverage": 303455, "distributor delayed": 50764, "distributor undelayed": 50764, "distributor violated": 434, "exec candidate": 44902, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 98020, "exec total [new]": 253428, "exec triage": 138434, "executor restarts [base]": 252, "executor restarts [new]": 805, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 306003, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44902, "no exec duration": 50667000000, "no exec requests": 388, "pending": 53, "prog exec time": 280, "reproducing": 0, "rpc recv": 10675791916, "rpc sent": 1567670832, "signal": 298368, "smash jobs": 0, "triage jobs": 0, "vm output": 35866870, "vm restarts [base]": 25, "vm restarts [new]": 100 }
2025/11/10 13:06:55 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:06:55 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:06:57 runner 2 connected
2025/11/10 13:07:05 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:07:05 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:07:12 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:07:12 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:07:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:07:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:07:26 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:07:26 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:07:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:07:30 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:07:34 runner 4 connected
2025/11/10 13:07:44 runner 7 connected
2025/11/10 13:07:56 runner 3 connected
2025/11/10 13:08:02 runner 0 connected
2025/11/10 13:08:06 runner 6 connected
2025/11/10 13:08:17 runner 5 connected
2025/11/10 13:08:22 runner 8 connected
2025/11/10 13:08:59 base crash: WARNING in xfrm_state_fini
2025/11/10 13:09:19 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:09:19 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:09:19 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:09:19 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:09:30 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:09:45 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:09:50 runner 0 connected
2025/11/10 13:09:55 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/11/10 13:09:55 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:10:06 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:10:06 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/11/10 13:10:07 runner 0 connected
2025/11/10 13:10:08 runner 6 connected
2025/11/10 13:10:09 base crash: WARNING in folio_memcg
2025/11/10 13:10:17 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:10:19 base crash: WARNING in folio_memcg
2025/11/10 13:10:19 runner 8 connected
2025/11/10 13:10:28 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/11/10 13:10:29 base crash: WARNING in folio_memcg
2025/11/10 13:10:31 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:10:34 runner 2 connected
2025/11/10 13:10:41 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/11/10 13:10:43 runner 3 connected
2025/11/10 13:10:43 runner 1 connected
2025/11/10 13:10:55 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:10:55 runner 4 connected
2025/11/10 13:10:55 runner 5 connected
2025/11/10 13:10:58 runner 1 connected
2025/11/10 13:11:06 runner 7 connected
2025/11/10 13:11:08 runner 2 connected
2025/11/10 13:11:17 runner 6 connected
2025/11/10 13:11:17 runner 0 connected
2025/11/10 13:11:19 runner 0 connected
2025/11/10 13:11:23 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:11:23 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:11:24 runner 8 connected
2025/11/10 13:11:34 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:11:34 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:11:44 runner 2 connected
2025/11/10 13:11:47 base crash: kernel BUG in txUnlock
2025/11/10 13:11:49 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 36165, "comps overflows": 0, "corpus": 44814, "corpus [files]": 337, "corpus [symbols]": 45, "cover overflows": 36448, "coverage": 304714, "distributor delayed": 51660, "distributor undelayed": 51660, "distributor violated": 436, "exec candidate": 45552, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 104872, "exec total [new]": 267320, "exec triage": 140429, "executor restarts [base]": 282, "executor restarts [new]": 899, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 307370, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45531, "no exec duration": 50672000000, "no exec requests": 389, "pending": 63, "prog exec time": 245, "reproducing": 0, "rpc recv": 11647771840, "rpc sent": 1708526808, "signal": 299642, "smash jobs": 0, "triage jobs": 0, "vm output": 38614069, "vm restarts [base]": 29, "vm restarts [new]": 121 }
2025/11/10 13:12:05 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 13:12:12 runner 4 connected
2025/11/10 13:12:24 runner 5 connected
2025/11/10 13:12:31 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:12:31 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:12:39 runner 1 connected
2025/11/10 13:12:54 runner 6 connected
2025/11/10 13:12:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:12:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:13:01 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:13:01 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:13:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:13:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:13:11 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:13:11 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:13:21 runner 1 connected
2025/11/10 13:13:22 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:13:22 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:13:37 patched crashed: KASAN: slab-use-after-free Read in jfs_lazycommit [need repro = true]
2025/11/10 13:13:37 scheduled a reproduction of 'KASAN: slab-use-after-free Read in jfs_lazycommit'
2025/11/10 13:13:47 runner 0 connected
2025/11/10 13:13:49 runner 4 connected
2025/11/10 13:13:58 runner 8 connected
2025/11/10 13:14:00 runner 3 connected
2025/11/10 13:14:13 runner 2 connected
2025/11/10 13:14:28 runner 5 connected
2025/11/10 13:14:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:14:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:15:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:15:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:15:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:15:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:15:47 runner 1 connected
2025/11/10 13:15:56 runner 8 connected
2025/11/10 13:16:32 runner 7 connected
2025/11/10 13:16:37 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:16:49 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 35377, "comps overflows": 0, "corpus": 45515, "corpus [files]": 338, "corpus [symbols]": 45, "cover overflows": 40176, "coverage": 306012, "distributor delayed": 52549, "distributor undelayed": 52549, "distributor violated": 448, "exec candidate": 46340, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 116544, "exec total [new]": 289309, "exec triage": 142783, "executor restarts [base]": 302, "executor restarts [new]": 980, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 308735, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46275, "no exec duration": 50786000000, "no exec requests": 392, "pending": 73, "prog exec time": 153, "reproducing": 0, "rpc recv": 12444527900, "rpc sent": 1879993504, "signal": 300925, "smash jobs": 0, "triage jobs": 0, "vm output": 41545033, "vm restarts [base]": 30, "vm restarts [new]": 134 }
2025/11/10 13:16:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:16:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:17:28 runner 2 connected
2025/11/10 13:17:33 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:17:33 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:17:42 runner 6 connected
2025/11/10 13:17:44 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:17:44 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:18:14 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:18:14 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:18:23 runner 3 connected
2025/11/10 13:18:32 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:18:32 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:18:39 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:18:39 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:18:40 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:18:40 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:18:41 runner 5 connected
2025/11/10 13:19:03 runner 4 connected
2025/11/10 13:19:17 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 13:19:20 runner 7 connected
2025/11/10 13:19:29 runner 6 connected
2025/11/10 13:19:29 runner 1 connected
2025/11/10 13:19:48 base crash: lost connection to test machine
2025/11/10 13:20:06 runner 8 connected
2025/11/10 13:20:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:20:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:20:17 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 13:20:20 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 13:20:20 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 13:20:37 runner 1 connected
2025/11/10 13:20:58 runner 4 connected
2025/11/10 13:21:08 runner 2 connected
2025/11/10 13:21:08 runner 5 connected
2025/11/10 13:21:10 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 13:21:20 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 13:21:49 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 27100, "comps overflows": 0, "corpus": 45807, "corpus [files]": 339, "corpus [symbols]": 45, "cover overflows": 43683, "coverage": 306608, "distributor delayed": 52976, "distributor undelayed": 52976, "distributor violated": 448, "exec candidate": 54617, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec
inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 130216, "exec total [new]": 311138, "exec triage": 143903, "executor restarts [base]": 312, "executor restarts [new]": 1056, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 309445, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46610, "no exec duration": 51152000000, "no exec requests": 395, "pending": 82, "prog exec time": 212, "reproducing": 0, "rpc recv": 13144579548, "rpc sent": 2042635848, "signal": 301502, "smash jobs": 0, "triage jobs": 0, "vm output": 44648015, "vm restarts [base]": 31, "vm restarts [new]": 146 } 2025/11/10 13:21:59 runner 0 connected 2025/11/10 13:22:10 runner 3 connected 2025/11/10 13:24:49 triaged 90.8% of the corpus 2025/11/10 13:24:49 starting bug reproductions 2025/11/10 13:24:49 starting bug reproductions (max 6 VMs, 4 repros) 2025/11/10 13:24:49 start reproducing 'INFO: task hung in reg_check_chans_work' 2025/11/10 13:24:49 start reproducing 'possible deadlock in hugetlb_change_protection' 2025/11/10 13:24:49 start reproducing 'possible deadlock in unmap_vmas' 2025/11/10 13:24:49 start reproducing 'KASAN: slab-use-after-free Read in jfs_lazycommit' 2025/11/10 13:25:39 base crash: INFO: task hung in sync_bdevs 2025/11/10 13:26:02 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:26:30 runner 2 connected 2025/11/10 13:26:30 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:26:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 1905, "comps overflows": 0, "corpus": 45993, "corpus [files]": 339, "corpus [symbols]": 45, "cover overflows": 48362, "coverage": 306963, "distributor delayed": 53214, "distributor undelayed": 53214, "distributor violated": 453, "exec candidate": 79812, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 23, "exec seeds": 0, "exec smash": 0, "exec total [base]": 142633, "exec total [new]": 337132, "exec triage": 144696, "executor restarts [base]": 326, "executor restarts [new]": 1090, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 309861, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46835, "no exec duration": 51165000000, "no exec requests": 396, "pending": 78, "prog exec time": 175, "reproducing": 4, "rpc recv": 13499903268, "rpc sent": 2178836344, "signal": 301852, "smash jobs": 0, "triage jobs": 0, "vm output": 47001292, "vm restarts [base]": 32, "vm restarts [new]": 148 } 2025/11/10 13:27:02 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 13:27:26 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:27:52 runner 8 connected 2025/11/10 13:28:52 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/11/10 13:28:52 base crash "possible deadlock in 
ocfs2_try_remove_refcount_tree" is to be ignored 2025/11/10 13:28:52 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/10 13:28:53 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:29:40 runner 7 connected 2025/11/10 13:29:51 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/10 13:29:56 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/10 13:30:02 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:30:15 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/10 13:30:35 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/10 13:30:35 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/10 13:30:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 13:30:38 runner 2 connected 2025/11/10 13:30:46 runner 1 connected 2025/11/10 13:30:49 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/10 13:31:03 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 13:31:04 runner 0 connected 2025/11/10 13:31:23 runner 7 connected 2025/11/10 13:31:29 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 13:31:33 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:31:39 runner 6 connected 
2025/11/10 13:31:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 14, "corpus": 46030, "corpus [files]": 342, "corpus [symbols]": 45, "cover overflows": 49604, "coverage": 307223, "distributor delayed": 53332, "distributor undelayed": 53332, "distributor violated": 455, "exec candidate": 81717, "exec collide": 534, "exec fuzz": 985, "exec gen": 65, "exec hints": 321, "exec inject": 0, "exec minimize": 579, "exec retries": 23, "exec seeds": 102, "exec smash": 612, "exec total [base]": 150599, "exec total [new]": 342452, "exec triage": 144911, "executor restarts [base]": 347, "executor restarts [new]": 1118, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 12, "max signal": 310194, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 322, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46907, "no exec duration": 55796000000, "no exec requests": 407, "pending": 78, "prog exec time": 385, "reproducing": 4, "rpc recv": 13943922384, "rpc sent": 2260697456, "signal": 302063, "smash jobs": 17, "triage jobs": 5, "vm output": 48954010, "vm restarts [base]": 35, "vm restarts [new]": 152 }
2025/11/10 13:31:51 runner 2 connected
2025/11/10 13:31:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:31:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:32:07 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:32:07 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:32:09 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:32:19 runner 8 connected
2025/11/10 13:32:38 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:32:38 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:32:45 runner 7 connected
2025/11/10 13:32:48 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:32:48 repro finished 'possible deadlock in unmap_vmas', repro=true crepro=false desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 13:32:48 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 7.97 minutes
2025/11/10 13:32:48 "possible deadlock in unmap_vmas": saved crash log into 1762781568.crash.log
2025/11/10 13:32:48 "possible deadlock in unmap_vmas": saved repro log into 1762781568.repro.log
2025/11/10 13:32:48 start reproducing 'possible deadlock in unmap_vmas'
2025/11/10 13:32:56 runner 6 connected
2025/11/10 13:33:04 patched crashed: possible deadlock in unmap_vmas [need repro = false]
2025/11/10 13:33:23 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:33:27 runner 8 connected
2025/11/10 13:33:46 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:33:46 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:33:55 runner 7 connected
2025/11/10 13:34:08 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:34:36 runner 6 connected
2025/11/10 13:34:38 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:34:38 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:34:41 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:34:42 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:35:23 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 13:35:26 runner 8 connected
2025/11/10 13:36:12 runner 7 connected
2025/11/10 13:36:16 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:36:32 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:36:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 29, "corpus": 46056, "corpus [files]": 342, "corpus [symbols]": 45, "cover overflows": 50173, "coverage": 307251, "distributor delayed": 53400, "distributor undelayed": 53400, "distributor violated": 455, "exec candidate": 81717, "exec collide": 982, "exec fuzz": 1725, "exec gen": 104, "exec hints": 913, "exec inject": 0, "exec minimize": 1018, "exec retries": 23, "exec seeds": 181, "exec smash": 1168, "exec total [base]": 154765, "exec total [new]": 345457, "exec triage": 145016, "executor restarts [base]": 356, "executor restarts [new]": 1148, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 16, "max signal": 310244, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 574, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46948, "no exec duration": 329203000000, "no exec requests": 1188, "pending": 82, "prog exec time": 368, "reproducing": 4, "rpc recv": 14415544184, "rpc sent": 2331904232, "signal": 302091, "smash jobs": 17, "triage jobs": 4, "vm output": 50564809, "vm restarts [base]": 36, "vm restarts [new]": 160 }
2025/11/10 13:37:13 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 13:37:13 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:37:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:37:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:38:01 runner 2 connected
2025/11/10 13:38:25 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:38:25 patched-only: possible deadlock in unmap_vmas
2025/11/10 13:38:25 scheduled a reproduction of 'possible deadlock in unmap_vmas (full)'
2025/11/10 13:38:48 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:38:50 runner 8 connected
2025/11/10 13:39:14 runner 0 connected
2025/11/10 13:39:31 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:40:05 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:40:05 repro finished 'possible deadlock in unmap_vmas', repro=true crepro=false desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 13:40:05 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 7.28 minutes
2025/11/10 13:40:05 start reproducing 'possible deadlock in unmap_vmas (full)'
2025/11/10 13:40:05 "possible deadlock in unmap_vmas": saved crash log into 1762782005.crash.log
2025/11/10 13:40:05 "possible deadlock in unmap_vmas": saved repro log into 1762782005.repro.log
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:05 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed
2025/11/10 13:40:15 crash "possible deadlock in ext4_destroy_inline_data" is already known
2025/11/10 13:40:15 base crash "possible deadlock in ext4_destroy_inline_data" is to be ignored
2025/11/10 13:40:15 patched crashed: possible deadlock in ext4_destroy_inline_data [need repro = false]
2025/11/10 13:40:47 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:41:06 runner 8 connected
2025/11/10 13:41:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 37, "corpus": 46090, "corpus [files]": 344, "corpus [symbols]": 46, "cover overflows": 51099, "coverage": 307352, "distributor delayed": 53501, "distributor undelayed": 53501, "distributor violated": 455, "exec candidate": 81717, "exec collide": 1719, "exec fuzz": 3128, "exec gen": 177, "exec hints": 2109, "exec inject": 0, "exec minimize": 1524, "exec retries": 24, "exec seeds": 282, "exec smash": 2082, "exec total [base]": 159439, "exec total [new]": 350574, "exec triage": 145200, "executor restarts [base]": 371, "executor restarts [new]": 1175, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 18, "max signal": 310413, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 881, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47016, "no exec duration": 336915000000, "no exec requests": 1205, "pending": 61, "prog exec time": 484, "reproducing": 4, "rpc recv": 14783088168, "rpc sent": 2428745928, "signal": 302152, "smash jobs": 12, "triage jobs": 2, "vm output": 55596334, "vm restarts [base]": 38, "vm restarts [new]": 162 }
2025/11/10 13:41:58 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:42:35 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:42:35 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:43:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:43:16 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known
2025/11/10 13:43:16 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored
2025/11/10 13:43:16 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/11/10 13:43:26 runner 6 connected
2025/11/10 13:43:50 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:44:01 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:44:06 runner 7 connected
2025/11/10 13:44:19 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:44:22 base crash: INFO: task hung in jfs_commit_inode
2025/11/10 13:45:03 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:45:03 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:45:05 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:45:07 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:45:07 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:45:10 runner 2 connected
2025/11/10 13:45:12 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 13:45:24 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:45:43 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:45:43 patched-only: possible deadlock in unmap_vmas
2025/11/10 13:45:43 scheduled a reproduction of 'possible deadlock in unmap_vmas (full)'
2025/11/10 13:45:47 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:45:52 runner 8 connected
2025/11/10 13:45:57 runner 7 connected
2025/11/10 13:46:03 runner 1 connected
2025/11/10 13:46:03 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:46:27 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:46:33 runner 0 connected
2025/11/10 13:46:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 42, "corpus": 46108, "corpus [files]": 344, "corpus [symbols]": 46, "cover overflows": 52107, "coverage": 307380, "distributor delayed": 53550, "distributor undelayed": 53550, "distributor violated": 455, "exec candidate": 81717, "exec collide": 2506, "exec fuzz": 4620, "exec gen": 250, "exec hints": 3772, "exec inject": 0, "exec minimize": 1865, "exec retries": 24, "exec seeds": 334, "exec smash": 2572, "exec total [base]": 162995, "exec total [new]": 355565, "exec triage": 145292, "executor restarts [base]": 393, "executor restarts [new]": 1208, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 9, "max signal": 310883, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1107, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47050, "no exec duration": 340399000000, "no exec requests": 1209, "pending": 65, "prog exec time": 248, "reproducing": 4, "rpc recv": 15147444572, "rpc sent": 2512262656, "signal": 302180, "smash jobs": 2, "triage jobs": 3, "vm output": 57896167, "vm restarts [base]": 41, "vm restarts [new]": 166 }
2025/11/10 13:46:50 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:47:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:47:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:47:21 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:47:25 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:47:25 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:47:26 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:47:26 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:47:49 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:48:07 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:48:07 runner 8 connected
2025/11/10 13:48:14 runner 7 connected
2025/11/10 13:48:17 runner 6 connected
2025/11/10 13:48:28 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:48:28 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:48:38 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:48:38 repro finished 'possible deadlock in unmap_vmas (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 13:48:38 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 8.55 minutes
2025/11/10 13:48:38 start reproducing 'possible deadlock in unmap_vmas (full)'
2025/11/10 13:48:38 "possible deadlock in unmap_vmas": saved crash log into 1762782518.crash.log
2025/11/10 13:48:38 "possible deadlock in unmap_vmas": saved repro log into 1762782518.repro.log
2025/11/10 13:48:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:48:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:48:56 base crash: INFO: task hung in corrupted
2025/11/10 13:49:08 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:49:09 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:49:26 runner 8 connected
2025/11/10 13:49:38 runner 6 connected
2025/11/10 13:49:45 runner 2 connected
2025/11/10 13:50:05 crash "WARNING in dbAdjTree" is already known
2025/11/10 13:50:05 base crash "WARNING in dbAdjTree" is to be ignored
2025/11/10 13:50:05 patched crashed: WARNING in dbAdjTree [need repro = false]
2025/11/10 13:50:29 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:50:53 runner 7 connected
2025/11/10 13:51:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:51:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:51:22 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:51:29 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:51:29 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:51:40 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:51:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 49, "corpus": 46116, "corpus [files]": 344, "corpus [symbols]": 46, "cover overflows": 52677, "coverage": 307388, "distributor delayed": 53583, "distributor undelayed": 53580, "distributor violated": 455, "exec candidate": 81717, "exec collide": 2922, "exec fuzz": 5432, "exec gen": 298, "exec hints": 4699, "exec inject": 0, "exec minimize": 2093, "exec retries": 24, "exec seeds": 357, "exec smash": 2739, "exec total [base]": 167772, "exec total [new]": 358231, "exec triage": 145341, "executor restarts [base]": 399, "executor restarts [new]": 1246, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 5, "max signal": 310942, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1247, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47070, "no exec duration": 417328000000, "no exec requests": 1415, "pending": 71, "prog exec time": 407, "reproducing": 4, "rpc recv": 15551737192, "rpc sent": 2597972472, "signal": 302188, "smash jobs": 1, "triage jobs": 4, "vm output": 60465000, "vm restarts [base]": 42, "vm restarts [new]": 172 }
2025/11/10 13:52:06 runner 7 connected
2025/11/10 13:52:08 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:52:18 runner 8 connected
2025/11/10 13:52:22 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:52:25 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:52:43 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:52:53 base crash: possible deadlock in ntfs_look_for_free_space
2025/11/10 13:52:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:52:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:52:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:52:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:53:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:53:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:53:29 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:53:41 runner 2 connected
2025/11/10 13:53:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:53:44 runner 8 connected
2025/11/10 13:53:46 runner 7 connected
2025/11/10 13:53:49 runner 6 connected
2025/11/10 13:53:54 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 13:54:14 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 13:54:14 patched-only: possible deadlock in unmap_vmas
2025/11/10 13:54:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:54:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:54:44 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 13:54:44 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 13:54:46 reproducing crash 'no output/lost connection': failed to symbolize
report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:54:52 base crash: lost connection to test machine 2025/11/10 13:54:54 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 13:54:54 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 13:55:03 runner 0 connected 2025/11/10 13:55:11 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:55:26 runner 8 connected 2025/11/10 13:55:32 runner 6 connected 2025/11/10 13:55:37 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:55:37 repro finished 'possible deadlock in unmap_vmas (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false 2025/11/10 13:55:37 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 6.98 minutes 2025/11/10 13:55:37 "possible deadlock in unmap_vmas": saved crash log into 1762782937.crash.log 2025/11/10 13:55:37 "possible deadlock in unmap_vmas": saved repro log into 1762782937.repro.log 2025/11/10 13:55:42 runner 1 connected 2025/11/10 13:55:43 runner 7 connected 2025/11/10 13:55:44 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 13:55:44 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 13:55:44 start reproducing 'possible deadlock in unmap_vmas' 2025/11/10 13:56:08 patched crashed: lost connection to test machine [need repro = false] 
2025/11/10 13:56:32 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:56:34 runner 8 connected 2025/11/10 13:56:39 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 13:56:39 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 13:56:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 61, "corpus": 46124, "corpus [files]": 345, "corpus [symbols]": 47, "cover overflows": 53315, "coverage": 307398, "distributor delayed": 53612, "distributor undelayed": 53608, "distributor violated": 455, "exec candidate": 81717, "exec collide": 3292, "exec fuzz": 6204, "exec gen": 343, "exec hints": 5323, "exec inject": 0, "exec minimize": 2320, "exec retries": 24, "exec seeds": 380, "exec smash": 2911, "exec total [base]": 170054, "exec total [new]": 360516, "exec triage": 145390, "executor restarts [base]": 410, "executor restarts [new]": 1277, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 6, "max signal": 310997, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1373, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47089, "no exec duration": 615505000000, "no exec requests": 1952, "pending": 78, "prog exec time": 509, "reproducing": 4, "rpc recv": 16022843876, "rpc sent": 2669204288, "signal": 302198, "smash jobs": 3, "triage jobs": 6, "vm output": 62321140, "vm restarts [base]": 45, "vm restarts [new]": 181 } 2025/11/10 13:56:57 runner 6 connected 2025/11/10 13:56:59 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to 
start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:57:27 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:57:28 runner 7 connected 2025/11/10 13:57:28 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 13:57:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 13:57:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 13:57:52 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:57:54 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 13:57:54 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 13:58:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 13:58:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 13:58:16 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:58:32 runner 6 connected 2025/11/10 13:58:42 runner 7 connected 2025/11/10 13:58:49 runner 8 connected 2025/11/10 13:59:10 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 13:59:18 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 13:59:25 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 13:59:38 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:00:04 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:00:15 runner 6 connected 2025/11/10 14:00:32 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:01:06 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/10 14:01:11 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:01:11 patched-only: possible deadlock in unmap_vmas 2025/11/10 14:01:23 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:01:33 base crash: WARNING in xfrm6_tunnel_net_exit 2025/11/10 14:01:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 67, "corpus": 46139, "corpus [files]": 345, "corpus [symbols]": 47, "cover overflows": 54094, "coverage": 307429, "distributor 
delayed": 53675, "distributor undelayed": 53665, "distributor violated": 455, "exec candidate": 81717, "exec collide": 3851, "exec fuzz": 7231, "exec gen": 396, "exec hints": 5634, "exec inject": 0, "exec minimize": 2861, "exec retries": 25, "exec seeds": 403, "exec smash": 3278, "exec total [base]": 172878, "exec total [new]": 363489, "exec triage": 145477, "executor restarts [base]": 446, "executor restarts [new]": 1342, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311051, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1781, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47128, "no exec duration": 702327000000, "no exec requests": 2186, "pending": 81, "prog exec time": 553, "reproducing": 4, "rpc recv": 16352626912, "rpc sent": 2736074448, "signal": 302223, "smash jobs": 1, "triage jobs": 10, "vm output": 64784560, "vm restarts [base]": 45, "vm restarts [new]": 187 } 2025/11/10 14:01:52 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:01:56 runner 7 connected 2025/11/10 14:02:01 runner 0 connected 2025/11/10 14:02:17 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:02:22 runner 1 connected 2025/11/10 14:02:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:02:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:03:07 reproducing crash 
'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:03:24 runner 7 connected 2025/11/10 14:03:34 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:04:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:04:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:04:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:04:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:04:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:04:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:04:21 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:04:48 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:04:50 runner 7 connected 2025/11/10 14:04:53 runner 8 connected 2025/11/10 14:05:06 runner 6 connected 2025/11/10 14:05:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:05:10 scheduled a reproduction of 'possible deadlock in 
hugetlb_change_protection' 2025/11/10 14:05:14 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:05:37 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:05:40 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:05:59 runner 7 connected 2025/11/10 14:06:26 runner 6 connected 2025/11/10 14:06:30 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:06:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 76, "corpus": 46154, "corpus [files]": 346, "corpus [symbols]": 47, "cover overflows": 54821, "coverage": 307445, "distributor delayed": 53719, "distributor undelayed": 53719, "distributor violated": 455, "exec candidate": 81717, "exec collide": 4410, "exec fuzz": 8350, "exec gen": 454, "exec hints": 5953, "exec inject": 0, "exec minimize": 3317, "exec retries": 25, "exec seeds": 447, "exec smash": 3656, "exec total [base]": 176059, "exec total [new]": 366523, "exec triage": 145570, "executor restarts [base]": 466, "executor restarts [new]": 1377, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 2, "max signal": 311110, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2068, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, 
"modules [base]": 1, "modules [new]": 1, "new inputs": 47158, "no exec duration": 973624000000, "no exec requests": 2839, "pending": 86, "prog exec time": 484, "reproducing": 4, "rpc recv": 16798887028, "rpc sent": 2811963648, "signal": 302239, "smash jobs": 2, "triage jobs": 7, "vm output": 67004035, "vm restarts [base]": 47, "vm restarts [new]": 194 } 2025/11/10 14:06:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:06:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:07:02 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:07:31 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:07:46 runner 6 connected 2025/11/10 14:07:58 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:08:26 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:08:46 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:08:46 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:09:05 base crash: WARNING in xfrm_state_fini 2025/11/10 14:09:36 patched crashed: possible deadlock 
in hugetlb_change_protection [need repro = true] 2025/11/10 14:09:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:09:36 runner 6 connected 2025/11/10 14:09:38 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:09:55 runner 0 connected 2025/11/10 14:10:05 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:10:25 runner 8 connected 2025/11/10 14:10:31 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:11:48 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:11:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 82, "corpus": 46172, "corpus [files]": 347, "corpus [symbols]": 47, "cover overflows": 55993, "coverage": 307469, "distributor delayed": 53772, "distributor undelayed": 53772, "distributor violated": 455, "exec candidate": 81717, "exec collide": 5264, "exec fuzz": 10030, "exec gen": 542, "exec hints": 6851, "exec inject": 0, "exec minimize": 3716, "exec retries": 27, "exec seeds": 501, "exec smash": 4065, "exec total [base]": 180518, "exec total [new]": 371011, "exec triage": 145672, "executor restarts [base]": 488, "executor 
restarts [new]": 1402, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 3, "max signal": 311198, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2352, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47194, "no exec duration": 1352795000000, "no exec requests": 3902, "pending": 89, "prog exec time": 653, "reproducing": 4, "rpc recv": 17139013072, "rpc sent": 2898151696, "signal": 302262, "smash jobs": 3, "triage jobs": 2, "vm output": 69453641, "vm restarts [base]": 48, "vm restarts [new]": 197 } 2025/11/10 14:12:20 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:12:54 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:13:28 base crash: lost connection to test machine 2025/11/10 14:13:28 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:13:29 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:13:44 runner 8 connected 2025/11/10 14:14:14 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:14:17 runner 2 connected 2025/11/10 14:14:19 runner 6 connected 2025/11/10 14:14:41 reproducing crash 'possible deadlock in unmap_vmas': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:14:41 repro finished 'possible deadlock in unmap_vmas', repro=true crepro=false desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false 2025/11/10 14:14:41 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 18.85 minutes 2025/11/10 14:14:41 "possible deadlock in unmap_vmas": saved crash log into 1762784081.crash.log 2025/11/10 14:14:41 "possible deadlock in unmap_vmas": saved repro log into 1762784081.repro.log 2025/11/10 14:15:03 runner 1 connected 2025/11/10 14:15:31 runner 0 connected 2025/11/10 14:15:42 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:15:46 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:15:46 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:15:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:15:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:15:58 base crash: lost connection to test machine 2025/11/10 14:16:01 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:16:02 base crash: lost connection to test machine 2025/11/10 14:16:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:16:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:16:31 runner 1 connected 2025/11/10 14:16:34 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:16:35 runner 8 connected 2025/11/10 14:16:46 runner 7 connected 2025/11/10 14:16:47 runner 2 connected 2025/11/10 14:16:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 94, 
"corpus": 46182, "corpus [files]": 348, "corpus [symbols]": 47, "cover overflows": 57140, "coverage": 307486, "distributor delayed": 53817, "distributor undelayed": 53815, "distributor violated": 455, "exec candidate": 81717, "exec collide": 6119, "exec fuzz": 11835, "exec gen": 619, "exec hints": 7201, "exec inject": 0, "exec minimize": 4077, "exec retries": 27, "exec seeds": 534, "exec smash": 4333, "exec total [base]": 183895, "exec total [new]": 374844, "exec triage": 145756, "executor restarts [base]": 503, "executor restarts [new]": 1444, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311263, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2604, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47224, "no exec duration": 1581477000000, "no exec requests": 4625, "pending": 92, "prog exec time": 523, "reproducing": 3, "rpc recv": 17520063136, "rpc sent": 2988924048, "signal": 302278, "smash jobs": 0, "triage jobs": 6, "vm output": 71899901, "vm restarts [base]": 50, "vm restarts [new]": 204 } 2025/11/10 14:16:51 runner 0 connected 2025/11/10 14:16:51 runner 1 connected 2025/11/10 14:16:56 runner 6 connected 2025/11/10 14:17:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:17:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:17:24 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:17:24 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:17:27 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:17:27 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:17:35 patched crashed: possible deadlock in 
hugetlb_change_protection [need repro = true] 2025/11/10 14:17:35 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:17:40 base crash: lost connection to test machine 2025/11/10 14:18:07 runner 1 connected 2025/11/10 14:18:13 runner 0 connected 2025/11/10 14:18:16 runner 8 connected 2025/11/10 14:18:24 runner 7 connected 2025/11/10 14:18:27 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:18:29 runner 1 connected 2025/11/10 14:18:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:18:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:19:01 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:19:01 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:19:41 runner 0 connected 2025/11/10 14:19:50 runner 7 connected 2025/11/10 14:20:12 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 14:20:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:20:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:20:20 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:20:20 patched-only: possible deadlock in unmap_vmas 2025/11/10 14:20:20 scheduled a reproduction of 'possible deadlock in unmap_vmas (full)' 2025/11/10 14:20:20 start reproducing 'possible deadlock in unmap_vmas (full)' 2025/11/10 14:20:26 base crash: unregister_netdevice: waiting for DEV to become free 2025/11/10 14:20:46 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:21:02 runner 6 connected 2025/11/10 
14:21:10 runner 0 connected 2025/11/10 14:21:15 runner 2 connected 2025/11/10 14:21:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 120, "corpus": 46208, "corpus [files]": 348, "corpus [symbols]": 47, "cover overflows": 58343, "coverage": 307549, "distributor delayed": 53918, "distributor undelayed": 53915, "distributor violated": 455, "exec candidate": 81717, "exec collide": 6846, "exec fuzz": 13292, "exec gen": 706, "exec hints": 7497, "exec inject": 0, "exec minimize": 4636, "exec retries": 27, "exec seeds": 612, "exec smash": 4840, "exec total [base]": 186369, "exec total [new]": 378696, "exec triage": 145891, "executor restarts [base]": 542, "executor restarts [new]": 1524, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 1, "max signal": 311369, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2990, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47279, "no exec duration": 1581477000000, "no exec requests": 4625, "pending": 99, "prog exec time": 418, "reproducing": 4, "rpc recv": 18152031060, "rpc sent": 3084474600, "signal": 302337, "smash jobs": 3, "triage jobs": 7, "vm output": 75916495, "vm restarts [base]": 54, "vm restarts [new]": 213 } 2025/11/10 14:21:55 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:21:55 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:22:06 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:22:06 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:22:44 runner 7 connected 2025/11/10 14:22:56 runner 8 connected 2025/11/10 14:22:58 base crash: WARNING in xfrm_state_fini 2025/11/10 14:23:02 patched crashed: WARNING in 
xfrm_state_fini [need repro = false] 2025/11/10 14:23:21 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:23:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:23:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:23:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:23:48 runner 1 connected 2025/11/10 14:23:52 runner 6 connected 2025/11/10 14:24:26 runner 7 connected 2025/11/10 14:24:26 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:24:28 base crash: WARNING in xfrm_state_fini 2025/11/10 14:24:39 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:24:53 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:24:53 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:24:53 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:25:04 patched crashed: possible deadlock 
in hugetlb_change_protection [need repro = true] 2025/11/10 14:25:04 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:25:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:25:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:25:17 runner 1 connected 2025/11/10 14:25:29 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:25:41 runner 8 connected 2025/11/10 14:25:43 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:25:53 runner 7 connected 2025/11/10 14:25:57 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:26:06 runner 6 connected 2025/11/10 14:26:18 base crash: lost connection to test machine 2025/11/10 14:26:45 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:26:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 123, "corpus": 46222, "corpus [files]": 348, "corpus [symbols]": 47, "cover overflows": 59253, "coverage": 307568, "distributor delayed": 53960, "distributor undelayed": 53960, "distributor violated": 458, "exec candidate": 81717, "exec collide": 7528, 
"exec fuzz": 14656, "exec gen": 789, "exec hints": 7642, "exec inject": 0, "exec minimize": 4887, "exec retries": 29, "exec seeds": 640, "exec smash": 5020, "exec total [base]": 189862, "exec total [new]": 381497, "exec triage": 145954, "executor restarts [base]": 599, "executor restarts [new]": 1566, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 6, "max signal": 311417, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3131, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47305, "no exec duration": 1582233000000, "no exec requests": 4627, "pending": 105, "prog exec time": 475, "reproducing": 4, "rpc recv": 18615607972, "rpc sent": 3179379104, "signal": 302354, "smash jobs": 6, "triage jobs": 6, "vm output": 78007938, "vm restarts [base]": 56, "vm restarts [new]": 220 } 2025/11/10 14:27:10 runner 0 connected 2025/11/10 14:27:17 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:27:49 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:27:49 repro finished 'possible deadlock in unmap_vmas (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false 2025/11/10 14:27:49 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 7.48 minutes 2025/11/10 14:27:49 "possible deadlock in unmap_vmas": saved crash log into 1762784869.crash.log 2025/11/10 14:27:49 "possible deadlock in unmap_vmas": 
saved repro log into 1762784869.repro.log 2025/11/10 14:28:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:28:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:28:34 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:28:34 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:28:38 runner 1 connected 2025/11/10 14:28:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:28:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:29:04 runner 6 connected 2025/11/10 14:29:25 runner 7 connected 2025/11/10 14:29:26 base crash: lost connection to test machine 2025/11/10 14:29:40 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:29:45 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:29:45 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:29:47 runner 8 connected 2025/11/10 14:30:14 runner 2 connected 2025/11/10 14:30:34 runner 7 connected 2025/11/10 14:30:55 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 14:30:56 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:30:56 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:31:32 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:31:44 runner 1 connected 2025/11/10 14:31:44 runner 7 connected 2025/11/10 14:31:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 156, "corpus": 46243, "corpus [files]": 348, "corpus [symbols]": 47, "cover overflows": 60705, "coverage": 307592, "distributor delayed": 54017, "distributor undelayed": 54005, 
"distributor violated": 458, "exec candidate": 81717, "exec collide": 8507, "exec fuzz": 16609, "exec gen": 889, "exec hints": 8286, "exec inject": 0, "exec minimize": 5147, "exec retries": 29, "exec seeds": 713, "exec smash": 5539, "exec total [base]": 193855, "exec total [new]": 386089, "exec triage": 146020, "executor restarts [base]": 630, "executor restarts [new]": 1615, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 1, "max signal": 311559, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3274, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47337, "no exec duration": 1584807000000, "no exec requests": 4638, "pending": 110, "prog exec time": 372, "reproducing": 3, "rpc recv": 19034623900, "rpc sent": 3525210928, "signal": 302379, "smash jobs": 2, "triage jobs": 12, "vm output": 80424836, "vm restarts [base]": 58, "vm restarts [new]": 227 } 2025/11/10 14:31:51 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/11/10 14:32:21 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:32:21 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:32:42 runner 1 connected 2025/11/10 14:33:05 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 14:33:11 runner 6 connected 2025/11/10 14:33:23 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 14:33:23 patched-only: possible deadlock in unmap_vmas 2025/11/10 14:33:46 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 14:33:49 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:33:55 runner 7 connected 2025/11/10 14:34:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 14:34:12 runner 0 connected 
2025/11/10 14:34:29 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 14:34:37 runner 2 connected 2025/11/10 14:34:39 runner 1 connected 2025/11/10 14:34:50 runner 0 connected 2025/11/10 14:34:51 runner 6 connected 2025/11/10 14:35:22 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:35:22 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:35:34 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:35:34 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:36:12 runner 6 connected 2025/11/10 14:36:22 runner 0 connected 2025/11/10 14:36:42 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:36:42 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:36:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 168, "corpus": 46263, "corpus [files]": 348, "corpus [symbols]": 47, "cover overflows": 62231, "coverage": 307701, "distributor delayed": 54080, "distributor undelayed": 54080, "distributor violated": 458, "exec candidate": 81717, "exec collide": 9704, "exec fuzz": 18841, "exec gen": 1015, "exec hints": 8857, "exec inject": 0, "exec minimize": 5680, "exec retries": 29, "exec seeds": 770, "exec smash": 5986, "exec total [base]": 197745, "exec total [new]": 391365, "exec triage": 146123, "executor restarts [base]": 667, "executor restarts [new]": 1674, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 2, "max signal": 311630, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3593, 
"minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47370, "no exec duration": 1594025000000, "no exec requests": 4650, "pending": 114, "prog exec time": 428, "reproducing": 3, "rpc recv": 19622389312, "rpc sent": 3716981328, "signal": 302488, "smash jobs": 2, "triage jobs": 4, "vm output": 82907848, "vm restarts [base]": 61, "vm restarts [new]": 234 } 2025/11/10 14:36:53 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:36:53 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:37:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:37:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:37:31 runner 6 connected 2025/11/10 14:37:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:37:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:37:43 runner 7 connected 2025/11/10 14:37:52 base crash: lost connection to test machine 2025/11/10 14:38:05 runner 8 connected 2025/11/10 14:38:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:38:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:38:25 runner 1 connected 2025/11/10 14:38:31 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 14:38:42 runner 0 connected 2025/11/10 14:38:42 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:38:42 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:38:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:38:52 scheduled a reproduction of 'possible deadlock in 
hugetlb_change_protection' 2025/11/10 14:38:54 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:38:54 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:39:07 runner 6 connected 2025/11/10 14:39:07 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:39:07 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:39:21 runner 1 connected 2025/11/10 14:39:26 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 14:39:31 runner 7 connected 2025/11/10 14:39:40 runner 8 connected 2025/11/10 14:39:44 runner 0 connected 2025/11/10 14:39:58 runner 1 connected 2025/11/10 14:40:15 runner 0 connected 2025/11/10 14:40:32 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:40:32 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:40:35 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:40:53 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:40:53 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:41:20 runner 0 connected 2025/11/10 14:41:25 runner 1 connected 2025/11/10 14:41:42 runner 6 connected 2025/11/10 14:41:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 171, "corpus": 46271, "corpus [files]": 348, "corpus [symbols]": 47, "cover overflows": 63623, "coverage": 307709, "distributor delayed": 54134, "distributor undelayed": 54134, "distributor violated": 458, "exec candidate": 81717, "exec collide": 10966, "exec fuzz": 21219, "exec gen": 1134, "exec hints": 9026, "exec inject": 0, "exec minimize": 5975, "exec retries": 29, "exec seeds": 794, "exec smash": 6196, "exec total [base]": 203357, "exec total [new]": 395897, "exec triage": 146194, "executor 
restarts [base]": 695, "executor restarts [new]": 1736, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 311660, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3779, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47395, "no exec duration": 1606880000000, "no exec requests": 4667, "pending": 124, "prog exec time": 439, "reproducing": 3, "rpc recv": 20323669552, "rpc sent": 3860228384, "signal": 302496, "smash jobs": 0, "triage jobs": 4, "vm output": 85431582, "vm restarts [base]": 64, "vm restarts [new]": 246 } 2025/11/10 14:42:03 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:42:03 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:42:52 runner 1 connected 2025/11/10 14:43:04 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:43:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:43:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:43:54 runner 0 connected 2025/11/10 14:44:07 runner 1 connected 2025/11/10 14:44:22 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 14:44:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:44:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:44:44 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:44:44 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:44:48 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:44:48 scheduled a reproduction of 'possible deadlock in 
hugetlb_change_protection' 2025/11/10 14:44:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:44:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:44:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:44:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:45:12 runner 2 connected 2025/11/10 14:45:27 runner 6 connected 2025/11/10 14:45:33 runner 1 connected 2025/11/10 14:45:38 runner 7 connected 2025/11/10 14:45:39 runner 8 connected 2025/11/10 14:45:49 runner 0 connected 2025/11/10 14:46:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 195, "corpus": 46290, "corpus [files]": 350, "corpus [symbols]": 47, "cover overflows": 65265, "coverage": 307733, "distributor delayed": 54208, "distributor undelayed": 54208, "distributor violated": 460, "exec candidate": 81717, "exec collide": 12466, "exec fuzz": 24067, "exec gen": 1297, "exec hints": 9164, "exec inject": 0, "exec minimize": 6384, "exec retries": 30, "exec seeds": 848, "exec smash": 6583, "exec total [base]": 209506, "exec total [new]": 401509, "exec triage": 146304, "executor restarts [base]": 720, "executor restarts [new]": 1805, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 2, "max signal": 311718, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3994, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47433, "no exec duration": 1614823000000, "no exec requests": 4677, "pending": 131, "prog exec time": 494, "reproducing": 3, "rpc recv": 20888306680, "rpc sent": 4016189888, "signal": 302520, "smash jobs": 3, "triage jobs": 3, "vm output": 89262742, "vm restarts [base]": 65, "vm restarts [new]": 
254 } 2025/11/10 14:47:24 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/10 14:47:25 base crash: lost connection to test machine 2025/11/10 14:47:39 base crash: WARNING in xfrm6_tunnel_net_exit 2025/11/10 14:48:12 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:48:12 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:48:13 runner 2 connected 2025/11/10 14:48:13 runner 6 connected 2025/11/10 14:48:17 base crash: lost connection to test machine 2025/11/10 14:48:30 runner 1 connected 2025/11/10 14:48:44 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/10 14:49:01 runner 1 connected 2025/11/10 14:49:03 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:49:03 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:49:07 runner 0 connected 2025/11/10 14:49:14 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:49:14 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:49:25 repro finished 'INFO: task hung in reg_check_chans_work', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/10 14:49:25 failed repro for "INFO: task hung in reg_check_chans_work", err=%!s() 2025/11/10 14:49:25 "INFO: task hung in reg_check_chans_work": saved crash log into 1762786165.crash.log 2025/11/10 14:49:25 "INFO: task hung in reg_check_chans_work": saved repro log into 1762786165.repro.log 2025/11/10 14:49:25 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:49:25 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:49:32 runner 8 connected 2025/11/10 14:49:49 patched crashed: possible deadlock in hugetlb_change_protection [need repro = 
true] 2025/11/10 14:49:49 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:49:52 runner 6 connected 2025/11/10 14:50:04 runner 0 connected 2025/11/10 14:50:14 runner 7 connected 2025/11/10 14:50:40 runner 1 connected 2025/11/10 14:51:07 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 14:51:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:51:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:51:36 base crash: general protection fault in pcl818_ai_cancel 2025/11/10 14:51:39 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:51:45 base crash: WARNING in xfrm_state_fini 2025/11/10 14:51:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 230, "corpus": 46307, "corpus [files]": 350, "corpus [symbols]": 47, "cover overflows": 66864, "coverage": 307763, "distributor delayed": 54265, "distributor undelayed": 54263, "distributor violated": 460, "exec candidate": 81717, "exec collide": 13739, "exec fuzz": 26546, "exec gen": 1411, "exec hints": 9399, "exec inject": 0, "exec minimize": 6811, "exec retries": 31, "exec seeds": 896, "exec smash": 6960, "exec total [base]": 213388, "exec total [new]": 406533, "exec triage": 146377, "executor restarts [base]": 760, "executor restarts [new]": 1879, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 1, "max signal": 311798, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4197, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47462, "no exec duration": 1618492000000, "no exec requests": 4688, "pending": 137, "prog exec time": 668, "reproducing": 2, "rpc recv": 21414832968, "rpc sent": 4153715080, "signal": 
302550, "smash jobs": 0, "triage jobs": 5, "vm output": 92918243, "vm restarts [base]": 68, "vm restarts [new]": 261 } 2025/11/10 14:51:54 runner 7 connected 2025/11/10 14:52:07 runner 8 connected 2025/11/10 14:52:23 runner 1 connected 2025/11/10 14:52:29 runner 1 connected 2025/11/10 14:52:34 runner 0 connected 2025/11/10 14:53:30 base crash: lost connection to test machine 2025/11/10 14:53:32 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:53:42 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:53:42 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:54:04 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/10 14:54:20 runner 1 connected 2025/11/10 14:54:20 runner 1 connected 2025/11/10 14:54:31 runner 6 connected 2025/11/10 14:54:34 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 14:54:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:54:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:54:46 runner 2 connected 2025/11/10 14:54:47 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:54:47 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:54:52 runner 7 connected 2025/11/10 14:55:24 runner 8 connected 2025/11/10 14:55:25 runner 0 connected 2025/11/10 14:55:33 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 14:55:37 runner 1 connected 2025/11/10 14:56:19 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/10 14:56:23 runner 2 connected 2025/11/10 14:56:35 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:56:35 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 
2025/11/10 14:56:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:56:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:56:49 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 241, "corpus": 46322, "corpus [files]": 350, "corpus [symbols]": 47, "cover overflows": 68717, "coverage": 307779, "distributor delayed": 54339, "distributor undelayed": 54338, "distributor violated": 460, "exec candidate": 81717, "exec collide": 15312, "exec fuzz": 29631, "exec gen": 1568, "exec hints": 9463, "exec inject": 0, "exec minimize": 7201, "exec retries": 31, "exec seeds": 932, "exec smash": 7271, "exec total [base]": 217418, "exec total [new]": 412255, "exec triage": 146477, "executor restarts [base]": 802, "executor restarts [new]": 1954, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 1, "max signal": 311892, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4429, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47499, "no exec duration": 1619741000000, "no exec requests": 4696, "pending": 142, "prog exec time": 494, "reproducing": 2, "rpc recv": 22056320256, "rpc sent": 4309762944, "signal": 302565, "smash jobs": 2, "triage jobs": 4, "vm output": 95661828, "vm restarts [base]": 71, "vm restarts [new]": 272 } 2025/11/10 14:56:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:56:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:57:08 runner 0 connected 2025/11/10 14:57:26 runner 6 connected 2025/11/10 14:57:31 runner 7 connected 2025/11/10 14:57:41 runner 8 connected 2025/11/10 14:58:01 patched crashed: possible deadlock in hugetlb_change_protection [need repro = 
true] 2025/11/10 14:58:01 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:58:27 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:58:27 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:58:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 14:58:33 base crash: KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb 2025/11/10 14:58:50 runner 1 connected 2025/11/10 14:58:56 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:58:56 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 14:59:15 runner 8 connected 2025/11/10 14:59:17 runner 0 connected 2025/11/10 14:59:21 runner 0 connected 2025/11/10 14:59:44 runner 7 connected 2025/11/10 14:59:55 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 14:59:55 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 15:00:03 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 15:00:19 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 15:00:19 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 15:00:44 runner 1 connected 2025/11/10 15:00:51 runner 6 connected 2025/11/10 15:01:09 runner 8 connected 2025/11/10 15:01:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 15:01:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 15:01:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 15:01:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 15:01:44 crash "kernel BUG in dbFindLeaf" is already known 2025/11/10 15:01:44 base crash "kernel BUG 
in dbFindLeaf" is to be ignored 2025/11/10 15:01:44 patched crashed: kernel BUG in dbFindLeaf [need repro = false] 2025/11/10 15:01:45 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 15:01:45 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 15:01:49 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 246, "corpus": 46331, "corpus [files]": 350, "corpus [symbols]": 47, "cover overflows": 70105, "coverage": 307791, "distributor delayed": 54382, "distributor undelayed": 54379, "distributor violated": 460, "exec candidate": 81717, "exec collide": 16578, "exec fuzz": 31986, "exec gen": 1681, "exec hints": 9631, "exec inject": 0, "exec minimize": 7590, "exec retries": 31, "exec seeds": 965, "exec smash": 7452, "exec total [base]": 222969, "exec total [new]": 416818, "exec triage": 146542, "executor restarts [base]": 841, "executor restarts [new]": 2056, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 3, "max signal": 311923, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4662, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47525, "no exec duration": 1623565000000, "no exec requests": 4705, "pending": 151, "prog exec time": 445, "reproducing": 2, "rpc recv": 22687908588, "rpc sent": 4465058824, "signal": 302577, "smash jobs": 4, "triage jobs": 7, "vm output": 98971649, "vm restarts [base]": 73, "vm restarts [new]": 282 } 2025/11/10 15:02:00 runner 0 connected 2025/11/10 15:02:25 runner 8 connected 2025/11/10 15:02:35 runner 6 connected 2025/11/10 15:02:35 runner 1 connected 2025/11/10 15:02:43 patched crashed: no output from test machine [need repro = false] 2025/11/10 15:03:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = 
true]
2025/11/10 15:03:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:03:40 runner 2 connected
2025/11/10 15:03:47 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:03:47 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:04:23 base crash: lost connection to test machine
2025/11/10 15:04:25 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 15:04:28 runner 0 connected
2025/11/10 15:04:33 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:04:33 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:04:35 runner 6 connected
2025/11/10 15:05:13 runner 1 connected
2025/11/10 15:05:14 runner 1 connected
2025/11/10 15:05:22 runner 8 connected
2025/11/10 15:05:24 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:05:24 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:05:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:05:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:05:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:05:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:06:02 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:06:02 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:06:13 runner 6 connected
2025/11/10 15:06:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:06:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:06:31 base crash: lost connection to test machine
2025/11/10 15:06:41 runner 8 connected
2025/11/10 15:06:48 runner 0 connected
2025/11/10 15:06:49 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 270, "corpus": 46346, "corpus [files]": 350, "corpus [symbols]": 47, "cover overflows": 71312, "coverage": 307812, "distributor delayed": 54450, "distributor undelayed": 54450, "distributor violated": 460, "exec candidate": 81717, "exec collide": 17545, "exec fuzz": 33686, "exec gen": 1749, "exec hints": 9948, "exec inject": 0, "exec minimize": 7905, "exec retries": 31, "exec seeds": 999, "exec smash": 7807, "exec total [base]": 228210, "exec total [new]": 420682, "exec triage": 146646, "executor restarts [base]": 866, "executor restarts [new]": 2113, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 1, "max signal": 312005, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4854, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47562, "no exec duration": 1626831000000, "no exec requests": 4714, "pending": 159, "prog exec time": 1193, "reproducing": 2, "rpc recv": 23306302244, "rpc sent": 4630827944, "signal": 302603, "smash jobs": 3, "triage jobs": 8, "vm output": 100926782, "vm restarts [base]": 74, "vm restarts [new]": 294 }
2025/11/10 15:06:51 runner 2 connected
2025/11/10 15:07:01 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 15:07:07 runner 1 connected
2025/11/10 15:07:21 runner 1 connected
2025/11/10 15:07:50 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 15:07:50 repro finished 'KASAN: slab-use-after-free Read in jfs_lazycommit', repro=true crepro=false desc='kernel BUG in jfs_evict_inode' hub=false from_dashboard=false
2025/11/10 15:07:50 found repro for "kernel BUG in jfs_evict_inode" (orig title: "KASAN: slab-use-after-free Read in jfs_lazycommit", reliability: 0), took 103.01 minutes
2025/11/10 15:07:50 kernel BUG in jfs_evict_inode: repro is too unreliable, skipping
2025/11/10 15:07:50 "kernel BUG in jfs_evict_inode": saved crash log into 1762787270.crash.log
2025/11/10 15:07:50 "kernel BUG in jfs_evict_inode": saved repro log into 1762787270.repro.log
2025/11/10 15:08:04 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:08:04 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:08:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:08:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:08:29 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:08:29 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:08:31 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 15:08:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:08:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:08:53 runner 0 connected
2025/11/10 15:09:05 runner 8 connected
2025/11/10 15:09:13 patched crashed: INFO: task hung in addrconf_verify_work [need repro = true]
2025/11/10 15:09:13 scheduled a reproduction of 'INFO: task hung in addrconf_verify_work'
2025/11/10 15:09:13 start reproducing 'INFO: task hung in addrconf_verify_work'
2025/11/10 15:09:18 runner 6 connected
2025/11/10 15:09:20 runner 1 connected
2025/11/10 15:09:27 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 15:09:27 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 15:09:32 runner 2 connected
2025/11/10 15:10:04 runner 7 connected
2025/11/10 15:10:16 runner 8 connected
2025/11/10 15:11:32 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 15:11:44 status reporting terminated
2025/11/10 15:11:44 bug reporting terminated
2025/11/10 15:11:44 base: rpc server terminated
2025/11/10 15:11:44 new: rpc server terminated
2025/11/10 15:11:58 base: pool terminated
2025/11/10 15:11:58 base: kernel context loop terminated
2025/11/10 15:12:12 repro finished 'INFO: task hung in addrconf_verify_work', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 15:12:54 repro finished 'possible deadlock in hugetlb_change_protection', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 15:12:54 repro loop terminated
2025/11/10 15:12:54 new: pool terminated
2025/11/10 15:12:54 new: kernel context loop terminated
2025/11/10 15:12:54 diff fuzzing terminated
2025/11/10 15:12:54 fuzzing is finished
2025/11/10 15:12:54 status at the end:
Title                                                    On-Base    On-Patched
possible deadlock in unmap_vmas                          26 crashes [reproduced]
BUG: sleeping function called from invalid context in hook_sb_delete  2 crashes  7 crashes
INFO: task hung in __iterate_supers                      2 crashes  1 crashes
INFO: task hung in addrconf_verify_work                  1 crashes
INFO: task hung in corrupted                             1 crashes  1 crashes
INFO: task hung in jfs_commit_inode                      1 crashes
INFO: task hung in reg_check_chans_work                  1 crashes
INFO: task hung in sync_bdevs                            1 crashes
KASAN: slab-use-after-free Read in __usb_hcd_giveback_urb  1 crashes
KASAN: slab-use-after-free Read in jfs_lazycommit        1 crashes
WARNING in dbAdjTree                                     1 crashes
WARNING in folio_memcg                                   8 crashes  22 crashes
WARNING in io_ring_exit_work                             2 crashes
WARNING in xfrm6_tunnel_net_exit                         3 crashes  5 crashes
WARNING in xfrm_state_fini                               8 crashes  11 crashes
general protection fault in pcl818_ai_cancel             1 crashes  3 crashes
kernel BUG in dbFindLeaf                                 1 crashes
kernel BUG in jfs_evict_inode                            [reproduced]
kernel BUG in txUnlock                                   2 crashes  4 crashes
lost connection to test machine                          18 crashes 22 crashes
no output from test machine                              1 crashes
possible deadlock in ext4_destroy_inline_data            2 crashes
possible deadlock in ext4_evict_inode                    1 crashes  1 crashes
possible deadlock in hugetlb_change_protection           165 crashes
possible deadlock in mark_as_free_ex                     1 crashes
possible deadlock in ntfs_look_for_free_space            1 crashes
possible deadlock in ocfs2_init_acl                      7 crashes  5 crashes
possible deadlock in ocfs2_reserve_suballoc_bits         1 crashes  1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree      4 crashes  5 crashes
possible deadlock in run_unpack_ex                       2 crashes  2 crashes
unregister_netdevice: waiting for DEV to become free     1 crashes
2025/11/10 15:12:54 possibly patched-only: possible deadlock in hugetlb_change_protection
2025/11/10 15:12:54 possibly patched-only: possible deadlock in unmap_vmas