2025/11/10 06:01:35 extracted 322917 text symbol hashes for base and 322917 for patched 2025/11/10 06:01:36 binaries are different, continuing fuzzing 2025/11/10 06:01:36 adding modified_functions to focus areas: ["__hugetlb_zap_begin"] 2025/11/10 06:01:36 adding directly modified files to focus areas: ["mm/hugetlb.c"] 2025/11/10 06:01:36 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db" 2025/11/10 06:02:34 runner 6 connected 2025/11/10 06:02:34 runner 5 connected 2025/11/10 06:02:34 runner 3 connected 2025/11/10 06:02:34 runner 2 connected 2025/11/10 06:02:34 runner 7 connected 2025/11/10 06:02:34 runner 0 connected 2025/11/10 06:02:34 runner 0 connected 2025/11/10 06:02:35 runner 4 connected 2025/11/10 06:02:35 runner 1 connected 2025/11/10 06:02:35 runner 8 connected 2025/11/10 06:02:35 runner 1 connected 2025/11/10 06:02:35 runner 2 connected 2025/11/10 06:02:41 initializing coverage information... 2025/11/10 06:02:41 executor cover filter: 0 PCs 2025/11/10 06:02:45 discovered 7611 source files, 333869 symbols 2025/11/10 06:02:45 coverage filter: __hugetlb_zap_begin: [__hugetlb_zap_begin] 2025/11/10 06:02:45 coverage filter: mm/hugetlb.c: [mm/hugetlb.c mm/hugetlb_cgroup.c mm/hugetlb_cma.c] 2025/11/10 06:02:45 area "symbols": 26 PCs in the cover filter 2025/11/10 06:02:45 area "files": 4211 PCs in the cover filter 2025/11/10 06:02:45 area "": 0 PCs in the cover filter 2025/11/10 06:02:45 executor cover filter: 0 PCs 2025/11/10 06:02:46 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not 
enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or 
directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or 
directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined 
: failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : 
socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled 
write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 
[openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET : fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056 2025/11/10 06:02:46 base: machine check complete 2025/11/10 06:02:48 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open 
/sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory 
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 :
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : 
fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open 
perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
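Each entry in this listing follows the pattern `syscall : resource [creator syscalls]`: the call on the left consumes the resource in the middle, which must first be produced by one of the calls in brackets (a trailing `...` marks a creator list truncated in the log). A minimal sketch of parsing such entries, assuming exactly this layout (the helper name `parse_deps` is hypothetical, not part of syzkaller):

```python
import re

# One entry: "name : resource [creator creator ...]".
# A bare "..." inside the brackets means the log truncated the creator list.
ENTRY = re.compile(r"(\S+)\s*:\s*(\S+)\s*\[([^\]]*)\]")

def parse_deps(text):
    """Return {syscall: (resource, [creator syscalls])} from a dependency listing."""
    deps = {}
    for name, resource, creators in ENTRY.findall(text):
        deps[name] = (resource, [c for c in creators.split() if c != "..."])
    return deps

sample = ("ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] "
          "read$trusty : fd_trusty [openat$trusty openat$trusty_avb ...]")
deps = parse_deps(sample)
print(deps["ioctl$KVM_RUN"])  # ('fd_kvmcpu', ['ioctl$KVM_CREATE_VCPU', 'syz_kvm_add_vcpu$x86'])
```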
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/11/10 06:02:48 new: machine check complete
2025/11/10 06:02:49 new: adding 81853 seeds
2025/11/10 06:05:02 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:05:02 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:05:13 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:05:13 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:05:59 runner 6 connected
2025/11/10 06:06:11 runner 0 connected
2025/11/10 06:06:38 STAT { "buffer too small": 0, "candidate triage jobs": 56, "candidates": 76972, "comps overflows": 0, "corpus": 4784, "corpus [files]": 66, "corpus [symbols]": 3, "cover overflows": 3032, "coverage": 165621, "distributor delayed": 4632, "distributor undelayed": 4630, "distributor violated": 0, "exec candidate": 4881, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7853, "exec total [new]": 21373, "exec triage": 15192, "executor restarts [base]": 55, "executor restarts [new]": 111, "fault jobs": 0, "fuzzer jobs": 56, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 167527, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 4881, "no exec duration": 41905000000, "no exec requests": 338, "pending": 2, "prog exec time": 150, "reproducing": 0, "rpc recv": 1183272996, "rpc sent": 112170360, "signal": 162986, "smash jobs": 0, "triage jobs": 0, "vm output": 2797050, "vm restarts [base]": 3, "vm restarts [new]": 11 }
2025/11/10 06:06:48 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 06:06:50 base crash: possible deadlock in ocfs2_acquire_dquot
2025/11/10 06:06:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:06:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:07:07 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:07:07 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:07:08 crash "KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings" is already known
2025/11/10 06:07:08 base crash "KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings" is to be ignored
2025/11/10 06:07:08 patched crashed: KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings [need repro = false]
2025/11/10 06:07:09 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 06:07:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:07:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:07:23 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 06:07:23 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 06:07:29 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:07:29 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:07:34 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 06:07:34 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 06:07:37 runner 7 connected
2025/11/10 06:07:38 runner 1 connected
2025/11/10 06:07:45 runner 3 connected
2025/11/10 06:07:58 runner 4 connected
2025/11/10 06:07:58 runner 6 connected
2025/11/10 06:07:59 runner 5 connected
2025/11/10 06:08:07 runner 8 connected
2025/11/10 06:08:12 runner 0 connected
2025/11/10 06:08:18 runner 1 connected
2025/11/10 06:08:23 runner 2 connected
2025/11/10 06:10:16 crash "kernel BUG in jfs_evict_inode" is already known
2025/11/10 06:10:16 base crash "kernel BUG in jfs_evict_inode" is to be ignored
2025/11/10 06:10:16 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/11/10 06:11:07 runner 7 connected
2025/11/10 06:11:38 STAT { "buffer too small": 0, "candidate triage jobs": 59, "candidates": 72142, "comps overflows": 0, "corpus": 9554, "corpus [files]": 99, "corpus [symbols]": 5, "cover overflows": 5995, "coverage": 202045, "distributor delayed": 9736, "distributor undelayed": 9736, "distributor violated": 0, "exec candidate": 9711, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 18232, "exec total [new]": 42195, "exec triage": 29965, "executor restarts [base]": 68, "executor restarts [new]": 201, "fault jobs": 0, "fuzzer jobs": 59, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 203764, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 9710, "no exec duration": 41920000000, "no exec requests": 339, "pending": 8, "prog exec time": 332, "reproducing": 0, "rpc recv": 2303914060, "rpc sent": 235391688, "signal": 199097, "smash jobs": 0, "triage jobs": 0, "vm output": 6913773, "vm restarts [base]": 4, "vm restarts [new]": 21 }
2025/11/10 06:13:20 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:13:20 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:13:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:13:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:13:38 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 06:13:38 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 06:13:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:13:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:13:48 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 06:13:48 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 06:13:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:13:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:14:03 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:14:03 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:14:10 runner 1 connected
2025/11/10 06:14:19 runner 5 connected
2025/11/10 06:14:26 runner 6 connected
2025/11/10 06:14:29 runner 7 connected
2025/11/10 06:14:38 runner 3 connected
2025/11/10 06:14:40 runner 8 connected
2025/11/10 06:14:46 base crash: WARNING in xfrm_state_fini
2025/11/10 06:14:52 runner 4 connected
2025/11/10 06:15:35 runner 0 connected
2025/11/10 06:15:53 crash "WARNING in folio_memcg" is already known
2025/11/10 06:15:53 base crash "WARNING in folio_memcg" is to be ignored
2025/11/10 06:15:53 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:16:03 crash "WARNING in folio_memcg" is already known
2025/11/10 06:16:03 base crash "WARNING in folio_memcg" is to be ignored
2025/11/10 06:16:03 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:16:13 crash "WARNING in folio_memcg" is already known
2025/11/10 06:16:13 base crash "WARNING in folio_memcg" is to be ignored
2025/11/10 06:16:13 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:16:38 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 67100, "comps overflows": 0, "corpus": 14563, "corpus [files]": 139, "corpus [symbols]": 10, "cover overflows": 8988, "coverage": 226785, "distributor delayed": 15465, "distributor undelayed": 15465, "distributor violated": 136, "exec candidate": 14753, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 31136, "exec total [new]": 64988, "exec triage": 45468, "executor restarts [base]": 76, "executor restarts [new]": 265, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 228661, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14752, "no exec duration": 45409000000, "no exec requests": 350, "pending": 15, "prog exec time": 133, "reproducing": 0, "rpc recv": 3349636092, "rpc sent": 376682632, "signal": 223248, "smash jobs": 0, "triage jobs": 0, "vm output": 10402446, "vm restarts [base]": 5, "vm restarts [new]": 28 }
2025/11/10 06:16:41 runner 5 connected
2025/11/10 06:16:59 runner 6 connected
2025/11/10 06:17:02 base crash: WARNING in folio_memcg
2025/11/10 06:17:03 runner 4 connected
2025/11/10 06:17:13 base crash: WARNING in folio_memcg
2025/11/10 06:17:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:17:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
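The periodic `STAT` entries above carry a JSON object with the fuzzer's counters (corpus size, coverage, exec totals, VM restarts), so progress between snapshots can be extracted mechanically. A minimal sketch, assuming each STAT record sits on one line as `<timestamp> STAT { ... }` (the helper name `stat_series` is hypothetical):

```python
import json
import re

# "2025/11/10 06:06:38 STAT { ... }" -> (timestamp, JSON blob)
STAT = re.compile(r"^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) STAT (\{.*\})$")

def stat_series(lines, key):
    """Yield (timestamp, value) for one counter across STAT log lines."""
    for line in lines:
        m = STAT.match(line)
        if m:
            yield m.group(1), json.loads(m.group(2))[key]

log = ['2025/11/10 06:06:38 STAT { "corpus": 4784, "coverage": 165621 }',
       '2025/11/10 06:11:38 STAT { "corpus": 9554, "coverage": 202045 }']
print(list(stat_series(log, "coverage")))
```

The same iterator works for any counter name that appears in the JSON, e.g. `"vm restarts [new]"` to spot excessive VM churn.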
2025/11/10 06:17:28 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:17:28 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:17:49 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:17:49 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:17:52 runner 1 connected
2025/11/10 06:18:00 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:18:00 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:18:01 runner 2 connected
2025/11/10 06:18:07 runner 1 connected
2025/11/10 06:18:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:18:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:18:16 runner 3 connected
2025/11/10 06:18:20 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:18:20 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:18:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:18:30 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:18:40 runner 4 connected
2025/11/10 06:18:50 runner 5 connected
2025/11/10 06:18:59 runner 6 connected
2025/11/10 06:19:02 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:19:02 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:19:08 runner 2 connected
2025/11/10 06:19:09 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 06:19:09 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 06:19:09 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:09 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 06:19:09 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 06:19:09 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:12 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 06:19:12 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 06:19:19 runner 8 connected
2025/11/10 06:19:21 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 06:19:21 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 06:19:21 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:22 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/11/10 06:19:22 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/11/10 06:19:22 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:25 base crash: BUG: sleeping function called from invalid context in hook_sb_delete
2025/11/10 06:19:34 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:45 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:50 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 06:19:51 runner 0 connected
2025/11/10 06:19:57 runner 7 connected
2025/11/10 06:19:57 runner 1 connected
2025/11/10 06:20:01 runner 4 connected
2025/11/10 06:20:11 runner 3 connected
2025/11/10 06:20:12 runner 5 connected
2025/11/10 06:20:16 runner 2 connected
2025/11/10 06:20:18 base crash: BUG: sleeping function called from invalid context in hook_sb_delete
2025/11/10 06:20:24 runner 6 connected
2025/11/10 06:20:35 runner 2 connected
2025/11/10 06:20:41 runner 8 connected
2025/11/10 06:21:08 runner 1 connected
2025/11/10 06:21:12 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:21:23 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:21:38 STAT { "buffer too small": 0, "candidate triage jobs": 43, "candidates": 63469, "comps overflows": 0, "corpus": 18163, "corpus [files]": 159, "corpus [symbols]": 11, "cover overflows": 10986, "coverage": 240658, "distributor delayed": 20776, "distributor undelayed": 20774, "distributor violated": 138, "exec candidate": 18384, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 38537, "exec total [new]": 81665, "exec triage": 56500, "executor restarts [base]": 100, "executor restarts [new]": 338, "fault jobs": 0, "fuzzer jobs": 43, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 242279, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 18383, "no exec duration": 46462000000, "no exec requests": 353, "pending": 24, "prog exec time": 221, "reproducing": 0, "rpc recv": 4621507628, "rpc sent": 512087512, "signal": 237109, "smash jobs": 0, "triage jobs": 0, "vm output": 13147432, "vm restarts [base]": 9, "vm restarts [new]": 47 }
2025/11/10 06:22:01 runner 3 connected
2025/11/10 06:22:13 runner 0 connected
2025/11/10 06:23:20 crash "kernel BUG in jfs_evict_inode"
is already known 2025/11/10 06:23:20 base crash "kernel BUG in jfs_evict_inode" is to be ignored 2025/11/10 06:23:20 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/10 06:23:24 crash "possible deadlock in mark_as_free_ex" is already known 2025/11/10 06:23:24 base crash "possible deadlock in mark_as_free_ex" is to be ignored 2025/11/10 06:23:24 patched crashed: possible deadlock in mark_as_free_ex [need repro = false] 2025/11/10 06:24:09 runner 6 connected 2025/11/10 06:24:21 runner 1 connected 2025/11/10 06:24:40 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 06:24:40 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 06:24:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:24:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:25:30 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:25:37 runner 6 connected 2025/11/10 06:25:41 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:25:41 runner 5 connected 2025/11/10 06:25:46 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 06:25:47 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 06:25:58 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 06:26:09 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 06:26:12 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:26:19 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 06:26:19 runner 0 connected 2025/11/10 06:26:29 runner 1 connected 2025/11/10 06:26:35 runner 3 connected 2025/11/10 06:26:38 STAT { "buffer too small": 0, "candidate triage jobs": 67, "candidates": 58900, "comps 
overflows": 0, "corpus": 22641, "corpus [files]": 184, "corpus [symbols]": 12, "cover overflows": 14122, "coverage": 254479, "distributor delayed": 25854, "distributor undelayed": 25837, "distributor violated": 143, "exec candidate": 22953, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 52092, "exec total [new]": 105282, "exec triage": 70661, "executor restarts [base]": 106, "executor restarts [new]": 378, "fault jobs": 0, "fuzzer jobs": 67, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 256416, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 22952, "no exec duration": 46911000000, "no exec requests": 358, "pending": 26, "prog exec time": 40, "reproducing": 0, "rpc recv": 5610386052, "rpc sent": 663982944, "signal": 250527, "smash jobs": 0, "triage jobs": 0, "vm output": 15935610, "vm restarts [base]": 9, "vm restarts [new]": 56 } 2025/11/10 06:26:38 runner 8 connected 2025/11/10 06:26:47 runner 7 connected 2025/11/10 06:26:59 runner 2 connected 2025/11/10 06:27:02 runner 5 connected 2025/11/10 06:27:09 runner 6 connected 2025/11/10 06:27:34 base crash: WARNING in folio_memcg 2025/11/10 06:27:46 base crash: WARNING in folio_memcg 2025/11/10 06:28:18 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/11/10 06:28:18 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/11/10 06:28:18 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/10 06:28:31 runner 2 connected 2025/11/10 06:28:40 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/11/10 06:28:40 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/11/10 06:28:40 patched crashed: 
WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/10 06:28:42 runner 0 connected 2025/11/10 06:28:43 base crash: WARNING in xfrm6_tunnel_net_exit 2025/11/10 06:29:10 runner 7 connected 2025/11/10 06:29:26 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/10 06:29:30 runner 6 connected 2025/11/10 06:29:34 runner 1 connected 2025/11/10 06:29:41 patched crashed: no output from test machine [need repro = false] 2025/11/10 06:30:16 runner 0 connected 2025/11/10 06:30:17 base crash: lost connection to test machine 2025/11/10 06:30:31 runner 4 connected 2025/11/10 06:30:58 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:30:58 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:31:07 runner 0 connected 2025/11/10 06:31:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:31:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:31:38 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 54345, "comps overflows": 0, "corpus": 27172, "corpus [files]": 208, "corpus [symbols]": 14, "cover overflows": 17186, "coverage": 266081, "distributor delayed": 30843, "distributor undelayed": 30843, "distributor violated": 143, "exec candidate": 27508, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 58734, "exec total [new]": 128287, "exec triage": 84521, "executor restarts [base]": 133, "executor restarts [new]": 461, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 268100, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, 
"modules [base]": 1, "modules [new]": 1, "new inputs": 27507, "no exec duration": 46927000000, "no exec requests": 359, "pending": 28, "prog exec time": 255, "reproducing": 0, "rpc recv": 6624647704, "rpc sent": 795403768, "signal": 262094, "smash jobs": 0, "triage jobs": 0, "vm output": 19367400, "vm restarts [base]": 13, "vm restarts [new]": 65 } 2025/11/10 06:31:49 runner 8 connected 2025/11/10 06:31:58 runner 6 connected 2025/11/10 06:32:10 base crash: WARNING in xfrm_state_fini 2025/11/10 06:32:31 crash "possible deadlock in ext4_evict_inode" is already known 2025/11/10 06:32:31 base crash "possible deadlock in ext4_evict_inode" is to be ignored 2025/11/10 06:32:31 patched crashed: possible deadlock in ext4_evict_inode [need repro = false] 2025/11/10 06:32:59 crash "possible deadlock in ext4_destroy_inline_data" is already known 2025/11/10 06:32:59 base crash "possible deadlock in ext4_destroy_inline_data" is to be ignored 2025/11/10 06:32:59 patched crashed: possible deadlock in ext4_destroy_inline_data [need repro = false] 2025/11/10 06:33:06 runner 1 connected 2025/11/10 06:33:20 runner 3 connected 2025/11/10 06:33:22 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:33:22 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:33:32 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:33:32 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:33:50 runner 4 connected 2025/11/10 06:34:05 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 06:34:05 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 06:34:11 runner 5 connected 2025/11/10 06:34:15 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 06:34:15 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 06:34:21 runner 6 connected 2025/11/10 
06:34:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:34:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:34:55 runner 3 connected 2025/11/10 06:35:01 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:35:01 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:35:04 runner 4 connected 2025/11/10 06:35:42 runner 7 connected 2025/11/10 06:35:45 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 06:35:50 runner 2 connected 2025/11/10 06:36:36 runner 1 connected 2025/11/10 06:36:38 STAT { "buffer too small": 0, "candidate triage jobs": 40, "candidates": 49576, "comps overflows": 0, "corpus": 31888, "corpus [files]": 233, "corpus [symbols]": 16, "cover overflows": 20275, "coverage": 277167, "distributor delayed": 36119, "distributor undelayed": 36118, "distributor violated": 145, "exec candidate": 32277, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 69181, "exec total [new]": 153767, "exec triage": 99113, "executor restarts [base]": 149, "executor restarts [new]": 527, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 279130, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32276, "no exec duration": 46993000000, "no exec requests": 363, "pending": 34, "prog exec time": 200, "reproducing": 0, "rpc recv": 7679769040, "rpc sent": 946963336, "signal": 272826, "smash jobs": 0, "triage jobs": 0, "vm output": 23211815, "vm restarts [base]": 14, "vm restarts [new]": 76 } 2025/11/10 
06:37:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:37:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:37:48 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:37:48 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:37:58 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:37:58 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:38:04 crash "possible deadlock in ext4_destroy_inline_data" is already known 2025/11/10 06:38:04 base crash "possible deadlock in ext4_destroy_inline_data" is to be ignored 2025/11/10 06:38:04 patched crashed: possible deadlock in ext4_destroy_inline_data [need repro = false] 2025/11/10 06:38:09 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:38:09 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:38:19 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:38:19 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:38:26 runner 6 connected 2025/11/10 06:38:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:38:30 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:38:38 runner 4 connected 2025/11/10 06:38:39 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:38:39 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:38:47 runner 8 connected 2025/11/10 06:38:49 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:38:49 scheduled a reproduction of 'possible deadlock in 
move_hugetlb_page_tables' 2025/11/10 06:38:54 runner 0 connected 2025/11/10 06:38:59 runner 3 connected 2025/11/10 06:39:00 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:39:00 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:39:10 runner 5 connected 2025/11/10 06:39:10 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:39:10 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:39:21 runner 2 connected 2025/11/10 06:39:21 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:39:21 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:39:27 runner 7 connected 2025/11/10 06:39:39 runner 1 connected 2025/11/10 06:39:50 runner 6 connected 2025/11/10 06:40:00 runner 4 connected 2025/11/10 06:40:10 runner 8 connected 2025/11/10 06:41:06 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 06:41:27 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 06:41:38 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 45696, "comps overflows": 0, "corpus": 35713, "corpus [files]": 250, "corpus [symbols]": 17, "cover overflows": 22753, "coverage": 285417, "distributor delayed": 40766, "distributor undelayed": 40765, "distributor violated": 168, "exec candidate": 36157, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 84725, "exec total [new]": 175199, "exec triage": 110897, "executor restarts [base]": 159, "executor restarts [new]": 616, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 287488, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, 
"minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36155, "no exec duration": 47173000000, "no exec requests": 370, "pending": 45, "prog exec time": 273, "reproducing": 0, "rpc recv": 8792577576, "rpc sent": 1129220672, "signal": 280999, "smash jobs": 0, "triage jobs": 0, "vm output": 26551565, "vm restarts [base]": 14, "vm restarts [new]": 88 } 2025/11/10 06:41:55 runner 1 connected 2025/11/10 06:42:16 runner 5 connected 2025/11/10 06:43:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:43:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:43:25 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:43:25 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:44:02 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:44:03 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:44:04 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:44:04 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/10 06:44:05 runner 6 connected 2025/11/10 06:44:17 runner 0 connected 2025/11/10 06:44:35 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/10 06:44:35 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/10 06:44:35 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/10 06:44:46 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/10 06:44:46 base crash "general protection fault in pcl818_ai_cancel" is to 
be ignored 2025/11/10 06:44:46 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/10 06:44:52 runner 4 connected 2025/11/10 06:44:52 runner 5 connected 2025/11/10 06:44:53 runner 3 connected 2025/11/10 06:44:53 runner 2 connected 2025/11/10 06:44:58 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/10 06:44:58 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/10 06:44:58 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/10 06:45:24 runner 6 connected 2025/11/10 06:45:25 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/10 06:45:25 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/10 06:45:25 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/10 06:45:36 runner 8 connected 2025/11/10 06:45:47 runner 1 connected 2025/11/10 06:46:14 runner 7 connected 2025/11/10 06:46:38 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 42067, "comps overflows": 0, "corpus": 39305, "corpus [files]": 265, "corpus [symbols]": 18, "cover overflows": 24848, "coverage": 292697, "distributor delayed": 45227, "distributor undelayed": 45226, "distributor violated": 229, "exec candidate": 39786, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100295, "exec total [new]": 195676, "exec triage": 121903, "executor restarts [base]": 164, "executor restarts [new]": 701, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 294829, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules 
[new]": 1, "new inputs": 39784, "no exec duration": 47848000000, "no exec requests": 375, "pending": 47, "prog exec time": 477, "reproducing": 0, "rpc recv": 9854737096, "rpc sent": 1301544080, "signal": 288224, "smash jobs": 0, "triage jobs": 0, "vm output": 30148867, "vm restarts [base]": 14, "vm restarts [new]": 100 } 2025/11/10 06:46:55 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:46:55 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:47:06 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:47:06 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:47:15 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:47:15 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:47:23 crash "kernel BUG in txUnlock" is already known 2025/11/10 06:47:23 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/10 06:47:23 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/10 06:47:24 crash "kernel BUG in txUnlock" is already known 2025/11/10 06:47:24 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/10 06:47:24 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/10 06:47:24 crash "kernel BUG in txUnlock" is already known 2025/11/10 06:47:24 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/10 06:47:24 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/10 06:47:26 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:47:26 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:47:26 crash "kernel BUG in txUnlock" is already known 2025/11/10 06:47:26 base crash "kernel BUG in txUnlock" is to be ignored 2025/11/10 06:47:26 patched crashed: kernel BUG in txUnlock [need repro = 
false] 2025/11/10 06:47:45 runner 5 connected 2025/11/10 06:47:47 base crash: kernel BUG in txUnlock 2025/11/10 06:47:55 runner 3 connected 2025/11/10 06:48:04 runner 1 connected 2025/11/10 06:48:12 runner 6 connected 2025/11/10 06:48:13 runner 8 connected 2025/11/10 06:48:14 runner 2 connected 2025/11/10 06:48:15 runner 7 connected 2025/11/10 06:48:16 runner 0 connected 2025/11/10 06:48:36 runner 2 connected 2025/11/10 06:48:52 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 06:48:52 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 06:49:03 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:49:03 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:49:03 base crash: WARNING in xfrm_state_fini 2025/11/10 06:49:41 runner 8 connected 2025/11/10 06:49:51 runner 4 connected 2025/11/10 06:49:53 runner 0 connected 2025/11/10 06:49:53 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 06:50:43 runner 1 connected 2025/11/10 06:51:13 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:13 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:51:24 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:24 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:51:27 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:27 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:51:35 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:35 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:51:38 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 38747, 
"comps overflows": 0, "corpus": 42584, "corpus [files]": 278, "corpus [symbols]": 19, "cover overflows": 27134, "coverage": 299103, "distributor delayed": 48975, "distributor undelayed": 48962, "distributor violated": 229, "exec candidate": 43106, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 112122, "exec total [new]": 216626, "exec triage": 132034, "executor restarts [base]": 179, "executor restarts [new]": 785, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 301232, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 43104, "no exec duration": 47988000000, "no exec requests": 377, "pending": 57, "prog exec time": 288, "reproducing": 0, "rpc recv": 10842191100, "rpc sent": 1462295704, "signal": 294561, "smash jobs": 0, "triage jobs": 0, "vm output": 33580123, "vm restarts [base]": 16, "vm restarts [new]": 111 } 2025/11/10 06:51:39 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:39 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:51:46 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:46 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:51:49 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:51:49 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:52:01 runner 3 connected 2025/11/10 06:52:02 crash "WARNING in rate_control_rate_init" is already known 2025/11/10 06:52:02 base crash "WARNING in 
rate_control_rate_init" is to be ignored 2025/11/10 06:52:02 patched crashed: WARNING in rate_control_rate_init [need repro = false] 2025/11/10 06:52:15 runner 8 connected 2025/11/10 06:52:17 runner 4 connected 2025/11/10 06:52:22 runner 1 connected 2025/11/10 06:52:28 runner 6 connected 2025/11/10 06:52:37 runner 0 connected 2025/11/10 06:52:39 runner 5 connected 2025/11/10 06:52:49 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:52:49 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:52:51 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 06:52:51 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 06:52:53 runner 2 connected 2025/11/10 06:53:01 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 06:53:01 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 06:53:02 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 06:53:02 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 06:53:09 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 06:53:20 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 06:53:26 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:53:26 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:53:38 runner 8 connected 2025/11/10 06:53:40 runner 6 connected 2025/11/10 06:53:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 06:53:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 06:53:51 runner 4 connected 2025/11/10 06:53:52 runner 5 connected 2025/11/10 06:53:59 runner 0 connected 2025/11/10 06:54:00 patched crashed: possible deadlock in unmap_vmas [need repro = true] 
2025/11/10 06:54:00 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 06:54:11 runner 2 connected
2025/11/10 06:54:15 runner 7 connected
2025/11/10 06:54:18 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 06:54:18 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 06:54:31 runner 3 connected
2025/11/10 06:54:35 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:54:49 runner 1 connected
2025/11/10 06:54:53 base crash: lost connection to test machine
2025/11/10 06:55:07 runner 4 connected
2025/11/10 06:55:25 runner 5 connected
2025/11/10 06:55:43 runner 2 connected
2025/11/10 06:56:06 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:56:06 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:56:26 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 06:56:38 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 37489, "comps overflows": 0, "corpus": 43791, "corpus [files]": 283, "corpus [symbols]": 19, "cover overflows": 29014, "coverage": 301542, "distributor delayed": 50700, "distributor undelayed": 50700, "distributor violated": 231, "exec candidate": 44364, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 123393, "exec total [new]": 230715, "exec triage": 135916, "executor restarts [base]": 192, "executor restarts [new]": 878, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 303971, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44362, "no exec duration": 48037000000, "no exec requests": 379, "pending": 69, "prog exec time": 300, "reproducing": 0, "rpc recv": 11861498196, "rpc sent": 1623824584, "signal": 296978, "smash jobs": 0, "triage jobs": 0, "vm output": 36627989, "vm restarts [base]": 17, "vm restarts [new]": 130 }
2025/11/10 06:56:57 runner 4 connected
2025/11/10 06:57:16 runner 1 connected
2025/11/10 06:57:42 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:57:42 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:57:44 base crash: WARNING in xfrm_state_fini
2025/11/10 06:58:17 crash "INFO: task hung in __iterate_supers" is already known
2025/11/10 06:58:17 base crash "INFO: task hung in __iterate_supers" is to be ignored
2025/11/10 06:58:17 patched crashed: INFO: task hung in __iterate_supers [need repro = false]
2025/11/10 06:58:22 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 06:58:22 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 06:58:33 runner 1 connected
2025/11/10 06:58:38 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:58:38 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:58:39 runner 2 connected
2025/11/10 06:58:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 06:58:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 06:59:06 runner 6 connected
2025/11/10 06:59:10 runner 0 connected
2025/11/10 06:59:10 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 06:59:10 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 06:59:27 runner 4 connected
2025/11/10 06:59:49 runner 3 connected
2025/11/10 07:00:01 runner 2 connected
2025/11/10 07:00:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:00:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:00:46 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:00:46 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:00:56 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:00:56 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:01:05 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 07:01:30 runner 5 connected
2025/11/10 07:01:38 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 36240, "comps overflows": 0, "corpus": 44993, "corpus [files]": 289, "corpus [symbols]": 19, "cover overflows": 32359, "coverage": 304126, "distributor delayed": 52084, "distributor undelayed": 52084, "distributor violated": 239, "exec candidate": 45613, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 130968, "exec total [new]": 250603, "exec triage": 139878, "executor restarts [base]": 208, "executor restarts [new]": 935, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 306659, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45611, "no exec duration": 48218000000, "no exec requests": 381, "pending": 77, "prog exec time": 264, "reproducing": 0, "rpc recv": 12509321140, "rpc sent": 1766025232, "signal": 299518, "smash jobs": 0, "triage jobs": 0, "vm output": 39453049, "vm restarts [base]": 18, "vm restarts [new]": 139 }
2025/11/10 07:01:43 runner 2 connected
2025/11/10 07:01:47 runner 7 connected
2025/11/10 07:01:55 runner 6 connected
2025/11/10 07:02:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:02:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:03:08 runner 0 connected
2025/11/10 07:03:17 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:03:17 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:03:22 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:03:22 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:03:43 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:03:43 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:03:52 patched crashed: INFO: task hung in reg_process_self_managed_hints [need repro = true]
2025/11/10 07:03:52 scheduled a reproduction of 'INFO: task hung in reg_process_self_managed_hints'
2025/11/10 07:04:06 runner 5 connected
2025/11/10 07:04:20 runner 7 connected
2025/11/10 07:04:21 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 07:04:33 runner 0 connected
2025/11/10 07:04:42 runner 4 connected
2025/11/10 07:04:55 crash "possible deadlock in ext4_writepages" is already known
2025/11/10 07:04:55 base crash "possible deadlock in ext4_writepages" is to be ignored
2025/11/10 07:04:55 patched crashed: possible deadlock in ext4_writepages [need repro = false]
2025/11/10 07:05:10 runner 1 connected
2025/11/10 07:05:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:05:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:05:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:05:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:05:26 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:05:26 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:05:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:05:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:05:45 runner 3 connected
2025/11/10 07:05:57 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:05:57 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:05:58 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:05:58 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:06:04 runner 2 connected
2025/11/10 07:06:04 runner 8 connected
2025/11/10 07:06:14 base crash: WARNING in xfrm6_tunnel_net_exit
2025/11/10 07:06:17 runner 7 connected
2025/11/10 07:06:22 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 07:06:24 runner 4 connected
2025/11/10 07:06:27 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:06:27 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:06:32 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 07:06:38 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 35562, "comps overflows": 0, "corpus": 45586, "corpus [files]": 292, "corpus [symbols]": 19, "cover overflows": 35326, "coverage": 305923, "distributor delayed": 52986, "distributor undelayed": 52982, "distributor violated": 245, "exec candidate": 46291, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 138270, "exec total [new]": 268122, "exec triage": 141977, "executor restarts [base]": 224, "executor restarts [new]": 1017, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 308573, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46256, "no exec duration": 48289000000, "no exec requests": 385, "pending": 89, "prog exec time": 448, "reproducing": 0, "rpc recv": 13171088796, "rpc sent": 1901503288, "signal": 301271, "smash jobs": 0, "triage jobs": 0, "vm output": 41788330, "vm restarts [base]": 18, "vm restarts [new]": 153 }
2025/11/10 07:06:38 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:06:38 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:06:42 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 07:06:46 runner 5 connected
2025/11/10 07:06:47 runner 0 connected
2025/11/10 07:06:49 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:06:49 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:07:04 runner 1 connected
2025/11/10 07:07:06 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 07:07:07 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:07:07 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:07:10 runner 3 connected
2025/11/10 07:07:18 runner 2 connected
2025/11/10 07:07:22 runner 8 connected
2025/11/10 07:07:27 runner 1 connected
2025/11/10 07:07:31 runner 7 connected
2025/11/10 07:07:39 runner 4 connected
2025/11/10 07:07:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:07:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:07:56 runner 5 connected
2025/11/10 07:07:57 runner 0 connected
2025/11/10 07:08:05 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 07:08:32 base crash: INFO: task hung in user_get_super
2025/11/10 07:08:33 base crash: WARNING in folio_memcg
2025/11/10 07:08:37 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/11/10 07:08:41 runner 2 connected
2025/11/10 07:08:56 runner 1 connected
2025/11/10 07:09:02 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:09:02 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:09:16 patched crashed: WARNING in folio_memcg [need repro = false]
2025/11/10 07:09:21 runner 1 connected
2025/11/10 07:09:23 runner 2 connected
2025/11/10 07:09:29 runner 5 connected
2025/11/10 07:09:52 runner 4 connected
2025/11/10 07:10:05 runner 0 connected
2025/11/10 07:10:13 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:10:13 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:10:39 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:10:39 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:10:44 patched crashed: no output from test machine [need repro = false]
2025/11/10 07:10:46 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:10:46 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:11:04 runner 5 connected
2025/11/10 07:11:11 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:11:11 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:11:28 runner 1 connected
2025/11/10 07:11:35 runner 6 connected
2025/11/10 07:11:36 runner 0 connected
2025/11/10 07:11:38 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 35189, "comps overflows": 0, "corpus": 45868, "corpus [files]": 293, "corpus [symbols]": 19, "cover overflows": 38220, "coverage": 306674, "distributor delayed": 53460, "distributor undelayed": 53460, "distributor violated": 254, "exec candidate": 46664, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 24, "exec seeds": 0, "exec smash": 0, "exec total [base]": 147079, "exec total [new]": 284948, "exec triage": 142997, "executor restarts [base]": 247, "executor restarts [new]": 1089, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 309459, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46566, "no exec duration": 48372000000, "no exec requests": 389, "pending": 98, "prog exec time": 205, "reproducing": 0, "rpc recv": 14009636736, "rpc sent": 2047064136, "signal": 302002, "smash jobs": 0, "triage jobs": 0, "vm output": 44116995, "vm restarts [base]": 21, "vm restarts [new]": 172 }
2025/11/10 07:12:00 runner 2 connected
2025/11/10 07:12:17 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:12:17 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:12:26 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:12:26 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:12:36 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:12:36 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:13:06 runner 0 connected
2025/11/10 07:13:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:13:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:13:15 runner 2 connected
2025/11/10 07:13:25 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:13:25 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:13:26 runner 6 connected
2025/11/10 07:13:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:13:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:14:04 runner 8 connected
2025/11/10 07:14:13 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:14:13 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:14:16 runner 4 connected
2025/11/10 07:14:17 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 07:14:17 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:14:25 runner 5 connected
2025/11/10 07:14:41 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:14:41 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:15:01 base crash: kernel BUG in jfs_evict_inode
2025/11/10 07:15:03 runner 6 connected
2025/11/10 07:15:05 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 07:15:06 runner 7 connected
2025/11/10 07:15:31 runner 8 connected
2025/11/10 07:15:51 runner 2 connected
2025/11/10 07:15:56 runner 5 connected
2025/11/10 07:16:38 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 25974, "comps overflows": 0, "corpus": 46178, "corpus [files]": 294, "corpus [symbols]": 19, "cover overflows": 42751, "coverage": 307154, "distributor delayed": 54015, "distributor undelayed": 54015, "distributor violated": 254, "exec candidate": 55879, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 26, "exec seeds": 0, "exec smash": 0, "exec total [base]": 158223, "exec total [new]": 310745, "exec triage": 144267, "executor restarts [base]": 261, "executor restarts [new]": 1163, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 310111, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46938, "no exec duration": 48578000000, "no exec requests": 397, "pending": 107, "prog exec time": 206, "reproducing": 0, "rpc recv": 14709156800, "rpc sent": 2218666832, "signal": 302471, "smash jobs": 0, "triage jobs": 0, "vm output": 47017030, "vm restarts [base]": 22, "vm restarts [new]": 183 }
2025/11/10 07:19:08 triaged 90.9% of the corpus
2025/11/10 07:19:08 starting bug reproductions
2025/11/10 07:19:08 starting bug reproductions (max 6 VMs, 4 repros)
2025/11/10 07:19:08 start reproducing 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:19:08 start reproducing 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:19:08 start reproducing 'possible deadlock in unmap_vmas'
2025/11/10 07:19:08 start reproducing 'INFO: task hung in reg_process_self_managed_hints'
2025/11/10 07:20:20 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:20:22 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:20:34 base crash: WARNING in io_ring_exit_work
2025/11/10 07:21:23 runner 1 connected
2025/11/10 07:21:29 base crash: WARNING in xfrm_state_fini
2025/11/10 07:21:38 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 1286, "comps overflows": 0, "corpus": 46311, "corpus [files]": 294, "corpus [symbols]": 19, "cover overflows": 47537, "coverage": 307400, "distributor delayed": 54270, "distributor undelayed": 54270, "distributor violated": 259, "exec candidate": 80567, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 27, "exec seeds": 0, "exec smash": 0, "exec total [base]": 168690, "exec total [new]": 336220, "exec triage": 145055, "executor restarts [base]": 276, "executor restarts [new]": 1183, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 310498, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47139, "no exec duration": 48665000000, "no exec requests": 403, "pending": 103, "prog exec time": 197, "reproducing": 4, "rpc recv": 14986012676, "rpc sent": 2347820936, "signal": 302716, "smash jobs": 0, "triage jobs": 0, "vm output": 49078460, "vm restarts [base]": 23, "vm restarts [new]": 183 }
2025/11/10 07:21:44 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:21:49 base crash: WARNING in xfrm_state_fini
2025/11/10 07:22:13 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:22:20 runner 0 connected
2025/11/10 07:22:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:22:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:22:37 runner 1 connected
2025/11/10 07:23:02 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:23:03 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:23:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:23:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:23:20 runner 7 connected
2025/11/10 07:24:06 runner 6 connected
2025/11/10 07:24:16 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:24:17 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:24:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:24:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:24:56 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:24:58 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:25:02 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:25:02 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:25:12 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:25:12 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:25:30 runner 6 connected
2025/11/10 07:25:32 reproducing crash 'possible deadlock in move_hugetlb_page_tables': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:25:32 repro finished 'possible deadlock in move_hugetlb_page_tables', repro=true crepro=false desc='possible deadlock in move_hugetlb_page_tables' hub=false from_dashboard=false
2025/11/10 07:25:32 found repro for "possible deadlock in move_hugetlb_page_tables" (orig title: "-SAME-", reliability: 1), took 6.39 minutes
2025/11/10 07:25:32 "possible deadlock in move_hugetlb_page_tables": saved crash log into 1762759532.crash.log
2025/11/10 07:25:32 "possible deadlock in move_hugetlb_page_tables": saved repro log into 1762759532.repro.log
2025/11/10 07:25:32 start reproducing 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 07:25:40 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:25:51 runner 8 connected
2025/11/10 07:26:02 runner 7 connected
2025/11/10 07:26:13 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:26:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:26:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:26:17 base crash: possible deadlock in ocfs2_xattr_set
2025/11/10 07:26:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 14, "corpus": 46343, "corpus [files]": 297, "corpus [symbols]": 20, "cover overflows": 48474, "coverage": 307489, "distributor delayed": 54378, "distributor undelayed": 54370, "distributor violated": 259, "exec candidate": 81853, "exec collide": 812, "exec fuzz": 1445, "exec gen": 66, "exec hints": 473, "exec inject": 0, "exec minimize": 285, "exec retries": 27, "exec seeds": 69, "exec smash": 430, "exec total [base]": 175629, "exec total [new]": 341271, "exec triage": 145239, "executor restarts [base]": 307, "executor restarts [new]": 1217, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 3, "max signal": 310645, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 182, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47205, "no exec duration": 54241000000, "no exec requests": 413, "pending": 108, "prog exec time": 398, "reproducing": 4, "rpc recv": 15428358212, "rpc sent": 2440003768, "signal": 302769, "smash jobs": 6, "triage jobs": 8, "vm output": 51105284, "vm restarts [base]": 25, "vm restarts [new]": 188 }
2025/11/10 07:26:54 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:26:54 repro finished 'possible deadlock in hugetlb_change_protection', repro=true crepro=false desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 07:26:54 found repro for "possible deadlock in unmap_vmas" (orig title: "possible deadlock in hugetlb_change_protection", reliability: 1), took 7.37 minutes
2025/11/10 07:26:54 "possible deadlock in unmap_vmas": saved crash log into 1762759614.crash.log
2025/11/10 07:26:54 "possible deadlock in unmap_vmas": saved repro log into 1762759614.repro.log
2025/11/10 07:26:54 start reproducing 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:27:04 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:27:04 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:27:05 runner 8 connected
2025/11/10 07:27:19 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/11/10 07:27:23 attempt #0 to run "possible deadlock in move_hugetlb_page_tables" on base: did not crash
2025/11/10 07:27:28 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:27:53 runner 7 connected
2025/11/10 07:28:08 runner 6 connected
2025/11/10 07:28:24 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:28:24 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:28:58 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 07:29:14 runner 8 connected
2025/11/10 07:29:16 attempt #1 to run "possible deadlock in move_hugetlb_page_tables" on base: did not crash
2025/11/10 07:29:19 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:29:19 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:29:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:29:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:29:37 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:30:08 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:30:09 runner 6 connected
2025/11/10 07:30:21 runner 7 connected
2025/11/10 07:30:48 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:30:48 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:30:51 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 07:30:55 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:31:09 attempt #2 to run "possible deadlock in move_hugetlb_page_tables" on base: did not crash
2025/11/10 07:31:10 patched-only: possible deadlock in move_hugetlb_page_tables
2025/11/10 07:31:10 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables (full)'
2025/11/10 07:31:30 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 07:31:36 runner 6 connected
2025/11/10 07:31:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 17, "corpus": 46363, "corpus [files]": 297, "corpus [symbols]": 20, "cover overflows": 48979, "coverage": 307521, "distributor delayed": 54447, "distributor undelayed": 54445, "distributor violated": 259, "exec candidate": 81853, "exec collide": 1323, "exec fuzz": 2374, "exec gen": 107, "exec hints": 1146, "exec inject": 0, "exec minimize": 630, "exec retries": 27, "exec seeds": 141, "exec smash": 991, "exec total [base]": 178675, "exec total [new]": 344505, "exec triage": 145343, "executor restarts [base]": 316, "executor restarts [new]": 1258, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 2, "max signal": 310714, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 409, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47243, "no exec duration": 57559000000, "no exec requests": 420, "pending": 112, "prog exec time": 264, "reproducing": 4, "rpc recv": 15733629548, "rpc sent": 2504495792, "signal": 302801, "smash jobs": 1, "triage jobs": 5, "vm output": 54651481, "vm restarts [base]": 25, "vm restarts [new]": 195 }
2025/11/10 07:31:46 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:32:00 runner 0 connected
2025/11/10 07:32:12 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:32:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:32:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:32:19 runner 8 connected
2025/11/10 07:32:45 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 07:32:45 patched-only: possible deadlock in unmap_vmas
2025/11/10 07:32:45 scheduled a reproduction of 'possible deadlock in unmap_vmas (full)'
2025/11/10 07:32:47 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:32:47 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:33:04 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:33:04 repro finished 'possible deadlock in hugetlb_change_protection', repro=true crepro=false desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 07:33:04 found repro for "possible deadlock in unmap_vmas" (orig title: "possible deadlock in hugetlb_change_protection", reliability: 1), took 6.16 minutes
2025/11/10 07:33:04 start reproducing 'possible deadlock in move_hugetlb_page_tables (full)'
2025/11/10 07:33:04 "possible deadlock in unmap_vmas": saved crash log into 1762759984.crash.log
2025/11/10 07:33:04 "possible deadlock in unmap_vmas": saved repro log into 1762759984.repro.log
2025/11/10 07:33:06 runner 7 connected
2025/11/10 07:33:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:33:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:33:35 runner 1 connected
2025/11/10 07:33:36 runner 6 connected
2025/11/10 07:33:36 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:33:47 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:33:47 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:33:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:33:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:34:21 runner 8 connected
2025/11/10 07:34:35 runner 7 connected
2025/11/10 07:34:40 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 07:34:40 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 07:34:47 runner 6 connected
2025/11/10 07:34:56 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 07:35:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:35:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:35:28 runner 8 connected
2025/11/10 07:35:58 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:36:06 runner 6 connected
2025/11/10 07:36:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 22, "corpus": 46373, "corpus [files]": 298, "corpus [symbols]": 21, "cover overflows": 49506, "coverage": 307538, "distributor delayed": 54499, "distributor undelayed": 54498, "distributor violated": 259, "exec candidate": 81853, "exec collide": 1778, "exec fuzz": 3335, "exec gen": 160, "exec hints": 1527, "exec inject": 0, "exec minimize": 984, "exec retries": 27, "exec seeds": 169, "exec smash": 1195, "exec total [base]": 182342, "exec total [new]": 347024, "exec triage": 145418, "executor restarts [base]": 348, "executor restarts [new]": 1309, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 4, "max signal": 310747, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 642, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47272, "no exec duration": 169470000000, "no exec requests": 783, "pending": 119, "prog exec time": 519, "reproducing": 4, "rpc recv": 16231903332, "rpc sent": 2584331704, "signal": 302817, "smash jobs": 2, "triage jobs": 9, "vm output": 56667500, "vm restarts [base]": 27, "vm restarts [new]": 203 }
2025/11/10 07:36:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:36:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:36:43 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:36:50 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 07:37:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:37:15 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/11/10 07:37:22 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:37:33 runner 8 connected
2025/11/10 07:38:04 runner 6 connected
2025/11/10 07:38:09 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:38:14 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:38:14 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:38:26 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:38:26 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:38:26 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:38:41 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 07:38:41 patched-only: possible deadlock in unmap_vmas
2025/11/10 07:38:41 scheduled a reproduction of 'possible deadlock in unmap_vmas (full)'
2025/11/10 07:39:02 runner 8 connected
2025/11/10 07:39:07 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:39:15 runner 6 connected
2025/11/10 07:39:30 runner 0 connected
2025/11/10 07:39:31 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:40:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:40:06 base crash: WARNING in rate_control_rate_init
2025/11/10 07:40:17 base crash: WARNING in rate_control_rate_init
2025/11/10 07:40:32 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:40:35 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 07:40:35 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 07:40:56 runner 0 connected
2025/11/10 07:41:01 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 07:41:01 repro finished 'possible deadlock in move_hugetlb_page_tables (full)', repro=true crepro=true desc='possible deadlock in move_hugetlb_page_tables' hub=false from_dashboard=false
2025/11/10 07:41:01 found repro for "possible deadlock in move_hugetlb_page_tables" (orig title: "-SAME-", reliability: 1), took 7.94 minutes
2025/11/10 07:41:01 "possible deadlock in move_hugetlb_page_tables": saved crash log into 1762760461.crash.log
2025/11/10
07:41:01 start reproducing 'possible deadlock in unmap_vmas (full)' 2025/11/10 07:41:01 "possible deadlock in move_hugetlb_page_tables": saved repro log into 1762760461.repro.log 2025/11/10 07:41:01 failed to recv *flatrpc.InfoRequestRawT: unexpected EOF 2025/11/10 07:41:05 runner 2 connected 2025/11/10 07:41:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:41:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:41:24 runner 8 connected 2025/11/10 07:41:28 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:41:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 54, "corpus": 46393, "corpus [files]": 299, "corpus [symbols]": 21, "cover overflows": 50209, "coverage": 307570, "distributor delayed": 54554, "distributor undelayed": 54549, "distributor violated": 259, "exec candidate": 81853, "exec collide": 2272, "exec fuzz": 4398, "exec gen": 221, "exec hints": 2464, "exec inject": 0, "exec minimize": 1390, "exec retries": 27, "exec seeds": 237, "exec smash": 1748, "exec total [base]": 185716, "exec total [new]": 350680, "exec triage": 145493, "executor restarts [base]": 360, "executor restarts [new]": 1336, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 7, "max signal": 310797, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 867, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47303, "no exec duration": 404395000000, "no exec requests": 1412, "pending": 124, "prog exec time": 399, "reproducing": 4, "rpc recv": 16608342060, "rpc sent": 
2652726256, "signal": 302846, "smash jobs": 2, "triage jobs": 8, "vm output": 61070364, "vm restarts [base]": 30, "vm restarts [new]": 208 } 2025/11/10 07:41:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:41:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:41:58 runner 7 connected 2025/11/10 07:42:32 runner 6 connected 2025/11/10 07:42:53 attempt #0 to run "possible deadlock in move_hugetlb_page_tables" on base: did not crash 2025/11/10 07:42:59 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 07:43:03 base crash: lost connection to test machine 2025/11/10 07:43:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:43:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:43:11 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:43:47 runner 7 connected 2025/11/10 07:43:52 runner 1 connected 2025/11/10 07:43:58 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:44:01 runner 6 connected 2025/11/10 07:44:06 repro finished 'possible deadlock in unmap_vmas', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/10 07:44:06 failed repro for "possible deadlock in unmap_vmas", err=%!s() 2025/11/10 07:44:06 start reproducing 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:44:06 "possible deadlock in unmap_vmas": saved crash log into 1762760646.crash.log 2025/11/10 07:44:06 "possible deadlock in unmap_vmas": saved repro 
log into 1762760646.repro.log 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:06 reproduction of "possible 
deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 07:44:16 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:44:45 attempt #1 to run "possible deadlock in move_hugetlb_page_tables" on base: did not crash 2025/11/10 07:44:47 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:44:47 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:45:01 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:45:21 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:45:37 runner 8 connected 2025/11/10 07:45:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:46:17 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/10 07:46:18 base crash: kernel BUG in txUnlock 2025/11/10 07:46:24 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:46:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 78, "corpus": 
46409, "corpus [files]": 300, "corpus [symbols]": 21, "cover overflows": 50725, "coverage": 307630, "distributor delayed": 54608, "distributor undelayed": 54606, "distributor violated": 259, "exec candidate": 81853, "exec collide": 2616, "exec fuzz": 5067, "exec gen": 261, "exec hints": 2928, "exec inject": 0, "exec minimize": 1830, "exec retries": 27, "exec seeds": 284, "exec smash": 2028, "exec total [base]": 187961, "exec total [new]": 353052, "exec triage": 145579, "executor restarts [base]": 401, "executor restarts [new]": 1398, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 4, "max signal": 310929, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1168, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47333, "no exec duration": 404553000000, "no exec requests": 1413, "pending": 107, "prog exec time": 0, "reproducing": 4, "rpc recv": 16926506204, "rpc sent": 2738917688, "signal": 302923, "smash jobs": 3, "triage jobs": 4, "vm output": 64078977, "vm restarts [base]": 31, "vm restarts [new]": 213 } 2025/11/10 07:46:38 attempt #2 to run "possible deadlock in move_hugetlb_page_tables" on base: did not crash 2025/11/10 07:46:38 patched-only: possible deadlock in move_hugetlb_page_tables 2025/11/10 07:46:43 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:47:06 runner 8 connected 2025/11/10 07:47:06 runner 2 connected 2025/11/10 07:47:28 runner 0 connected 2025/11/10 07:47:33 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/11/10 07:47:38 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:48:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:48:23 runner 7 connected 2025/11/10 07:48:57 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:48:57 repro finished 'possible deadlock in unmap_vmas (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false 2025/11/10 07:48:57 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 7.93 minutes 2025/11/10 07:48:57 start reproducing 'possible deadlock in unmap_vmas (full)' 2025/11/10 07:48:57 "possible deadlock in unmap_vmas": saved crash log into 1762760937.crash.log 2025/11/10 07:48:57 "possible deadlock in unmap_vmas": saved repro log into 1762760937.repro.log 2025/11/10 07:49:12 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:49:12 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:49:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:49:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:49:22 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:49:23 base crash: 
kernel BUG in jfs_evict_inode 2025/11/10 07:49:56 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:49:56 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:50:02 runner 6 connected 2025/11/10 07:50:07 runner 8 connected 2025/11/10 07:50:12 runner 2 connected 2025/11/10 07:50:44 runner 7 connected 2025/11/10 07:50:51 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 07:50:56 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:51:03 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/10 07:51:13 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:51:15 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:51:15 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:51:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 83, "corpus": 46415, "corpus [files]": 300, "corpus [symbols]": 21, "cover overflows": 51057, "coverage": 307643, "distributor delayed": 54631, "distributor undelayed": 54627, "distributor violated": 259, "exec candidate": 81853, "exec collide": 3022, "exec fuzz": 5848, "exec gen": 302, "exec hints": 3419, "exec inject": 0, "exec minimize": 1944, "exec retries": 27, "exec seeds": 303, "exec smash": 2113, "exec total [base]": 189431, "exec total [new]": 355013, "exec triage": 145607, "executor restarts [base]": 453, "executor restarts [new]": 1458, "fault jobs": 0, "fuzzer jobs": 
11, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 6, "max signal": 310966, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1250, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47346, "no exec duration": 406725000000, "no exec requests": 1417, "pending": 110, "prog exec time": 300, "reproducing": 4, "rpc recv": 17254022392, "rpc sent": 2799188568, "signal": 302935, "smash jobs": 1, "triage jobs": 4, "vm output": 67176808, "vm restarts [base]": 34, "vm restarts [new]": 218 } 2025/11/10 07:51:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:51:53 runner 6 connected 2025/11/10 07:51:59 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:52:03 runner 8 connected 2025/11/10 07:52:16 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:52:42 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 07:52:46 base crash: WARNING in xfrm_state_fini 2025/11/10 07:52:46 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:52:46 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:52:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:52:57 
scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:53:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:53:19 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:53:34 runner 1 connected 2025/11/10 07:53:35 runner 6 connected 2025/11/10 07:53:48 runner 7 connected 2025/11/10 07:53:58 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:53:58 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:54:04 crash "kernel BUG in ocfs2_write_cluster_by_desc" is already known 2025/11/10 07:54:04 base crash "kernel BUG in ocfs2_write_cluster_by_desc" is to be ignored 2025/11/10 07:54:04 patched crashed: kernel BUG in ocfs2_write_cluster_by_desc [need repro = false] 2025/11/10 07:54:07 base crash: kernel BUG in ocfs2_write_cluster_by_desc 2025/11/10 07:54:11 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:54:36 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 07:54:36 patched-only: possible deadlock in unmap_vmas 2025/11/10 07:54:46 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:54:47 runner 
8 connected 2025/11/10 07:54:53 runner 6 connected 2025/11/10 07:54:56 runner 2 connected 2025/11/10 07:55:17 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:55:19 crash "possible deadlock in ocfs2_init_acl" is already known 2025/11/10 07:55:19 base crash "possible deadlock in ocfs2_init_acl" is to be ignored 2025/11/10 07:55:19 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 07:55:25 runner 0 connected 2025/11/10 07:55:47 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 07:55:47 repro finished 'possible deadlock in unmap_vmas (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false 2025/11/10 07:55:47 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 6.84 minutes 2025/11/10 07:55:47 "possible deadlock in unmap_vmas": saved crash log into 1762761347.crash.log 2025/11/10 07:55:47 "possible deadlock in unmap_vmas": saved repro log into 1762761347.repro.log 2025/11/10 07:56:07 runner 1 connected 2025/11/10 07:56:08 runner 6 connected 2025/11/10 07:56:28 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 07:56:31 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 07:56:31 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 07:56:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 89, "corpus": 46430, "corpus [files]": 300, "corpus [symbols]": 21, "cover overflows": 51563, "coverage": 307739, "distributor 
delayed": 54691, "distributor undelayed": 54691, "distributor violated": 259, "exec candidate": 81853, "exec collide": 3581, "exec fuzz": 6888, "exec gen": 372, "exec hints": 4229, "exec inject": 0, "exec minimize": 2186, "exec retries": 27, "exec seeds": 360, "exec smash": 2495, "exec total [base]": 191976, "exec total [new]": 358265, "exec triage": 145691, "executor restarts [base]": 489, "executor restarts [new]": 1507, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 3, "hints jobs": 5, "max signal": 311105, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1430, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47375, "no exec duration": 409725000000, "no exec requests": 1420, "pending": 114, "prog exec time": 464, "reproducing": 3, "rpc recv": 17717646756, "rpc sent": 2872211472, "signal": 303028, "smash jobs": 3, "triage jobs": 4, "vm output": 70623712, "vm restarts [base]": 37, "vm restarts [new]": 226 } 2025/11/10 07:56:40 base crash: possible deadlock in hfsplus_block_allocate 2025/11/10 07:56:58 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 07:57:16 runner 1 connected 2025/11/10 07:57:20 runner 7 connected 2025/11/10 07:57:28 runner 2 connected 2025/11/10 07:57:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 07:57:41 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 07:57:45 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 07:57:46 runner 8 connected 2025/11/10 07:57:49 base crash: possible deadlock in ocfs2_init_acl 2025/11/10 07:58:14 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:58:14 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:58:24 runner 0 connected 2025/11/10 
07:58:24 runner 6 connected 2025/11/10 07:58:35 runner 1 connected 2025/11/10 07:58:38 runner 2 connected 2025/11/10 07:58:42 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 07:58:42 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 07:58:54 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/11/10 07:58:54 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 07:59:03 runner 8 connected 2025/11/10 07:59:08 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/10 07:59:30 runner 1 connected 2025/11/10 07:59:34 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 07:59:42 runner 7 connected 2025/11/10 07:59:44 runner 6 connected 2025/11/10 07:59:57 runner 0 connected 2025/11/10 08:00:13 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:00:13 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:00:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:00:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:00:19 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:00:19 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:00:25 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/11/10 08:00:25 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored 2025/11/10 08:00:25 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/10 08:00:32 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:00:32 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:00:45 
base crash: possible deadlock in ocfs2_init_acl 2025/11/10 08:01:02 runner 7 connected 2025/11/10 08:01:07 runner 1 connected 2025/11/10 08:01:08 runner 8 connected 2025/11/10 08:01:15 runner 0 connected 2025/11/10 08:01:20 runner 6 connected 2025/11/10 08:01:22 base crash: WARNING in folio_memcg 2025/11/10 08:01:26 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash 2025/11/10 08:01:26 patched-only: possible deadlock in unmap_vmas 2025/11/10 08:01:27 runner 2 connected 2025/11/10 08:01:37 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true] 2025/11/10 08:01:37 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables' 2025/11/10 08:01:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 103, "corpus": 46446, "corpus [files]": 300, "corpus [symbols]": 21, "cover overflows": 52130, "coverage": 307760, "distributor delayed": 54749, "distributor undelayed": 54749, "distributor violated": 259, "exec candidate": 81853, "exec collide": 4262, "exec fuzz": 8143, "exec gen": 436, "exec hints": 5032, "exec inject": 0, "exec minimize": 2497, "exec retries": 28, "exec seeds": 403, "exec smash": 2892, "exec total [base]": 193990, "exec total [new]": 361902, "exec triage": 145765, "executor restarts [base]": 512, "executor restarts [new]": 1570, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 5, "max signal": 311168, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1630, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47405, "no exec duration": 412972000000, "no exec requests": 1426, "pending": 121, "prog exec time": 377, "reproducing": 3, "rpc recv": 18405293740, "rpc sent": 2986823840, "signal": 303049, "smash jobs": 2, "triage jobs": 7, "vm output": 73706521, "vm restarts 
[base]": 42, "vm restarts [new]": 240 }
2025/11/10 08:01:44 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:01:44 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:01:45 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:01:45 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:01:54 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:01:54 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:02:06 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:02:06 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:02:07 runner 0 connected
2025/11/10 08:02:11 runner 1 connected
2025/11/10 08:02:14 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 08:02:25 runner 1 connected
2025/11/10 08:02:32 runner 8 connected
2025/11/10 08:02:34 runner 0 connected
2025/11/10 08:02:44 runner 6 connected
2025/11/10 08:02:44 repro finished 'INFO: task hung in reg_process_self_managed_hints', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 08:02:44 failed repro for "INFO: task hung in reg_process_self_managed_hints", err=%!s()
2025/11/10 08:02:44 "INFO: task hung in reg_process_self_managed_hints": saved crash log into 1762761764.crash.log
2025/11/10 08:02:44 "INFO: task hung in reg_process_self_managed_hints": saved repro log into 1762761764.repro.log
2025/11/10 08:02:45 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 08:02:45 base crash: possible deadlock in ocfs2_xattr_set
2025/11/10 08:02:49 runner 7 connected
2025/11/10 08:02:55 runner 2 connected
2025/11/10 08:03:15 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 08:03:15 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 08:03:34 runner 0 connected
2025/11/10 08:03:35 runner 1 connected
2025/11/10 08:03:45 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 08:04:05 runner 7 connected
2025/11/10 08:04:34 runner 8 connected
2025/11/10 08:04:59 runner 2 connected
2025/11/10 08:05:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:05:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:05:27 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:05:27 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:05:48 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:05:48 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:05:50 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:05:50 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:05:57 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 08:05:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:05:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:06:05 runner 8 connected
2025/11/10 08:06:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:06:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:06:17 runner 7 connected
2025/11/10 08:06:38 runner 0 connected
2025/11/10 08:06:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 142, "corpus": 46465, "corpus [files]": 300, "corpus [symbols]": 21, "cover overflows": 53289, "coverage": 307784, "distributor delayed": 54801, "distributor undelayed": 54796, "distributor violated": 259, "exec candidate": 81853, "exec collide": 5441, "exec fuzz": 10340, "exec gen": 550, "exec hints": 5335, "exec inject": 0, "exec minimize": 3018, "exec retries": 29, "exec seeds": 457, "exec smash": 3271, "exec total [base]": 197663, "exec total [new]": 366725, "exec triage": 145842, "executor restarts [base]": 546, "executor restarts [new]": 1630, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 2, "max signal": 311281, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1946, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47436, "no exec duration": 419871000000, "no exec requests": 1437, "pending": 132, "prog exec time": 470, "reproducing": 2, "rpc recv": 19074199980, "rpc sent": 3193620488, "signal": 303073, "smash jobs": 2, "triage jobs": 7, "vm output": 78026035, "vm restarts [base]": 47, "vm restarts [new]": 251 }
2025/11/10 08:06:39 runner 2 connected
2025/11/10 08:06:46 runner 1 connected
2025/11/10 08:06:47 runner 1 connected
2025/11/10 08:07:00 base crash: WARNING in xfrm_state_fini
2025/11/10 08:07:01 runner 6 connected
2025/11/10 08:07:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:07:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:07:28 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:07:28 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:07:38 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:07:38 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:07:46 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 08:07:48 runner 2 connected
2025/11/10 08:07:50 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:07:50 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:08:06 runner 7 connected
2025/11/10 08:08:08 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 08:08:17 runner 2 connected
2025/11/10 08:08:26 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 08:08:26 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 08:08:26 start reproducing 'possible deadlock in unmap_vmas'
2025/11/10 08:08:27 runner 1 connected
2025/11/10 08:08:36 runner 6 connected
2025/11/10 08:08:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:08:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:08:49 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:08:49 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:08:54 base crash: WARNING in xfrm_state_fini
2025/11/10 08:08:57 runner 8 connected
2025/11/10 08:09:15 runner 7 connected
2025/11/10 08:09:26 runner 2 connected
2025/11/10 08:09:39 runner 1 connected
2025/11/10 08:09:44 runner 1 connected
2025/11/10 08:10:35 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 08:10:37 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:10:37 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:10:38 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:10:38 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:10:40 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/11/10 08:11:04 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 08:11:14 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:11:14 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:11:25 runner 8 connected
2025/11/10 08:11:25 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:11:25 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:11:27 runner 6 connected
2025/11/10 08:11:29 runner 1 connected
2025/11/10 08:11:36 runner 1 connected
2025/11/10 08:11:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 152, "corpus": 46482, "corpus [files]": 302, "corpus [symbols]": 21, "cover overflows": 53962, "coverage": 307874, "distributor delayed": 54875, "distributor undelayed": 54872, "distributor violated": 259, "exec candidate": 81853, "exec collide": 6288, "exec fuzz": 11985, "exec gen": 647, "exec hints": 5818, "exec inject": 0, "exec minimize": 3474, "exec retries": 29, "exec seeds": 510, "exec smash": 3641, "exec total [base]": 202699, "exec total [new]": 370802, "exec triage": 145962, "executor restarts [base]": 580, "executor restarts [new]": 1702, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 2, "max signal": 311381, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2244, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47478, "no exec duration": 420246000000, "no exec requests": 1441, "pending": 142, "prog exec time": 802, "reproducing": 3, "rpc recv": 19861659560, "rpc sent": 3344154472, "signal": 303138, "smash jobs": 2, "triage jobs": 5, "vm output": 81095999, "vm restarts [base]": 51, "vm restarts [new]": 265 }
2025/11/10 08:11:46 patched crashed: possible deadlock in unmap_vmas [need repro = false]
2025/11/10 08:11:50 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:11:50 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:11:53 runner 0 connected
2025/11/10 08:12:03 runner 2 connected
2025/11/10 08:12:15 runner 7 connected
2025/11/10 08:12:20 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/11/10 08:12:23 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:12:23 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:12:35 runner 6 connected
2025/11/10 08:12:39 runner 8 connected
2025/11/10 08:12:40 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:12:40 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:12:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:12:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:13:10 runner 1 connected
2025/11/10 08:13:11 runner 1 connected
2025/11/10 08:13:29 runner 2 connected
2025/11/10 08:13:29 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:13:29 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:13:47 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:13:47 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:13:49 runner 8 connected
2025/11/10 08:14:19 runner 6 connected
2025/11/10 08:14:22 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:14:22 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:14:37 runner 1 connected
2025/11/10 08:14:55 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 08:14:55 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 08:15:06 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 08:15:11 runner 8 connected
2025/11/10 08:15:21 patched crashed: kernel BUG in dbFindLeaf [need repro = true]
2025/11/10 08:15:21 scheduled a reproduction of 'kernel BUG in dbFindLeaf'
2025/11/10 08:15:21 start reproducing 'kernel BUG in dbFindLeaf'
2025/11/10 08:15:43 base crash: kernel BUG in dbFindLeaf
2025/11/10 08:15:46 runner 6 connected
2025/11/10 08:15:51 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:15:55 runner 7 connected
2025/11/10 08:15:56 base crash: possible deadlock in ocfs2_init_acl
2025/11/10 08:16:07 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:16:07 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:16:23 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:16:23 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:16:32 runner 2 connected
2025/11/10 08:16:34 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:16:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 164, "corpus": 46493, "corpus [files]": 302, "corpus [symbols]": 21, "cover overflows": 54700, "coverage": 307891, "distributor delayed": 54935, "distributor undelayed": 54933, "distributor violated": 259, "exec candidate": 81853, "exec collide": 7063, "exec fuzz": 13456, "exec gen": 722, "exec hints": 6332, "exec inject": 0, "exec minimize": 3755, "exec retries": 29, "exec seeds": 543, "exec smash": 3872, "exec total [base]": 208440, "exec total [new]": 374252, "exec triage": 146032, "executor restarts [base]": 615, "executor restarts [new]": 1763, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 3, "max signal": 311419, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2484, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47505, "no exec duration": 422782000000, "no exec requests": 1451, "pending": 152, "prog exec time": 255, "reproducing": 4, "rpc recv": 20612640980, "rpc sent": 3456892328, "signal": 303155, "smash jobs": 1, "triage jobs": 3, "vm output": 84611465, "vm restarts [base]": 54, "vm restarts [new]": 277 }
2025/11/10 08:16:46 runner 1 connected
2025/11/10 08:16:56 runner 6 connected
2025/11/10 08:17:03 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:17:12 runner 7 connected
2025/11/10 08:17:39 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:17:39 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:17:40 base crash: lost connection to test machine
2025/11/10 08:17:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:17:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:17:48 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:17:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:17:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:18:14 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:18:21 runner 2 connected
2025/11/10 08:18:21 runner 7 connected
2025/11/10 08:18:31 runner 6 connected
2025/11/10 08:18:33 runner 8 connected
2025/11/10 08:18:36 repro finished 'possible deadlock in hugetlb_change_protection', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 08:18:36 failed repro for "possible deadlock in hugetlb_change_protection", err=%!s()
2025/11/10 08:18:36 "possible deadlock in hugetlb_change_protection": saved crash log into 1762762716.crash.log
2025/11/10 08:18:36 "possible deadlock in hugetlb_change_protection": saved repro log into 1762762716.repro.log
2025/11/10 08:18:36 start reproducing 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:18:39 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:18:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:18:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:19:05 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:19:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:19:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:19:23 base crash: possible deadlock in ocfs2_setattr
2025/11/10 08:19:23 patched crashed: possible deadlock in ocfs2_setattr [need repro = false]
2025/11/10 08:19:27 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:19:41 runner 6 connected
2025/11/10 08:19:50 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:19:51 runner 7 connected
2025/11/10 08:19:52 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:20:11 runner 8 connected
2025/11/10 08:20:12 runner 2 connected
2025/11/10 08:20:43 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:20:49 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:20:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:20:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:21:17 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:21:17 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:21:17 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:21:35 crash "general protection fault in pcl818_ai_cancel" is already known
2025/11/10 08:21:35 base crash "general protection fault in pcl818_ai_cancel" is to be ignored
2025/11/10 08:21:35 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/10 08:21:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 167, "corpus": 46497, "corpus [files]": 303, "corpus [symbols]": 21, "cover overflows": 55179, "coverage": 307897, "distributor delayed": 54966, "distributor undelayed": 54966, "distributor violated": 260, "exec candidate": 81853, "exec collide": 7574, "exec fuzz": 14488, "exec gen": 774, "exec hints": 6725, "exec inject": 0, "exec minimize": 3847, "exec retries": 29, "exec seeds": 555, "exec smash": 3956, "exec total [base]": 210966, "exec total [new]": 376476, "exec triage": 146083, "executor restarts [base]": 639, "executor restarts [new]": 1803, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 2, "max signal": 311630, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2549, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47522, "no exec duration": 739378000000, "no exec requests": 2310, "pending": 158, "prog exec time": 564, "reproducing": 4, "rpc recv": 21060861152, "rpc sent": 3517106784, "signal": 303161, "smash jobs": 1, "triage jobs": 2, "vm output": 86765770, "vm restarts [base]": 57, "vm restarts [new]": 285 }
2025/11/10 08:21:40 runner 7 connected
2025/11/10 08:21:58 runner 8 connected
2025/11/10 08:22:00 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:22:00 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:22:24 runner 6 connected
2025/11/10 08:22:33 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:22:58 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:23:15 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:23:28 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:23:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:23:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:23:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:23:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:23:52 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:23:53 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:23:53 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:24:25 runner 8 connected
2025/11/10 08:24:27 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:24:30 runner 7 connected
2025/11/10 08:24:34 runner 6 connected
2025/11/10 08:24:52 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:25:00 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:25:00 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:25:09 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:25:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:25:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:25:30 patched crashed: possible deadlock in remove_inode_hugepages [need repro = true]
2025/11/10 08:25:30 scheduled a reproduction of 'possible deadlock in remove_inode_hugepages'
2025/11/10 08:25:39 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:25:43 runner 7 connected
2025/11/10 08:25:59 runner 8 connected
2025/11/10 08:26:03 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:26:04 reproducing crash 'possible deadlock in hugetlb_change_protection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:26:04 repro finished 'possible deadlock in hugetlb_change_protection', repro=true crepro=false desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 08:26:04 found repro for "possible deadlock in unmap_vmas" (orig title: "possible deadlock in hugetlb_change_protection", reliability: 1), took 7.46 minutes
2025/11/10 08:26:04 start reproducing 'possible deadlock in remove_inode_hugepages'
2025/11/10 08:26:04 "possible deadlock in unmap_vmas": saved crash log into 1762763164.crash.log
2025/11/10 08:26:04 "possible deadlock in unmap_vmas": saved repro log into 1762763164.repro.log
2025/11/10 08:26:12 runner 6 connected
2025/11/10 08:26:21 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:26:21 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:26:33 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:26:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 170, "corpus": 46506, "corpus [files]": 303, "corpus [symbols]": 21, "cover overflows": 55593, "coverage": 307912, "distributor delayed": 54993, "distributor undelayed": 54993, "distributor violated": 260, "exec candidate": 81853, "exec collide": 8003, "exec fuzz": 15367, "exec gen": 821, "exec hints": 6890, "exec inject": 0, "exec minimize": 4085, "exec retries": 29, "exec seeds": 573, "exec smash": 4131, "exec total [base]": 212968, "exec total [new]": 378473, "exec triage": 146125, "executor restarts [base]": 652, "executor restarts [new]": 1851, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 2, "max signal": 311653, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2693, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47538, "no exec duration": 1069715000000, "no exec requests": 3166, "pending": 163, "prog exec time": 15013, "reproducing": 4, "rpc recv": 21426632948, "rpc sent": 3576838792, "signal": 303176, "smash jobs": 1, "triage jobs": 1, "vm output": 89999132, "vm restarts [base]": 57, "vm restarts [new]": 294 }
2025/11/10 08:26:43 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:26:43 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:26:54 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:26:54 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:26:57 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:27:10 runner 7 connected
2025/11/10 08:27:16 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:27:29 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 08:27:29 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 08:27:33 runner 6 connected
2025/11/10 08:27:36 runner 8 connected
2025/11/10 08:27:55 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 08:28:14 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:28:19 runner 7 connected
2025/11/10 08:28:21 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:28:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:28:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:28:39 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:28:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:28:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:29:19 runner 8 connected
2025/11/10 08:29:30 runner 7 connected
2025/11/10 08:29:30 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:29:40 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:29:47 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 08:30:01 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:30:26 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:30:49 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:31:00 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:31:00 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:31:20 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:31:34 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:31:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 174, "corpus": 46515, "corpus [files]": 304, "corpus [symbols]": 21, "cover overflows": 56205, "coverage": 307922, "distributor delayed": 55026, "distributor undelayed": 55023, "distributor violated": 260, "exec candidate": 81853, "exec collide": 8783, "exec fuzz": 16793, "exec gen": 901, "exec hints": 7173, "exec inject": 0, "exec minimize": 4330, "exec retries": 29, "exec seeds": 601, "exec smash": 4335, "exec total [base]": 215829, "exec total [new]": 381576, "exec triage": 146179, "executor restarts [base]": 673, "executor restarts [new]": 1896, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 3, "max signal": 311699, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2868, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47561, "no exec duration": 1201283000000, "no exec requests": 3479, "pending": 169, "prog exec time": 395, "reproducing": 4, "rpc recv": 21746796460, "rpc sent": 3647651448, "signal": 303186, "smash jobs": 1, "triage jobs": 3, "vm output": 93132825, "vm restarts [base]": 57, "vm restarts [new]": 300 }
2025/11/10 08:31:39 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 08:31:39 patched-only: possible deadlock in unmap_vmas
2025/11/10 08:31:39 scheduled a reproduction of 'possible deadlock in unmap_vmas (full)'
2025/11/10 08:31:45 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:31:45 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:31:50 runner 6 connected
2025/11/10 08:31:52 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:32:18 repro finished 'possible deadlock in unmap_vmas', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 08:32:18 failed repro for "possible deadlock in unmap_vmas", err=%!s()
2025/11/10 08:32:18 "possible deadlock in unmap_vmas": saved crash log into 1762763538.crash.log
2025/11/10 08:32:18 start reproducing 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:32:18 "possible deadlock in unmap_vmas": saved repro log into 1762763538.repro.log
2025/11/10 08:32:30 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:32:30 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:32:30 runner 0 connected
2025/11/10 08:32:35 runner 8 connected
2025/11/10 08:32:35 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:32:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:32:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:33:07 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:33:19 runner 6 connected
2025/11/10 08:33:29 runner 7 connected
2025/11/10 08:33:34 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:33:39 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:33:39 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:33:48 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:34:18 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:34:21 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:34:21 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:34:28 runner 6 connected
2025/11/10 08:34:49 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:34:54 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:34:57 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:34:57 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:35:12 runner 8 connected
2025/11/10 08:35:32 patched crashed: possible deadlock in unmap_vmas [need repro = true]
2025/11/10 08:35:32 scheduled a reproduction of 'possible deadlock in unmap_vmas'
2025/11/10 08:35:34 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:35:41 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:35:41 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:35:46 runner 6 connected
2025/11/10 08:36:05 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:36:05 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:36:14 runner 8 connected
2025/11/10 08:36:26 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl:
no such file or directory 2025/11/10 08:36:29 runner 7 connected 2025/11/10 08:36:32 patched crashed: possible deadlock in unmap_vmas [need repro = true] 2025/11/10 08:36:32 scheduled a reproduction of 'possible deadlock in unmap_vmas' 2025/11/10 08:36:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 180, "corpus": 46518, "corpus [files]": 304, "corpus [symbols]": 21, "cover overflows": 56665, "coverage": 307925, "distributor delayed": 55043, "distributor undelayed": 55041, "distributor violated": 260, "exec candidate": 81853, "exec collide": 9291, "exec fuzz": 17706, "exec gen": 963, "exec hints": 7545, "exec inject": 0, "exec minimize": 4378, "exec retries": 29, "exec seeds": 612, "exec smash": 4405, "exec total [base]": 218073, "exec total [new]": 383583, "exec triage": 146205, "executor restarts [base]": 691, "executor restarts [new]": 1934, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 311712, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2932, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47569, "no exec duration": 1662702000000, "no exec requests": 4621, "pending": 179, "prog exec time": 121, "reproducing": 4, "rpc recv": 22151760008, "rpc sent": 3719372064, "signal": 303189, "smash jobs": 0, "triage jobs": 3, "vm output": 94721753, "vm restarts [base]": 58, "vm restarts [new]": 309 } 2025/11/10 08:36:48 runner 6 connected 2025/11/10 08:36:50 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:37:02 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: 
failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:37:17 base crash: lost connection to test machine 2025/11/10 08:37:20 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 08:37:20 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:37:20 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:37:20 base crash: lost connection to test machine 2025/11/10 08:37:21 runner 8 connected 2025/11/10 08:37:22 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:37:42 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:37:42 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:38:03 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:38:06 runner 0 connected 2025/11/10 08:38:08 runner 6 connected 2025/11/10 08:38:09 reproducing crash 'kernel BUG in dbFindLeaf': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dmap.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:38:09 repro finished 'kernel BUG in dbFindLeaf', repro=true crepro=false desc='kernel BUG in dbFindLeaf' hub=false from_dashboard=false 2025/11/10 08:38:09 found repro for "kernel BUG in dbFindLeaf" (orig title: "-SAME-", reliability: 
1), took 22.75 minutes 2025/11/10 08:38:09 start reproducing 'possible deadlock in unmap_vmas (full)' 2025/11/10 08:38:09 failed to recv *flatrpc.InfoRequestRawT: EOF 2025/11/10 08:38:09 "kernel BUG in dbFindLeaf": saved crash log into 1762763889.crash.log 2025/11/10 08:38:09 "kernel BUG in dbFindLeaf": saved repro log into 1762763889.repro.log 2025/11/10 08:38:09 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 08:38:09 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 08:38:09 reproduction of "possible deadlock in unmap_vmas" aborted: it's no longer needed 2025/11/10 08:38:10 runner 7 connected 2025/11/10 08:38:10 runner 1 connected 2025/11/10 08:38:31 runner 8 connected 2025/11/10 08:38:32 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/10 08:38:38 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:38:55 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/11/10 08:39:05 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:39:21 runner 2 connected 2025/11/10 08:39:21 crash "general protection fault in pcl818_ai_cancel" is already known 2025/11/10 08:39:21 base crash "general protection fault in pcl818_ai_cancel" is to be ignored 2025/11/10 08:39:21 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/10 08:39:28 attempt #0 to run "kernel BUG in dbFindLeaf" on base: crashed with kernel BUG in dbFindLeaf 2025/11/10 08:39:28 crashes both: kernel BUG in 
dbFindLeaf / kernel BUG in dbFindLeaf 2025/11/10 08:39:28 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:39:45 runner 7 connected 2025/11/10 08:39:51 base crash: general protection fault in pcl818_ai_cancel 2025/11/10 08:40:10 runner 8 connected 2025/11/10 08:40:19 runner 0 connected 2025/11/10 08:40:19 reproducing crash 'possible deadlock in remove_inode_hugepages': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:40:19 repro finished 'possible deadlock in remove_inode_hugepages', repro=true crepro=false desc='possible deadlock in remove_inode_hugepages' hub=false from_dashboard=false 2025/11/10 08:40:19 found repro for "possible deadlock in remove_inode_hugepages" (orig title: "-SAME-", reliability: 1), took 14.23 minutes 2025/11/10 08:40:19 "possible deadlock in remove_inode_hugepages": saved crash log into 1762764019.crash.log 2025/11/10 08:40:19 failed to recv *flatrpc.InfoRequestRawT: EOF 2025/11/10 08:40:19 "possible deadlock in remove_inode_hugepages": saved repro log into 1762764019.repro.log 2025/11/10 08:40:39 runner 2 connected 2025/11/10 08:40:41 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/10 08:41:03 base crash: general protection fault in pcl818_ai_cancel 2025/11/10 08:41:29 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:41:30 runner 7 connected 2025/11/10 08:41:33 patched crashed: possible 
deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:41:33 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:41:35 runner 0 connected 2025/11/10 08:41:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 180, "corpus": 46522, "corpus [files]": 304, "corpus [symbols]": 21, "cover overflows": 57238, "coverage": 307931, "distributor delayed": 55066, "distributor undelayed": 55066, "distributor violated": 260, "exec candidate": 81853, "exec collide": 9922, "exec fuzz": 18839, "exec gen": 1030, "exec hints": 7551, "exec inject": 0, "exec minimize": 4559, "exec retries": 30, "exec seeds": 624, "exec smash": 4470, "exec total [base]": 219424, "exec total [new]": 385713, "exec triage": 146237, "executor restarts [base]": 716, "executor restarts [new]": 1984, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311726, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3086, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47583, "no exec duration": 1730041000000, "no exec requests": 4781, "pending": 179, "prog exec time": 454, "reproducing": 3, "rpc recv": 22560545388, "rpc sent": 3774020808, "signal": 303195, "smash jobs": 1, "triage jobs": 5, "vm output": 96577146, "vm restarts [base]": 63, "vm restarts [new]": 318 } 2025/11/10 08:41:40 runner 1 connected 2025/11/10 08:41:54 runner 1 connected 2025/11/10 08:42:11 attempt #0 to run "possible deadlock in remove_inode_hugepages" on base: did not crash 2025/11/10 08:42:16 base crash: general protection fault in pcl818_ai_cancel 2025/11/10 08:42:24 runner 6 connected 2025/11/10 08:42:28 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:42:28 scheduled a reproduction of 'possible 
deadlock in hugetlb_change_protection' 2025/11/10 08:42:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:42:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:42:34 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:43:06 runner 2 connected 2025/11/10 08:43:08 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:43:08 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:43:16 runner 8 connected 2025/11/10 08:43:21 runner 1 connected 2025/11/10 08:43:23 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:43:48 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:43:48 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:43:52 patched crashed: lost connection to test machine [need repro = false] 2025/11/10 08:43:56 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/10 08:43:57 runner 0 connected 2025/11/10 08:43:59 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true] 2025/11/10 08:43:59 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection' 2025/11/10 08:44:03 attempt #1 to run "possible deadlock in remove_inode_hugepages" on base: did not crash 
2025/11/10 08:44:25 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:44:37 runner 1 connected
2025/11/10 08:44:41 runner 8 connected
2025/11/10 08:44:48 runner 6 connected
2025/11/10 08:45:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:45:29 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:45:39 patched crashed: lost connection to test machine [need repro = false]
2025/11/10 08:45:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/10 08:45:51 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:45:51 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:45:55 attempt #2 to run "possible deadlock in remove_inode_hugepages" on base: did not crash
2025/11/10 08:45:55 patched-only: possible deadlock in remove_inode_hugepages
2025/11/10 08:45:55 scheduled a reproduction of 'possible deadlock in remove_inode_hugepages (full)'
2025/11/10 08:45:55 start reproducing 'possible deadlock in remove_inode_hugepages (full)'
2025/11/10 08:46:22 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:46:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 187, "corpus": 46537, "corpus [files]": 305, "corpus [symbols]": 21, "cover overflows": 57909, "coverage": 307947, "distributor delayed": 55146, "distributor undelayed": 55139, "distributor violated": 260, "exec candidate": 81853, "exec collide": 10746, "exec fuzz": 20472, "exec gen": 1123, "exec hints": 7599, "exec inject": 0, "exec minimize": 4872, "exec retries": 31, "exec seeds": 659, "exec smash": 4765, "exec total [base]": 222555, "exec total [new]": 389057, "exec triage": 146333, "executor restarts [base]": 749, "executor restarts [new]": 2055, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311881, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3299, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47624, "no exec duration": 1730078000000, "no exec requests": 4783, "pending": 185, "prog exec time": 1007, "reproducing": 4, "rpc recv": 23068763700, "rpc sent": 3865345920, "signal": 303211, "smash jobs": 2, "triage jobs": 10, "vm output": 99590016, "vm restarts [base]": 65, "vm restarts [new]": 326 }
2025/11/10 08:46:40 runner 6 connected
2025/11/10 08:46:44 runner 0 connected
2025/11/10 08:46:53 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:47:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:47:14 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 08:47:14 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 08:47:36 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:47:36 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:47:40 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:47:49 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:48:03 runner 7 connected
2025/11/10 08:48:23 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:48:23 repro finished 'possible deadlock in unmap_vmas (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 08:48:23 found repro for "possible deadlock in unmap_vmas" (orig title: "-SAME-", reliability: 1), took 10.24 minutes
2025/11/10 08:48:23 "possible deadlock in unmap_vmas": saved crash log into 1762764503.crash.log
2025/11/10 08:48:23 "possible deadlock in unmap_vmas": saved repro log into 1762764503.repro.log
2025/11/10 08:48:26 runner 6 connected
2025/11/10 08:48:38 runner 1 connected
2025/11/10 08:49:17 runner 0 connected
2025/11/10 08:49:23 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/10 08:49:40 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:49:40 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:50:11 runner 8 connected
2025/11/10 08:50:14 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/11/10 08:50:16 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 08:50:19 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:50:29 runner 6 connected
2025/11/10 08:51:05 runner 7 connected
2025/11/10 08:51:27 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:51:27 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:51:34 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 08:51:34 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 08:51:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 199, "corpus": 46556, "corpus [files]": 305, "corpus [symbols]": 21, "cover overflows": 58721, "coverage": 307979, "distributor delayed": 55216, "distributor undelayed": 55214, "distributor violated": 263, "exec candidate": 81853, "exec collide": 11701, "exec fuzz": 22266, "exec gen": 1211, "exec hints": 7699, "exec inject": 0, "exec minimize": 5505, "exec retries": 32, "exec seeds": 707, "exec smash": 5210, "exec total [base]": 226927, "exec total [new]": 393241, "exec triage": 146449, "executor restarts [base]": 786, "executor restarts [new]": 2127, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 311953, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3724, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 2, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47666, "no exec duration": 1733078000000, "no exec requests": 4786, "pending": 190, "prog exec time": 457, "reproducing": 3, "rpc recv": 23580625192, "rpc sent": 3980367792, "signal": 303243, "smash jobs": 2, "triage jobs": 8, "vm output": 104072324, "vm restarts [base]": 66, "vm restarts [new]": 334 }
2025/11/10 08:51:52 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:51:52 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:52:07 attempt #1 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 08:52:16 runner 1 connected
2025/11/10 08:52:23 runner 8 connected
2025/11/10 08:52:42 runner 0 connected
2025/11/10 08:52:48 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:53:02 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:53:02 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:53:07 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:53:07 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:53:36 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:53:51 runner 7 connected
2025/11/10 08:53:56 runner 6 connected
2025/11/10 08:53:57 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:53:58 attempt #2 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 08:53:58 patched-only: possible deadlock in unmap_vmas
2025/11/10 08:54:44 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:54:48 runner 0 connected
2025/11/10 08:55:01 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:55:21 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 08:55:32 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:55:32 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:55:50 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:55:55 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:55:55 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:56:06 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:56:06 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:56:07 base crash: lost connection to test machine
2025/11/10 08:56:11 runner 0 connected
2025/11/10 08:56:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:56:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:56:21 runner 7 connected
2025/11/10 08:56:23 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:56:31 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:56:31 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:56:36 base crash: WARNING in xfrm_state_fini
2025/11/10 08:56:38 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 200, "corpus": 46582, "corpus [files]": 306, "corpus [symbols]": 21, "cover overflows": 59747, "coverage": 308074, "distributor delayed": 55317, "distributor undelayed": 55317, "distributor violated": 263, "exec candidate": 81853, "exec collide": 13000, "exec fuzz": 24754, "exec gen": 1335, "exec hints": 7921, "exec inject": 0, "exec minimize": 5980, "exec retries": 33, "exec seeds": 773, "exec smash": 5751, "exec total [base]": 232329, "exec total [new]": 398616, "exec triage": 146610, "executor restarts [base]": 812, "executor restarts [new]": 2205, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 312101, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4038, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 2, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47721, "no exec duration": 1734100000000, "no exec requests": 4789, "pending": 198, "prog exec time": 475, "reproducing": 3, "rpc recv": 24068779652, "rpc sent": 4096066744, "signal": 303337, "smash jobs": 3, "triage jobs": 1, "vm output": 107056526, "vm restarts [base]": 67, "vm restarts [new]": 341 }
2025/11/10 08:56:45 runner 8 connected
2025/11/10 08:56:56 runner 2 connected
2025/11/10 08:56:57 runner 6 connected
2025/11/10 08:57:07 runner 1 connected
2025/11/10 08:57:20 runner 0 connected
2025/11/10 08:57:24 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/11/10 08:57:26 runner 1 connected
2025/11/10 08:57:26 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hugetlbfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:57:26 base crash: general protection fault in pcl818_ai_cancel
2025/11/10 08:57:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/10 08:57:46 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:57:46 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:57:58 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:57:58 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:58:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/hugetlb.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/11/10 08:58:02 repro finished 'possible deadlock in remove_inode_hugepages (full)', repro=true crepro=true desc='possible deadlock in unmap_vmas' hub=false from_dashboard=false
2025/11/10 08:58:02 found repro for "possible deadlock in unmap_vmas" (orig title: "possible deadlock in remove_inode_hugepages", reliability: 1), took 12.12 minutes
2025/11/10 08:58:02 "possible deadlock in unmap_vmas": saved crash log into 1762765082.crash.log
2025/11/10 08:58:02 "possible deadlock in unmap_vmas": saved repro log into 1762765082.repro.log
2025/11/10 08:58:10 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:58:10 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:58:13 runner 7 connected
2025/11/10 08:58:14 runner 2 connected
2025/11/10 08:58:28 runner 6 connected
2025/11/10 08:58:34 runner 8 connected
2025/11/10 08:58:42 base crash: WARNING in xfrm_state_fini
2025/11/10 08:58:47 runner 1 connected
2025/11/10 08:58:57 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 08:58:57 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 08:58:58 runner 0 connected
2025/11/10 08:59:29 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 08:59:29 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 08:59:31 runner 1 connected
2025/11/10 08:59:47 runner 7 connected
2025/11/10 08:59:48 patched crashed: possible deadlock in move_hugetlb_page_tables [need repro = true]
2025/11/10 08:59:48 scheduled a reproduction of 'possible deadlock in move_hugetlb_page_tables'
2025/11/10 08:59:52 attempt #0 to run "possible deadlock in unmap_vmas" on base: did not crash
2025/11/10 09:00:01 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/11/10 09:00:20 runner 6 connected
2025/11/10 09:00:37 runner 0 connected
2025/11/10 09:00:43 runner 8 connected
2025/11/10 09:01:02 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 09:01:02 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 09:01:04 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 09:01:04 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 09:01:16 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 09:01:16 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 09:01:17 base crash: possible deadlock in ocfs2_setattr
2025/11/10 09:01:18 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 09:01:18 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 09:01:23 patched crashed: possible deadlock in hugetlb_change_protection [need repro = true]
2025/11/10 09:01:23 scheduled a reproduction of 'possible deadlock in hugetlb_change_protection'
2025/11/10 09:01:34 bug reporting terminated
2025/11/10 09:01:34 status reporting terminated
2025/11/10 09:01:34 new: rpc server terminaled
2025/11/10 09:01:34 base: rpc server terminaled
2025/11/10 09:01:41 repro finished 'possible deadlock in hugetlb_change_protection', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 09:01:44 attempt #1 to run "possible deadlock in unmap_vmas" on base: skipping due to errors: context deadline exceeded /
2025/11/10 09:01:58 base: pool terminated
2025/11/10 09:01:58 base: kernel context loop terminated
2025/11/10 09:02:14 repro finished 'possible deadlock in move_hugetlb_page_tables', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/11/10 09:02:14 repro loop terminated
2025/11/10 09:02:14 new: pool terminated
2025/11/10 09:02:14 new: kernel context loop terminated
2025/11/10 09:02:14 diff fuzzing terminated
2025/11/10 09:02:14 fuzzing is finished
2025/11/10 09:02:14 status at the end:
Title                                                                  On-Base     On-Patched
possible deadlock in move_hugetlb_page_tables                                      36 crashes[reproduced]
possible deadlock in remove_inode_hugepages                                        1 crashes[reproduced]
possible deadlock in unmap_vmas                                                    26 crashes[reproduced]
BUG: sleeping function called from invalid context in hook_sb_delete   2 crashes   15 crashes
INFO: task hung in 
__iterate_supers 1 crashes INFO: task hung in reg_process_self_managed_hints 1 crashes INFO: task hung in user_get_super 1 crashes KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings 1 crashes WARNING in folio_memcg 6 crashes 20 crashes WARNING in io_ring_exit_work 1 crashes WARNING in rate_control_rate_init 2 crashes 1 crashes WARNING in xfrm6_tunnel_net_exit 2 crashes 3 crashes WARNING in xfrm_state_fini 11 crashes 6 crashes general protection fault in pcl818_ai_cancel 4 crashes 8 crashes kernel BUG in dbFindLeaf 2 crashes 1 crashes[reproduced] kernel BUG in jfs_evict_inode 2 crashes 3 crashes kernel BUG in ocfs2_write_cluster_by_desc 1 crashes 1 crashes kernel BUG in txUnlock 2 crashes 7 crashes lost connection to test machine 7 crashes 18 crashes no output from test machine 2 crashes possible deadlock in ext4_destroy_inline_data 2 crashes possible deadlock in ext4_evict_inode 1 crashes possible deadlock in ext4_writepages 1 crashes possible deadlock in hfsplus_block_allocate 1 crashes possible deadlock in hugetlb_change_protection 180 crashes possible deadlock in mark_as_free_ex 1 crashes possible deadlock in ocfs2_acquire_dquot 1 crashes possible deadlock in ocfs2_init_acl 9 crashes 4 crashes possible deadlock in ocfs2_setattr 2 crashes 1 crashes possible deadlock in ocfs2_try_remove_refcount_tree 3 crashes 5 crashes possible deadlock in ocfs2_xattr_set 2 crashes 1 crashes 2025/11/10 09:02:14 possibly patched-only: possible deadlock in move_hugetlb_page_tables 2025/11/10 09:02:14 possibly patched-only: possible deadlock in unmap_vmas 2025/11/10 09:02:14 possibly patched-only: possible deadlock in hugetlb_change_protection