2025/10/14 08:03:09 extracted 329834 text symbol hashes for base and 329838 for patched
2025/10/14 08:03:09 binaries are different, continuing fuzzing
2025/10/14 08:03:09 adding modified_functions to focus areas: ["__pfx_ksm_pte_entry" "__pfx_ksm_walk_test" "__stable_node_chain" "break_cow" "ksm_do_scan" "ksm_get_folio" "ksm_memory_callback" "ksm_pte_entry" "ksm_scan_thread" "ksm_walk_test" "max_page_sharing_store" "merge_across_nodes_store" "remove_rmap_item_from_tree" "remove_stable_node" "replace_page" "rmap_walk_ksm" "run_store" "try_to_merge_one_page" "try_to_merge_with_ksm_page" "unmerge_ksm_pages"]
2025/10/14 08:03:09 adding directly modified files to focus areas: ["mm/ksm.c"]
2025/10/14 08:03:09 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2025/10/14 08:04:08 runner 5 connected
2025/10/14 08:04:08 runner 6 connected
2025/10/14 08:04:08 runner 1 connected
2025/10/14 08:04:08 runner 1 connected
2025/10/14 08:04:08 runner 2 connected
2025/10/14 08:04:08 runner 7 connected
2025/10/14 08:04:08 runner 3 connected
2025/10/14 08:04:09 runner 2 connected
2025/10/14 08:04:09 runner 8 connected
2025/10/14 08:04:09 runner 0 connected
2025/10/14 08:04:10 runner 4 connected
2025/10/14 08:04:10 runner 0 connected
2025/10/14 08:04:15 initializing coverage information...
2025/10/14 08:04:15 executor cover filter: 0 PCs
2025/10/14 08:04:20 discovered 7757 source files, 340773 symbols
2025/10/14 08:04:20 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost
openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU 
syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/10/14 08:04:20 base: machine check complete
2025/10/14 08:04:20 coverage filter: __pfx_ksm_pte_entry: []
2025/10/14 08:04:20 coverage filter: __pfx_ksm_walk_test: []
2025/10/14 08:04:20 coverage filter: __stable_node_chain: [__stable_node_chain]
2025/10/14 08:04:20 coverage filter: break_cow: [break_cow]
2025/10/14 08:04:20 coverage filter: ksm_do_scan: [ksm_do_scan]
2025/10/14 08:04:20 coverage filter: ksm_get_folio: [ksm_get_folio]
2025/10/14 08:04:20 coverage filter: ksm_memory_callback: [ksm_memory_callback]
2025/10/14 08:04:20 coverage filter: ksm_pte_entry: [ksm_pte_entry]
2025/10/14 08:04:20 coverage filter: ksm_scan_thread: [ksm_scan_thread]
2025/10/14 08:04:20 coverage filter: ksm_walk_test: [ksm_walk_test]
2025/10/14 08:04:20 coverage filter: max_page_sharing_store: [max_page_sharing_store]
2025/10/14 08:04:20 coverage filter: merge_across_nodes_store: [merge_across_nodes_store]
2025/10/14 08:04:20 coverage filter: remove_rmap_item_from_tree: [remove_rmap_item_from_tree]
2025/10/14 08:04:20 coverage filter: remove_stable_node: [remove_stable_node]
2025/10/14 08:04:20 coverage filter: replace_page: [__bpf_trace_svc_replace_page_err __probestub_svc_replace_page_err __traceiter_svc_replace_page_err perf_trace_svc_replace_page_err replace_page replace_page_cache_folio svc_rqst_replace_page trace_event_raw_event_svc_replace_page_err trace_raw_output_svc_replace_page_err trace_svc_replace_page_err]
2025/10/14 08:04:20 coverage filter: rmap_walk_ksm: [rmap_walk_ksm]
2025/10/14 08:04:20 coverage filter: run_store: [run_store]
2025/10/14 08:04:20 coverage filter: try_to_merge_one_page: [try_to_merge_one_page]
2025/10/14 08:04:20 coverage filter: try_to_merge_with_ksm_page: [try_to_merge_with_ksm_page]
2025/10/14 08:04:20 coverage filter: unmerge_ksm_pages: [unmerge_ksm_pages]
2025/10/14 08:04:20 coverage filter: mm/ksm.c: [mm/ksm.c] 2025/10/14 08:04:20 area "symbols": 1638 PCs in the cover filter 2025/10/14 08:04:20 area "files": 2293 PCs in the cover filter 2025/10/14 08:04:20 area "": 0 PCs in the cover filter 2025/10/14 08:04:20 executor cover filter: 0 PCs 2025/10/14 08:04:24 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory 
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : 
fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open 
perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma 
[openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8056
2025/10/14 08:04:24 new: machine check complete
2025/10/14 08:04:24 new: adding 81571 seeds
2025/10/14 08:05:58 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 08:06:55 runner 0 connected
2025/10/14 08:08:12 STAT { "buffer too small": 0, "candidate triage jobs": 49, "candidates": 77136, "comps overflows": 0, "corpus": 4335, "corpus [files]": 203, "corpus [symbols]": 1, "cover overflows": 2837, "coverage": 158598, "distributor delayed": 4153, "distributor undelayed": 4152, "distributor violated": 0, "exec candidate": 4435, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7420, "exec total [new]": 19393, "exec triage": 13776, "executor restarts [base]": 58, "executor restarts [new]": 137, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 161011, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 4435, "no exec duration": 42249000000, "no exec requests": 313, "pending": 0, "prog exec time": 272, "reproducing": 0, "rpc recv": 1096087768, "rpc sent": 98577608, "signal": 156274, "smash jobs": 0, "triage jobs": 0, "vm output": 3078903, "vm restarts [base]": 3, "vm restarts [new]": 10 }
2025/10/14 08:10:20 base crash: general protection fault in pcl818_ai_cancel
2025/10/14 08:10:21 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 08:10:32 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 08:10:43 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 08:11:11 runner 8 connected
2025/10/14 08:11:16 runner 1 connected
2025/10/14 08:11:21 runner 1 connected
2025/10/14 08:11:27 crash "WARNING in xfrm_state_fini" is already known
2025/10/14 08:11:27 base crash "WARNING in xfrm_state_fini" is to be ignored
2025/10/14 08:11:27 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 08:11:35 crash "unregister_netdevice: waiting for DEV to become free" is already known
2025/10/14 08:11:35 base crash "unregister_netdevice: waiting for DEV to become free" is to be ignored
2025/10/14 08:11:35 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false]
2025/10/14 08:11:39 runner 2 connected
2025/10/14 08:12:24 runner 4 connected
2025/10/14 08:12:32 runner 5 connected
2025/10/14 08:13:12 STAT { "buffer too small": 0, "candidate triage jobs": 52, "candidates": 71369, "comps overflows": 0, "corpus": 10071, "corpus [files]": 374, "corpus [symbols]": 4, "cover overflows": 6415, "coverage": 203675, "distributor delayed": 9992, "distributor undelayed": 9992, "distributor violated": 2, "exec candidate": 10202, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 18045, "exec total [new]": 44412, "exec triage": 31532, "executor restarts [base]": 64, "executor restarts [new]": 171, "fault jobs": 0, "fuzzer jobs": 52, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 205219, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 10202, "no exec duration": 43122000000, "no exec requests": 317, "pending": 0, "prog exec time": 183, "reproducing": 0, "rpc recv": 2119455392, "rpc sent": 239161464, "signal": 200538, "smash jobs": 0, "triage jobs": 0, "vm output": 5958286, "vm restarts [base]": 4, "vm restarts [new]": 15 }
2025/10/14 08:13:49 base crash: WARNING in xfrm_state_fini
2025/10/14 08:13:56 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 08:14:45 runner 0 connected
2025/10/14 08:14:53 runner 7 connected
2025/10/14 08:15:12 base crash: possible deadlock in dqget
2025/10/14 08:15:45 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 08:16:02 runner 2 connected
2025/10/14 08:16:42 runner 6 connected
2025/10/14 08:17:41 crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/10/14 08:17:41 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is to be ignored
2025/10/14 08:17:41 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 08:17:45 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/10/14 08:17:48 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:17:48 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:17:53 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 08:17:59 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:17:59 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:18:09 crash "possible deadlock in ocfs2_init_acl" is already known
2025/10/14 08:18:09 base crash "possible deadlock in ocfs2_init_acl" is to be ignored
2025/10/14 08:18:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/10/14 08:18:10 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:18:10 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:18:12 STAT { "buffer too small": 0, "candidate triage jobs": 111, "candidates": 65440, "comps overflows": 0, "corpus": 15881, "corpus [files]": 552, "corpus [symbols]": 5, "cover overflows": 10630, "coverage": 230787, "distributor delayed": 15706, "distributor undelayed": 15611, "distributor violated": 3, "exec candidate": 16131, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 26770, "exec total [new]": 72782, "exec triage": 49881, "executor restarts [base]": 81, "executor restarts [new]": 222, "fault jobs": 0, "fuzzer jobs": 111, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 232714, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 16131, "no exec duration": 43129000000, "no exec requests": 318, "pending": 3, "prog exec time": 391, "reproducing": 0, "rpc recv": 2992721404, "rpc sent": 370167592, "signal": 227056, "smash jobs": 0, "triage jobs": 0, "vm output": 8767933, "vm restarts [base]": 6, "vm restarts [new]": 17 }
2025/10/14 08:18:21 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:18:21 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:18:30 runner 4 connected
2025/10/14 08:18:34 runner 0 connected
2025/10/14 08:18:37 runner 1 connected
2025/10/14 08:18:43 runner 2 connected
2025/10/14 08:18:50 runner 8 connected
2025/10/14 08:18:58 runner 7 connected
2025/10/14 08:18:59 runner 6 connected
2025/10/14 08:19:11 runner 5 connected
2025/10/14 08:20:55 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 08:21:44 runner 4 connected
2025/10/14 08:21:59 base crash: unregister_netdevice: waiting for DEV to become free
2025/10/14 08:22:17 base crash: WARNING in xfrm_state_fini
2025/10/14 08:22:18 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 08:22:56 runner 1 connected
2025/10/14 08:23:12 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 60487, "comps overflows": 0, "corpus": 20834, "corpus [files]": 664, "corpus [symbols]": 6, "cover overflows": 14212, "coverage": 246878, "distributor delayed": 21684, "distributor undelayed": 21676, "distributor violated": 284, "exec candidate": 21084, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 36184, "exec total [new]": 98113, "exec triage": 65271, "executor restarts [base]": 101, "executor restarts [new]": 275, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 248650, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 21084, "no exec duration": 43129000000, "no exec requests": 318, "pending": 4, "prog exec time": 173, "reproducing": 0, "rpc recv": 3961603884, "rpc sent": 494871064, "signal": 242811, "smash jobs": 0, "triage jobs": 0, "vm output": 10946312, "vm restarts [base]": 8, "vm restarts [new]": 25 }
2025/10/14 08:23:15 runner 2 connected
2025/10/14 08:23:15 runner 8 connected
2025/10/14 08:24:02 crash "INFO: task hung in corrupted" is already known
2025/10/14 08:24:02 base crash "INFO: task hung in corrupted" is to be ignored
2025/10/14 08:24:02 patched crashed: INFO: task hung in corrupted [need repro = false]
2025/10/14 08:24:30 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 08:24:50 base crash: lost connection to test machine
2025/10/14 08:25:00 runner 2 connected
2025/10/14 08:25:27 runner 5 connected
2025/10/14 08:25:31 crash "INFO: task hung in read_part_sector" is already known
2025/10/14 08:25:31 base crash "INFO: task hung in read_part_sector" is to be ignored
2025/10/14 08:25:31 patched crashed: INFO: task hung in read_part_sector [need repro = false]
2025/10/14 08:25:47 runner 2 connected
2025/10/14 08:26:27 runner 3 connected
2025/10/14 08:27:07 base crash: INFO: task hung in corrupted
2025/10/14 08:27:09 patched crashed: no output from test machine [need repro = false]
2025/10/14 08:27:14 patched crashed: no output from test machine [need repro = false]
2025/10/14 08:27:24 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 08:27:48 patched crashed: no output from test machine [need repro = false]
2025/10/14 08:27:57 runner 0 connected
2025/10/14 08:27:58 runner 6 connected
2025/10/14 08:28:12 STAT { "buffer too small": 0, "candidate triage jobs": 37, "candidates": 57076, "comps overflows": 0, "corpus": 24205, "corpus [files]": 744, "corpus [symbols]": 6, "cover overflows": 16549, "coverage": 257049, "distributor delayed": 27041, "distributor undelayed": 27040, "distributor violated": 309, "exec candidate": 24495, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 42296, "exec total [new]": 115836, "exec triage": 75779, "executor restarts [base]": 116, "executor restarts [new]": 311, "fault jobs": 0, "fuzzer jobs": 37, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 258935, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 24495, "no exec duration": 43149000000, "no exec requests": 321, "pending": 4, "prog exec time": 197, "reproducing": 0, "rpc recv": 4681096492, "rpc sent": 586880112, "signal": 252934, "smash jobs": 0, "triage jobs": 0, "vm output": 13118804, "vm restarts [base]": 11, "vm restarts [new]": 30 }
2025/10/14 08:28:12 runner 0 connected
2025/10/14 08:28:13 runner 8 connected
2025/10/14 08:28:38 runner 7 connected
2025/10/14 08:31:04 crash "kernel BUG in txUnlock" is already known
2025/10/14 08:31:04 base crash "kernel BUG in txUnlock" is to be ignored
2025/10/14 08:31:04 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 08:31:15 crash "kernel BUG in txUnlock" is already known
2025/10/14 08:31:15 base crash "kernel BUG in txUnlock" is to be ignored
2025/10/14 08:31:15 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 08:31:18 crash "kernel BUG in txUnlock" is already known
2025/10/14 08:31:18 base crash "kernel BUG in txUnlock" is to be ignored
2025/10/14 08:31:18 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 08:31:19 crash "kernel BUG in txUnlock" is already known
2025/10/14 08:31:19 base crash "kernel BUG in txUnlock" is to be ignored
2025/10/14 08:31:19 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 08:31:30 base crash: kernel BUG in txUnlock
2025/10/14 08:31:30 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/10/14 08:31:52 runner 2 connected
2025/10/14 08:32:04 runner 0 connected
2025/10/14 08:32:07 runner 4 connected
2025/10/14 08:32:08 runner 6 connected
2025/10/14 08:32:19 runner 8 connected
2025/10/14 08:32:20 runner 2 connected
2025/10/14 08:32:29 base crash: lost connection to test machine
2025/10/14 08:32:49 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 08:32:57 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 08:32:59 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:32:59 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:33:10 patched crashed:
possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 08:33:10 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 08:33:12 STAT { "buffer too small": 0, "candidate triage jobs": 38, "candidates": 52378, "comps overflows": 0, "corpus": 28843, "corpus [files]": 828, "corpus [symbols]": 7, "cover overflows": 19487, "coverage": 268947, "distributor delayed": 31909, "distributor undelayed": 31908, "distributor violated": 316, "exec candidate": 29193, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 51378, "exec total [new]": 139853, "exec triage": 90094, "executor restarts [base]": 133, "executor restarts [new]": 372, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 270806, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29193, "no exec duration": 43196000000, "no exec requests": 323, "pending": 6, "prog exec time": 275, "reproducing": 0, "rpc recv": 5656225676, "rpc sent": 727509688, "signal": 264675, "smash jobs": 0, "triage jobs": 0, "vm output": 16705685, "vm restarts [base]": 12, "vm restarts [new]": 38 } 2025/10/14 08:33:17 runner 1 connected 2025/10/14 08:33:21 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 08:33:21 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 08:33:38 runner 4 connected 2025/10/14 08:33:47 runner 2 connected 2025/10/14 08:33:48 runner 3 connected 2025/10/14 08:33:59 runner 0 connected 2025/10/14 08:34:10 runner 6 connected 2025/10/14 08:34:22 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 08:34:45 
patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 08:34:46 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 08:34:46 base crash: kernel BUG in txUnlock 2025/10/14 08:34:46 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 08:34:48 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 08:34:59 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 08:35:11 runner 1 connected 2025/10/14 08:35:33 runner 8 connected 2025/10/14 08:35:36 runner 0 connected 2025/10/14 08:35:36 runner 3 connected 2025/10/14 08:35:36 base crash: kernel BUG in txUnlock 2025/10/14 08:35:37 runner 2 connected 2025/10/14 08:35:37 runner 6 connected 2025/10/14 08:35:51 runner 5 connected 2025/10/14 08:35:58 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/10/14 08:36:25 runner 2 connected 2025/10/14 08:36:54 runner 4 connected 2025/10/14 08:37:55 base crash: INFO: rcu detected stall in corrupted 2025/10/14 08:38:12 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 49118, "comps overflows": 0, "corpus": 32062, "corpus [files]": 874, "corpus [symbols]": 9, "cover overflows": 21362, "coverage": 276385, "distributor delayed": 36622, "distributor undelayed": 36622, "distributor violated": 436, "exec candidate": 32453, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 58050, "exec total [new]": 156693, "exec triage": 100001, "executor restarts [base]": 152, "executor restarts [new]": 418, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 278269, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, 
"modules [new]": 1, "new inputs": 32453, "no exec duration": 43346000000, "no exec requests": 325, "pending": 7, "prog exec time": 253, "reproducing": 0, "rpc recv": 6640295024, "rpc sent": 848026016, "signal": 272002, "smash jobs": 0, "triage jobs": 0, "vm output": 19427965, "vm restarts [base]": 15, "vm restarts [new]": 50 } 2025/10/14 08:38:51 runner 0 connected 2025/10/14 08:39:12 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/10/14 08:39:16 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known 2025/10/14 08:39:16 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored 2025/10/14 08:39:16 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false] 2025/10/14 08:39:53 patched crashed: INFO: task hung in corrupted [need repro = false] 2025/10/14 08:40:10 runner 8 connected 2025/10/14 08:40:13 runner 5 connected 2025/10/14 08:40:42 runner 1 connected 2025/10/14 08:41:38 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/10/14 08:41:38 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/10/14 08:41:38 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/10/14 08:41:48 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known 2025/10/14 08:41:48 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored 2025/10/14 08:41:48 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/10/14 08:42:27 runner 7 connected 2025/10/14 08:42:38 runner 2 connected 2025/10/14 08:43:12 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 43881, "comps overflows": 0, "corpus": 37172, "corpus [files]": 986, "corpus [symbols]": 12, "cover overflows": 25769, "coverage": 286364, "distributor delayed": 42371, 
"distributor undelayed": 42371, "distributor violated": 436, "exec candidate": 37690, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 70263, "exec total [new]": 190866, "exec triage": 116549, "executor restarts [base]": 160, "executor restarts [new]": 452, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 288705, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 37690, "no exec duration": 43371000000, "no exec requests": 328, "pending": 7, "prog exec time": 197, "reproducing": 0, "rpc recv": 7525937236, "rpc sent": 1020461488, "signal": 281747, "smash jobs": 0, "triage jobs": 0, "vm output": 21801382, "vm restarts [base]": 16, "vm restarts [new]": 55 } 2025/10/14 08:46:22 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 08:46:22 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 08:46:32 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 08:46:32 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 08:47:10 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 08:47:10 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 08:47:18 runner 7 connected 2025/10/14 08:47:22 runner 1 connected 2025/10/14 08:48:00 runner 2 connected 2025/10/14 08:48:06 crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/10/14 08:48:06 base crash "WARNING in xfrm6_tunnel_net_exit" is to be ignored 2025/10/14 08:48:06 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 
2025/10/14 08:48:07 base crash: WARNING in xfrm6_tunnel_net_exit
2025/10/14 08:48:12 STAT { "buffer too small": 0, "candidate triage jobs": 10, "candidates": 40971, "comps overflows": 0, "corpus": 39739, "corpus [files]": 1056, "corpus [symbols]": 15, "cover overflows": 32022, "coverage": 291215, "distributor delayed": 45394, "distributor undelayed": 45394, "distributor violated": 437, "exec candidate": 40600, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 84287, "exec total [new]": 229383, "exec triage": 126547, "executor restarts [base]": 168, "executor restarts [new]": 483, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 294705, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40600, "no exec duration": 43407000000, "no exec requests": 330, "pending": 10, "prog exec time": 273, "reproducing": 0, "rpc recv": 8142342456, "rpc sent": 1219863880, "signal": 286423, "smash jobs": 0, "triage jobs": 0, "vm output": 23636545, "vm restarts [base]": 16, "vm restarts [new]": 58 }
2025/10/14 08:48:12 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:48:12 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:48:32 base crash: WARNING in xfrm6_tunnel_net_exit
2025/10/14 08:48:46 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/10/14 08:49:02 runner 4 connected
2025/10/14 08:49:03 runner 1 connected
2025/10/14 08:49:11 runner 5 connected
2025/10/14 08:49:28 runner 0 connected
2025/10/14 08:49:38 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known
2025/10/14 08:49:38 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored
2025/10/14 08:49:38 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false]
2025/10/14 08:49:42 runner 0 connected
2025/10/14 08:49:50 crash "KASAN: use-after-free Read in hpfs_get_ea" is already known
2025/10/14 08:49:50 base crash "KASAN: use-after-free Read in hpfs_get_ea" is to be ignored
2025/10/14 08:49:50 patched crashed: KASAN: use-after-free Read in hpfs_get_ea [need repro = false]
2025/10/14 08:50:35 runner 6 connected
2025/10/14 08:50:47 runner 4 connected
2025/10/14 08:51:29 crash "BUG: sleeping function called from invalid context in hook_sb_delete" is already known
2025/10/14 08:51:29 base crash "BUG: sleeping function called from invalid context in hook_sb_delete" is to be ignored
2025/10/14 08:51:29 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false]
2025/10/14 08:51:30 crash "possible deadlock in ntfs_fiemap" is already known
2025/10/14 08:51:30 base crash "possible deadlock in ntfs_fiemap" is to be ignored
2025/10/14 08:51:30 patched crashed: possible deadlock in ntfs_fiemap [need repro = false]
2025/10/14 08:52:16 crash "kernel BUG in jfs_evict_inode" is already known
2025/10/14 08:52:16 base crash "kernel BUG in jfs_evict_inode" is to be ignored
2025/10/14 08:52:16 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 08:52:20 runner 1 connected
2025/10/14 08:52:26 runner 8 connected
2025/10/14 08:52:29 crash "kernel BUG in jfs_evict_inode" is already known
2025/10/14 08:52:29 base crash "kernel BUG in jfs_evict_inode" is to be ignored
2025/10/14 08:52:29 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 08:53:05 runner 0 connected
2025/10/14 08:53:12 STAT { "buffer too small": 0, "candidate triage jobs": 21, "candidates": 39123, "comps overflows": 0, "corpus": 41521, "corpus [files]": 1115, "corpus [symbols]": 15, "cover overflows": 35481, "coverage": 295543, "distributor delayed": 47343, "distributor undelayed": 47343, "distributor violated": 437, "exec candidate": 42448, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 92249, "exec total [new]": 250860, "exec triage": 132275, "executor restarts [base]": 189, "executor restarts [new]": 543, "fault jobs": 0, "fuzzer jobs": 21, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 299022, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42448, "no exec duration": 52682000000, "no exec requests": 345, "pending": 11, "prog exec time": 250, "reproducing": 0, "rpc recv": 8856615016, "rpc sent": 1369719360, "signal": 290686, "smash jobs": 0, "triage jobs": 0, "vm output": 26782252, "vm restarts [base]": 18, "vm restarts [new]": 66 }
2025/10/14 08:53:25 runner 2 connected
2025/10/14 08:54:57 crash "kernel BUG in jfs_evict_inode" is already known
2025/10/14 08:54:57 base crash "kernel BUG in jfs_evict_inode" is to be ignored
2025/10/14 08:54:57 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 08:55:47 runner 6 connected
2025/10/14 08:56:43 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:56:43 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:56:54 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:56:54 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:57:42 runner 5 connected
2025/10/14 08:57:51 runner 2 connected
2025/10/14 08:58:12 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 37516, "comps overflows": 0, "corpus": 42986, "corpus [files]": 1174, "corpus [symbols]": 15, "cover overflows": 40739, "coverage": 298383, "distributor delayed": 48965, "distributor undelayed": 48965, "distributor violated": 437, "exec candidate": 44055, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 101960, "exec total [new]": 281607, "exec triage": 137693, "executor restarts [base]": 199, "executor restarts [new]": 578, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 302117, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44034, "no exec duration": 52748000000, "no exec requests": 347, "pending": 13, "prog exec time": 247, "reproducing": 0, "rpc recv": 9346892564, "rpc sent": 1549732128, "signal": 293410, "smash jobs": 0, "triage jobs": 0, "vm output": 29229202, "vm restarts [base]": 18, "vm restarts [new]": 70 }
2025/10/14 08:58:21 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 08:58:21 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 08:59:18 runner 8 connected
2025/10/14 09:00:35 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 09:00:35 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:00:46 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 09:00:46 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:01:34 runner 5 connected
2025/10/14 09:01:35 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/10/14 09:01:42 runner 8 connected
2025/10/14 09:02:39 runner 3 connected
2025/10/14 09:02:59 crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is already known
2025/10/14 09:02:59 base crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is to be ignored
2025/10/14 09:02:59 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = false]
2025/10/14 09:03:12 STAT { "buffer too small": 0, "candidate triage jobs": 13, "candidates": 31178, "comps overflows": 0, "corpus": 43868, "corpus [files]": 1212, "corpus [symbols]": 15, "cover overflows": 45762, "coverage": 300378, "distributor delayed": 49994, "distributor undelayed": 49994, "distributor violated": 437, "exec candidate": 50393, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 107574, "exec total [new]": 309380, "exec triage": 140968, "executor restarts [base]": 210, "executor restarts [new]": 615, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 304276, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45011, "no exec duration": 53110000000, "no exec requests": 355, "pending": 16, "prog exec time": 229, "reproducing": 0, "rpc recv": 9694379800, "rpc sent": 1682219280, "signal": 295343, "smash jobs": 0, "triage jobs": 0, "vm output": 31564965, "vm restarts [base]": 18, "vm restarts [new]": 74 }
2025/10/14 09:03:56 runner 2 connected
2025/10/14 09:04:45 base crash: no output from test machine
2025/10/14 09:05:35 runner 2 connected
2025/10/14 09:07:12 triaged 93.4% of the corpus
2025/10/14 09:07:12 starting bug reproductions
2025/10/14 09:07:12 starting bug reproductions (max 6 VMs, 4 repros)
2025/10/14 09:07:12 start reproducing 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:08:12 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 51, "comps overflows": 0, "corpus": 44496, "corpus [files]": 1244, "corpus [symbols]": 15, "cover overflows": 52289, "coverage": 301657, "distributor delayed": 50887, "distributor undelayed": 50887, "distributor violated": 437, "exec candidate": 81520, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 112524, "exec total [new]": 343304, "exec triage": 143759, "executor restarts [base]": 226, "executor restarts [new]": 645, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 305913, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45778, "no exec duration": 53131000000, "no exec requests": 357, "pending": 15, "prog exec time": 274, "reproducing": 1, "rpc recv": 9964525184, "rpc sent": 1832076024, "signal": 296555, "smash jobs": 0, "triage jobs": 0, "vm output": 33635328, "vm restarts [base]": 19, "vm restarts [new]": 75 }
2025/10/14 09:08:45 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:08:48 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:09:05 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:09:33 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:09:35 base crash: lost connection to test machine
2025/10/14 09:09:41 runner 2 connected
2025/10/14 09:09:43 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:09:44 runner 3 connected
2025/10/14 09:09:47 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 09:09:47 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:09:48 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:09:57 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 09:09:57 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:10:09 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 09:10:09 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:10:13 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:10:13 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:10:21 runner 7 connected
2025/10/14 09:10:22 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:10:25 runner 2 connected
2025/10/14 09:10:33 runner 8 connected
2025/10/14 09:10:34 base crash: lost connection to test machine
2025/10/14 09:10:36 runner 6 connected
2025/10/14 09:10:45 runner 4 connected
2025/10/14 09:10:57 runner 3 connected
2025/10/14 09:10:57 base crash: lost connection to test machine
2025/10/14 09:11:01 runner 5 connected
2025/10/14 09:11:01 runner 2 connected
2025/10/14 09:11:07 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:11:12 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:11:23 runner 1 connected
2025/10/14 09:11:31 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:11:37 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:11:45 runner 2 connected
2025/10/14 09:12:01 runner 7 connected
2025/10/14 09:12:04 runner 6 connected
2025/10/14 09:12:17 base crash: lost connection to test machine
2025/10/14 09:12:20 runner 5 connected
2025/10/14 09:12:23 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:12:26 runner 2 connected
2025/10/14 09:12:34 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 09:13:01 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:13:07 runner 2 connected
2025/10/14 09:13:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 16, "corpus": 44540, "corpus [files]": 1247, "corpus [symbols]": 16, "cover overflows": 53846, "coverage": 301771, "distributor delayed": 51081, "distributor undelayed": 51076, "distributor violated": 437, "exec candidate": 81571, "exec collide": 1127, "exec fuzz": 2206, "exec gen": 114, "exec hints": 497, "exec inject": 0, "exec minimize": 1177, "exec retries": 15, "exec seeds": 102, "exec smash": 663, "exec total [base]": 115672, "exec total [new]": 349533, "exec triage": 144052, "executor restarts [base]": 252, "executor restarts [new]": 706, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 11, "max signal": 306216, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 659, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45880, "no exec duration": 60092000000, "no exec requests": 369, "pending": 18, "prog exec time": 409, "reproducing": 1, "rpc recv": 10641295612, "rpc sent": 2009444920, "signal": 296670, "smash jobs": 14, "triage jobs": 15, "vm output": 35441578, "vm restarts [base]": 23, "vm restarts [new]": 88 }
2025/10/14 09:13:12 runner 3 connected
2025/10/14 09:13:23 runner 6 connected
2025/10/14 09:13:24 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 09:13:24 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:13:31 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:13:38 base crash: lost connection to test machine
2025/10/14 09:13:50 runner 7 connected
2025/10/14 09:14:10 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:14:19 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:14:21 runner 5 connected
2025/10/14 09:14:26 runner 2 connected
2025/10/14 09:14:32 patched crashed: INFO: task hung in corrupted [need repro = false]
2025/10/14 09:14:33 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:14:45 base crash: lost connection to test machine
2025/10/14 09:14:48 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:14:53 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:14:59 runner 3 connected
2025/10/14 09:15:22 runner 4 connected
2025/10/14 09:15:22 runner 7 connected
2025/10/14 09:15:33 runner 1 connected
2025/10/14 09:15:37 reproducing crash 'possible deadlock in __pte_offset_map_lock': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:15:37 repro finished 'possible deadlock in __pte_offset_map_lock', repro=true crepro=false desc='possible deadlock in __pte_offset_map_lock' hub=false from_dashboard=false
2025/10/14 09:15:37 found repro for "possible deadlock in __pte_offset_map_lock" (orig title: "-SAME-", reliability: 1), took 8.40 minutes
2025/10/14 09:15:37 "possible deadlock in __pte_offset_map_lock": saved crash log into 1760433337.crash.log
2025/10/14 09:15:37 "possible deadlock in __pte_offset_map_lock": saved repro log into 1760433337.repro.log
2025/10/14 09:15:37 start reproducing 'possible deadlock in __pte_offset_map_lock'
2025/10/14 09:15:42 runner 2 connected
2025/10/14 09:15:52 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:16:13 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:16:21 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:16:32 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:16:34 base crash: lost connection to test machine
2025/10/14 09:16:41 runner 7 connected
2025/10/14 09:17:02 base crash: lost connection to test machine
2025/10/14 09:17:03 runner 3 connected
2025/10/14 09:17:08 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:17:10 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:17:10 runner 5 connected
2025/10/14 09:17:21 runner 8 connected
2025/10/14 09:17:23 runner 2 connected
2025/10/14 09:17:29 attempt #0 to run "possible deadlock in __pte_offset_map_lock" on base: did not crash
2025/10/14 09:17:31 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:17:50 runner 1 connected
2025/10/14 09:17:57 runner 2 connected
2025/10/14 09:17:59 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = false]
2025/10/14 09:18:00 runner 6 connected
2025/10/14 09:18:00 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:18:04 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:18:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 30, "corpus": 44605, "corpus [files]": 1249, "corpus [symbols]": 16, "cover overflows": 55213, "coverage": 301871, "distributor delayed": 51281, "distributor undelayed": 51280, "distributor violated": 437, "exec candidate": 81571, "exec collide": 1751, "exec fuzz": 3445, "exec gen": 183, "exec hints": 1124, "exec inject": 0, "exec minimize": 2686, "exec retries": 15, "exec seeds": 269, "exec smash": 1804, "exec total [base]": 117848, "exec total [new]": 355225, "exec triage": 144349, "executor restarts [base]": 280, "executor restarts [new]": 763, "fault jobs": 0, "fuzzer jobs": 84, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 27, "max signal": 306450, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1482, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45984, "no exec duration": 62023000000, "no exec requests": 375, "pending": 18, "prog exec time": 447, "reproducing": 1, "rpc recv": 11422543220, "rpc sent": 2157214696, "signal": 296770, "smash jobs": 42, "triage jobs": 15, "vm output": 37939682, "vm restarts [base]": 27, "vm restarts [new]": 102 }
2025/10/14 09:18:20 runner 4 connected
2025/10/14 09:18:33 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:18:42 base crash: lost connection to test machine
2025/10/14 09:18:48 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:18:49 runner 5 connected
2025/10/14 09:18:49 runner 8 connected
2025/10/14 09:18:55 runner 7 connected
2025/10/14 09:19:21 runner 2 connected
2025/10/14 09:19:22 attempt #1 to run "possible deadlock in __pte_offset_map_lock" on base: did not crash
2025/10/14 09:19:26 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:19:32 runner 1 connected
2025/10/14 09:19:36 runner 6 connected
2025/10/14 09:19:48 base crash: lost connection to test machine
2025/10/14 09:19:57 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:20:04 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:20:12 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:20:13 base crash: lost connection to test machine
2025/10/14 09:20:16 runner 7 connected
2025/10/14 09:20:21 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:20:37 runner 2 connected
2025/10/14 09:20:47 runner 2 connected
2025/10/14 09:20:47 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:21:00 runner 5 connected
2025/10/14 09:21:02 runner 4 connected
2025/10/14 09:21:03 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:21:09 runner 1 connected
2025/10/14 09:21:10 runner 6 connected
2025/10/14 09:21:11 base crash: lost connection to test machine
2025/10/14 09:21:14 attempt #2 to run "possible deadlock in __pte_offset_map_lock" on base: did not crash
2025/10/14 09:21:14 patched-only: possible deadlock in __pte_offset_map_lock
2025/10/14 09:21:14 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock (full)'
2025/10/14 09:21:14 start reproducing 'possible deadlock in __pte_offset_map_lock (full)'
2025/10/14 09:21:24 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:21:36 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:21:37 runner 7 connected
2025/10/14 09:21:40 base crash: lost connection to test machine
2025/10/14 09:21:59 runner 3 connected
2025/10/14 09:22:00 runner 2 connected
2025/10/14 09:22:00 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:22:03 runner 0 connected
2025/10/14 09:22:05 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = false]
2025/10/14 09:22:25 runner 4 connected
2025/10/14 09:22:25 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 09:22:28 runner 1 connected
2025/10/14 09:22:34 base crash: lost connection to test machine
2025/10/14 09:22:44 base crash: lost connection to test machine
2025/10/14 09:22:50 runner 8 connected
2025/10/14 09:22:53 runner 7 connected
2025/10/14 09:23:01 base crash: lost connection to test machine
2025/10/14 09:23:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 49, "corpus": 44646, "corpus [files]": 1251, "corpus [symbols]": 16, "cover overflows": 56375, "coverage": 302005, "distributor delayed": 51389, "distributor undelayed": 51389, "distributor violated": 437, "exec candidate": 81571, "exec collide": 2320, "exec fuzz": 4547, "exec gen": 248, "exec hints": 1791, "exec inject": 0, "exec minimize": 3634, "exec retries": 15, "exec seeds": 387, "exec smash": 2755, "exec total [base]": 118522, "exec total [new]": 359815, "exec triage": 144489, "executor restarts [base]": 299, "executor restarts [new]": 820, "fault jobs": 0, "fuzzer jobs": 94, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 4, "hints jobs": 39, "max signal": 306545, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2069, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46044, "no exec duration": 62023000000, "no exec requests": 375, "pending": 18, "prog exec time": 500, "reproducing": 2, "rpc recv": 12202028672, "rpc sent": 2272264824, "signal": 296891, "smash jobs": 43, "triage jobs": 12, "vm output": 40297978, "vm restarts [base]": 33, "vm restarts [new]": 118 }
2025/10/14 09:23:14 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:23:22 runner 2 connected
2025/10/14 09:23:28 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:23:32 runner 0 connected
2025/10/14 09:23:50 runner 1 connected
2025/10/14 09:23:53 base crash: lost connection to test machine
2025/10/14 09:24:03 runner 3 connected
2025/10/14 09:24:17 runner 8 connected
2025/10/14 09:24:24 base crash: lost connection to test machine
2025/10/14 09:24:49 runner 2 connected
2025/10/14 09:25:03 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:25:13 runner 1 connected
2025/10/14 09:25:50 crash "kernel BUG in jfs_evict_inode" is already known
2025/10/14 09:25:50 base crash "kernel BUG in jfs_evict_inode" is to be ignored
2025/10/14 09:25:50 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/10/14 09:25:59 runner 8 connected
2025/10/14 09:26:13 base crash: lost connection to test machine
2025/10/14 09:26:37 base crash: lost connection to test machine
2025/10/14 09:26:39 runner 6 connected
2025/10/14 09:27:02 runner 1 connected
2025/10/14 09:27:13 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 09:27:33 runner 2 connected
2025/10/14 09:27:33 base crash: lost connection to test machine
2025/10/14 09:27:54 reproducing crash 'no output/lost
connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:28:03 runner 3 connected 2025/10/14 09:28:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 76, "corpus": 44707, "corpus [files]": 1252, "corpus [symbols]": 16, "cover overflows": 58192, "coverage": 302115, "distributor delayed": 51551, "distributor undelayed": 51551, "distributor violated": 437, "exec candidate": 81571, "exec collide": 3226, "exec fuzz": 6343, "exec gen": 335, "exec hints": 2942, "exec inject": 0, "exec minimize": 5332, "exec retries": 16, "exec seeds": 535, "exec smash": 4251, "exec total [base]": 120466, "exec total [new]": 367387, "exec triage": 144778, "executor restarts [base]": 329, "executor restarts [new]": 868, "fault jobs": 0, "fuzzer jobs": 92, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 38, "max signal": 306787, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2964, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46147, "no exec duration": 65711000000, "no exec requests": 381, "pending": 18, "prog exec time": 532, "reproducing": 2, "rpc recv": 12765037516, "rpc sent": 2418417288, "signal": 296999, "smash jobs": 42, "triage jobs": 12, "vm output": 43754107, "vm restarts [base]": 40, "vm restarts [new]": 123 } 2025/10/14 09:28:16 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = false] 2025/10/14 09:28:24 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:28:29 patched 
crashed: lost connection to test machine [need repro = false] 2025/10/14 09:28:30 runner 1 connected 2025/10/14 09:29:04 base crash: lost connection to test machine 2025/10/14 09:29:05 runner 4 connected 2025/10/14 09:29:17 runner 8 connected 2025/10/14 09:29:19 base crash: lost connection to test machine 2025/10/14 09:29:32 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:29:54 runner 2 connected 2025/10/14 09:30:08 runner 1 connected 2025/10/14 09:30:13 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:30:35 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:31:04 base crash: WARNING in xfrm_state_fini 2025/10/14 09:31:17 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:31:39 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:31:48 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:31:53 runner 2 connected 2025/10/14 09:32:22 patched crashed: lost connection to 
test machine [need repro = false] 2025/10/14 09:32:28 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:32:37 runner 8 connected 2025/10/14 09:32:42 base crash: lost connection to test machine 2025/10/14 09:33:01 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:33:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 109, "corpus": 44757, "corpus [files]": 1253, "corpus [symbols]": 16, "cover overflows": 60359, "coverage": 302205, "distributor delayed": 51703, "distributor undelayed": 51703, "distributor violated": 437, "exec candidate": 81571, "exec collide": 4409, "exec fuzz": 8642, "exec gen": 432, "exec hints": 4868, "exec inject": 0, "exec minimize": 6562, "exec retries": 16, "exec seeds": 668, "exec smash": 5767, "exec total [base]": 123815, "exec total [new]": 376071, "exec triage": 145075, "executor restarts [base]": 363, "executor restarts [new]": 935, "fault jobs": 0, "fuzzer jobs": 60, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 34, "max signal": 307010, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3591, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46248, "no exec duration": 69639000000, "no exec requests": 387, "pending": 18, "prog exec time": 442, "reproducing": 2, "rpc recv": 13224146232, "rpc sent": 2595212504, "signal": 297078, "smash jobs": 16, "triage jobs": 10, "vm output": 46622288, "vm restarts [base]": 44, 
"vm restarts [new]": 126 } 2025/10/14 09:33:18 runner 4 connected 2025/10/14 09:33:32 base crash: lost connection to test machine 2025/10/14 09:33:38 runner 1 connected 2025/10/14 09:33:54 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:34:04 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 09:34:17 base crash: lost connection to test machine 2025/10/14 09:34:29 runner 0 connected 2025/10/14 09:34:52 runner 6 connected 2025/10/14 09:35:05 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 09:35:05 repro finished 'possible deadlock in __pte_offset_map_lock (full)', repro=true crepro=true desc='possible deadlock in __pte_offset_map_lock' hub=false from_dashboard=false 2025/10/14 09:35:05 found repro for "possible deadlock in __pte_offset_map_lock" (orig title: "-SAME-", reliability: 1), took 13.84 minutes 2025/10/14 09:35:05 "possible deadlock in __pte_offset_map_lock": saved crash log into 1760434505.crash.log 2025/10/14 09:35:05 "possible deadlock in __pte_offset_map_lock": saved repro log into 1760434505.repro.log 2025/10/14 09:35:06 runner 0 connected 2025/10/14 09:35:07 runner 1 connected 2025/10/14 09:35:10 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 09:35:50 base crash: lost connection to test machine 2025/10/14 09:35:59 runner 4 connected 2025/10/14 09:36:40 runner 1 connected 2025/10/14 09:36:43 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/10/14 09:37:04 attempt #0 to run "possible deadlock in __pte_offset_map_lock" on base: did not crash 
2025/10/14 09:37:07 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 09:37:31 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/10/14 09:37:40 runner 5 connected 2025/10/14 09:38:05 runner 3 connected 2025/10/14 09:38:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 138, "corpus": 44802, "corpus [files]": 1254, "corpus [symbols]": 16, "cover overflows": 63284, "coverage": 302304, "distributor delayed": 51827, "distributor undelayed": 51827, "distributor violated": 437, "exec candidate": 81571, "exec collide": 6253, "exec fuzz": 12119, "exec gen": 600, "exec hints": 8813, "exec inject": 0, "exec minimize": 7666, "exec retries": 16, "exec seeds": 787, "exec smash": 7052, "exec total [base]": 126559, "exec total [new]": 388308, "exec triage": 145372, "executor restarts [base]": 382, "executor restarts [new]": 990, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 14, "max signal": 307195, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4125, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46344, "no exec duration": 71892000000, "no exec requests": 401, "pending": 18, "prog exec time": 383, "reproducing": 1, "rpc recv": 13722193436, "rpc sent": 2821305584, "signal": 297165, "smash jobs": 4, "triage jobs": 6, "vm output": 49934802, "vm restarts [base]": 48, "vm restarts [new]": 132 } 2025/10/14 09:38:21 runner 0 connected 2025/10/14 09:38:28 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:38:31 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:38:31 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:38:33 patched crashed: lost connection to test 
machine [need repro = false] 2025/10/14 09:38:42 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:38:42 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:39:03 attempt #1 to run "possible deadlock in __pte_offset_map_lock" on base: did not crash 2025/10/14 09:39:15 runner 5 connected 2025/10/14 09:39:20 runner 8 connected 2025/10/14 09:39:24 runner 6 connected 2025/10/14 09:39:31 runner 0 connected 2025/10/14 09:39:31 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:39:32 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:40:12 base crash: lost connection to test machine 2025/10/14 09:40:19 runner 3 connected 2025/10/14 09:40:22 runner 4 connected 2025/10/14 09:40:56 attempt #2 to run "possible deadlock in __pte_offset_map_lock" on base: did not crash 2025/10/14 09:40:56 patched-only: possible deadlock in __pte_offset_map_lock 2025/10/14 09:41:02 runner 1 connected 2025/10/14 09:41:08 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:41:11 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:41:30 base crash: lost connection to test machine 2025/10/14 09:41:38 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:41:47 runner 0 connected 2025/10/14 09:41:56 runner 7 connected 2025/10/14 09:42:00 runner 8 connected 2025/10/14 09:42:10 base crash: lost connection to test machine 2025/10/14 09:42:20 runner 2 connected 2025/10/14 09:42:27 runner 4 connected 2025/10/14 09:42:31 base crash: lost connection to test machine 2025/10/14 09:43:00 runner 1 connected 2025/10/14 09:43:02 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:43:12 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 171, "corpus": 44856, "corpus [files]": 1254, "corpus [symbols]": 16, 
"cover overflows": 65379, "coverage": 302416, "distributor delayed": 52000, "distributor undelayed": 51999, "distributor violated": 437, "exec candidate": 81571, "exec collide": 7586, "exec fuzz": 14603, "exec gen": 719, "exec hints": 11032, "exec inject": 0, "exec minimize": 8845, "exec retries": 17, "exec seeds": 944, "exec smash": 8269, "exec total [base]": 128844, "exec total [new]": 397325, "exec triage": 145671, "executor restarts [base]": 410, "executor restarts [new]": 1059, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 19, "max signal": 307419, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4799, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46453, "no exec duration": 73751000000, "no exec requests": 412, "pending": 20, "prog exec time": 403, "reproducing": 1, "rpc recv": 14344611188, "rpc sent": 3047282520, "signal": 297264, "smash jobs": 15, "triage jobs": 11, "vm output": 53735498, "vm restarts [base]": 52, "vm restarts [new]": 142 } 2025/10/14 09:43:15 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:43:20 runner 0 connected 2025/10/14 09:43:31 base crash: lost connection to test machine 2025/10/14 09:43:31 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:43:33 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/10/14 09:43:43 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:43:51 runner 6 connected 2025/10/14 09:44:04 runner 5 connected 2025/10/14 09:44:20 runner 1 connected 2025/10/14 09:44:20 runner 3 connected 2025/10/14 09:44:22 runner 4 connected 2025/10/14 09:44:23 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:44:25 base crash: lost connection to test machine 2025/10/14 09:44:32 runner 0 connected 
2025/10/14 09:44:51 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:45:01 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:45:03 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:45:10 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:45:13 runner 6 connected 2025/10/14 09:45:15 runner 0 connected 2025/10/14 09:45:16 base crash: lost connection to test machine 2025/10/14 09:45:33 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:45:39 runner 3 connected 2025/10/14 09:45:44 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:45:45 base crash: lost connection to test machine 2025/10/14 09:45:50 runner 5 connected 2025/10/14 09:45:51 runner 0 connected 2025/10/14 09:45:52 runner 8 connected 2025/10/14 09:46:06 runner 1 connected 2025/10/14 09:46:09 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:46:13 crash "possible deadlock in ocfs2_xattr_set" is already known 2025/10/14 09:46:13 base crash "possible deadlock in ocfs2_xattr_set" is to be ignored 2025/10/14 09:46:13 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/10/14 09:46:22 runner 7 connected 2025/10/14 09:46:32 runner 6 connected 2025/10/14 09:46:34 runner 0 connected 2025/10/14 09:46:36 base crash: lost connection to test machine 2025/10/14 09:46:59 runner 3 connected 2025/10/14 09:47:03 runner 4 connected 2025/10/14 09:47:12 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:47:16 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:47:24 runner 1 connected 2025/10/14 09:47:37 base crash: lost connection to test machine 2025/10/14 09:47:40 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:47:46 patched crashed: lost connection to test machine 
[need repro = false] 2025/10/14 09:47:50 crash "possible deadlock in ocfs2_xattr_set" is already known 2025/10/14 09:47:50 base crash "possible deadlock in ocfs2_xattr_set" is to be ignored 2025/10/14 09:47:50 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/10/14 09:47:59 runner 0 connected 2025/10/14 09:48:03 base crash: unregister_netdevice: waiting for DEV to become free 2025/10/14 09:48:03 base crash: lost connection to test machine 2025/10/14 09:48:06 runner 7 connected 2025/10/14 09:48:12 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 202, "corpus": 44893, "corpus [files]": 1254, "corpus [symbols]": 16, "cover overflows": 66460, "coverage": 302541, "distributor delayed": 52112, "distributor undelayed": 52111, "distributor violated": 437, "exec candidate": 81571, "exec collide": 8120, "exec fuzz": 15766, "exec gen": 776, "exec hints": 12042, "exec inject": 0, "exec minimize": 9726, "exec retries": 17, "exec seeds": 1037, "exec smash": 8919, "exec total [base]": 131423, "exec total [new]": 401861, "exec triage": 145810, "executor restarts [base]": 437, "executor restarts [new]": 1113, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 2, "hints jobs": 31, "max signal": 307553, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5332, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46507, "no exec duration": 76805000000, "no exec requests": 416, "pending": 20, "prog exec time": 701, "reproducing": 1, "rpc recv": 15187743612, "rpc sent": 3240476000, "signal": 297360, "smash jobs": 20, "triage jobs": 7, "vm output": 57805278, "vm restarts [base]": 58, "vm restarts [new]": 158 } 2025/10/14 09:48:17 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:48:26 runner 0 connected 2025/10/14 
09:48:29 runner 4 connected 2025/10/14 09:48:35 runner 6 connected 2025/10/14 09:48:39 runner 8 connected 2025/10/14 09:48:41 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:48:50 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:48:51 runner 2 connected 2025/10/14 09:48:52 runner 1 connected 2025/10/14 09:48:57 base crash: lost connection to test machine 2025/10/14 09:49:06 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:49:06 runner 5 connected 2025/10/14 09:49:22 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:49:23 base crash: lost connection to test machine 2025/10/14 09:49:31 runner 7 connected 2025/10/14 09:49:39 runner 3 connected 2025/10/14 09:49:45 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:49:46 runner 0 connected 2025/10/14 09:49:56 runner 6 connected 2025/10/14 09:50:11 runner 0 connected 2025/10/14 09:50:12 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:50:13 runner 1 connected 2025/10/14 09:50:13 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:50:13 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:50:29 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:50:30 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:50:30 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:50:34 runner 8 connected 2025/10/14 09:50:44 base crash: lost connection to test machine 2025/10/14 09:50:55 base crash: lost connection to test machine 2025/10/14 09:51:01 runner 5 connected 2025/10/14 09:51:02 runner 4 connected 2025/10/14 09:51:10 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:51:18 runner 3 connected 2025/10/14 09:51:20 
runner 6 connected 2025/10/14 09:51:33 runner 1 connected 2025/10/14 09:51:34 base crash: lost connection to test machine 2025/10/14 09:51:43 runner 2 connected 2025/10/14 09:51:51 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:51:58 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:51:59 runner 8 connected 2025/10/14 09:52:09 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:52:19 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:52:22 runner 0 connected 2025/10/14 09:52:34 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:52:40 runner 6 connected 2025/10/14 09:52:47 runner 7 connected 2025/10/14 09:52:48 base crash: lost connection to test machine 2025/10/14 09:52:53 base crash: lost connection to test machine 2025/10/14 09:52:58 runner 3 connected 2025/10/14 09:53:01 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:53:09 runner 5 connected 2025/10/14 09:53:11 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:53:12 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 245, "corpus": 44923, "corpus [files]": 1254, "corpus [symbols]": 16, "cover overflows": 67638, "coverage": 302589, "distributor delayed": 52218, "distributor undelayed": 52216, "distributor violated": 437, "exec candidate": 81571, "exec collide": 8738, "exec fuzz": 16950, "exec gen": 857, "exec hints": 13126, "exec inject": 0, "exec minimize": 10350, "exec retries": 17, "exec seeds": 1126, "exec smash": 9630, "exec total [base]": 133217, "exec total [new]": 406384, "exec triage": 145929, "executor restarts [base]": 462, "executor restarts [new]": 1176, "fault jobs": 0, "fuzzer jobs": 61, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 30, "max signal": 307704, "minimize: array": 0, "minimize: buffer": 0, 
"minimize: call": 5685, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46558, "no exec duration": 76805000000, "no exec requests": 416, "pending": 22, "prog exec time": 509, "reproducing": 1, "rpc recv": 16168800400, "rpc sent": 3454413928, "signal": 297407, "smash jobs": 17, "triage jobs": 14, "vm output": 62040714, "vm restarts [base]": 66, "vm restarts [new]": 176 } 2025/10/14 09:53:16 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:53:24 runner 8 connected 2025/10/14 09:53:33 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:53:37 base crash: lost connection to test machine 2025/10/14 09:53:38 runner 1 connected 2025/10/14 09:53:42 runner 0 connected 2025/10/14 09:53:44 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:53:52 runner 0 connected 2025/10/14 09:53:59 runner 6 connected 2025/10/14 09:54:05 runner 7 connected 2025/10/14 09:54:07 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:54:07 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:54:18 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:54:18 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:54:22 runner 3 connected 2025/10/14 09:54:26 runner 2 connected 2025/10/14 09:54:29 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:54:33 runner 4 connected 2025/10/14 09:54:42 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:54:52 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:54:57 runner 8 connected 2025/10/14 09:54:57 base crash: lost connection to test machine 2025/10/14 09:55:05 patched crashed: lost connection to 
test machine [need repro = false] 2025/10/14 09:55:08 runner 5 connected 2025/10/14 09:55:17 runner 6 connected 2025/10/14 09:55:28 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:55:30 runner 7 connected 2025/10/14 09:55:40 runner 3 connected 2025/10/14 09:55:45 runner 2 connected 2025/10/14 09:55:52 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:55:52 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:55:55 runner 4 connected 2025/10/14 09:56:16 runner 8 connected 2025/10/14 09:56:18 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:56:42 runner 5 connected 2025/10/14 09:56:48 base crash: lost connection to test machine 2025/10/14 09:56:48 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:56:52 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:56:56 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:57:01 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:57:05 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:57:05 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:57:07 runner 3 connected 2025/10/14 09:57:37 runner 1 connected 2025/10/14 09:57:38 runner 8 connected 2025/10/14 09:57:42 runner 6 connected 2025/10/14 09:57:45 runner 7 connected 2025/10/14 09:57:50 runner 4 connected 2025/10/14 09:57:54 runner 0 connected 2025/10/14 09:58:00 base crash: possible deadlock in ocfs2_xattr_set 2025/10/14 09:58:12 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 300, "corpus": 44956, "corpus [files]": 1255, "corpus [symbols]": 16, "cover overflows": 68526, "coverage": 302756, "distributor delayed": 52305, "distributor undelayed": 52305, "distributor violated": 437, 
"exec candidate": 81571, "exec collide": 9234, "exec fuzz": 17904, "exec gen": 904, "exec hints": 13938, "exec inject": 0, "exec minimize": 11095, "exec retries": 17, "exec seeds": 1224, "exec smash": 10213, "exec total [base]": 136515, "exec total [new]": 410254, "exec triage": 146042, "executor restarts [base]": 489, "executor restarts [new]": 1249, "fault jobs": 0, "fuzzer jobs": 72, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 37, "max signal": 307823, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6093, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46602, "no exec duration": 79840000000, "no exec requests": 420, "pending": 26, "prog exec time": 500, "reproducing": 1, "rpc recv": 17273939604, "rpc sent": 3737684176, "signal": 297569, "smash jobs": 28, "triage jobs": 7, "vm output": 65131628, "vm restarts [base]": 71, "vm restarts [new]": 196 } 2025/10/14 09:58:17 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 09:58:17 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 09:58:17 base crash: lost connection to test machine 2025/10/14 09:58:48 runner 0 connected 2025/10/14 09:58:51 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:59:04 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:59:07 runner 5 connected 2025/10/14 09:59:07 runner 1 connected 2025/10/14 09:59:20 base crash: lost connection to test machine 2025/10/14 09:59:41 runner 3 connected 2025/10/14 09:59:44 base crash: lost connection to test machine 2025/10/14 09:59:45 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 09:59:53 runner 4 connected 2025/10/14 09:59:58 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:00:10 runner 0 
connected 2025/10/14 10:00:27 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:00:28 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:00:32 runner 1 connected 2025/10/14 10:00:34 runner 7 connected 2025/10/14 10:00:40 base crash: lost connection to test machine 2025/10/14 10:00:48 runner 8 connected 2025/10/14 10:01:02 base crash: lost connection to test machine 2025/10/14 10:01:07 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:01:15 runner 5 connected 2025/10/14 10:01:17 runner 3 connected 2025/10/14 10:01:23 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:01:28 runner 0 connected 2025/10/14 10:01:52 runner 1 connected 2025/10/14 10:01:53 base crash: lost connection to test machine 2025/10/14 10:01:56 runner 7 connected 2025/10/14 10:02:10 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:02:10 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:02:13 runner 8 connected 2025/10/14 10:02:14 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:02:32 base crash: lost connection to test machine 2025/10/14 10:02:41 runner 2 connected 2025/10/14 10:02:48 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:02:58 runner 5 connected 2025/10/14 10:03:02 runner 3 connected 2025/10/14 10:03:12 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 326, "corpus": 45005, "corpus [files]": 1256, "corpus [symbols]": 16, "cover overflows": 70561, "coverage": 302866, "distributor delayed": 52417, "distributor undelayed": 52417, "distributor violated": 437, "exec candidate": 81571, "exec collide": 10305, "exec fuzz": 19784, "exec gen": 1014, "exec hints": 15467, "exec inject": 0, "exec minimize": 12215, "exec retries": 17, "exec seeds": 1374, "exec smash": 11597, 
"exec total [base]": 138806, "exec total [new]": 417714, "exec triage": 146260, "executor restarts [base]": 514, "executor restarts [new]": 1315, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 18, "max signal": 308190, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6862, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46684, "no exec duration": 90820000000, "no exec requests": 439, "pending": 28, "prog exec time": 477, "reproducing": 1, "rpc recv": 18044496084, "rpc sent": 3967338968, "signal": 297671, "smash jobs": 20, "triage jobs": 7, "vm output": 68994628, "vm restarts [base]": 78, "vm restarts [new]": 207 } 2025/10/14 10:03:22 runner 0 connected 2025/10/14 10:03:31 base crash: lost connection to test machine 2025/10/14 10:03:38 runner 8 connected 2025/10/14 10:03:41 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:04:00 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:04:00 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:04:16 base crash: possible deadlock in ocfs2_init_acl 2025/10/14 10:04:21 runner 2 connected 2025/10/14 10:04:27 base crash: lost connection to test machine 2025/10/14 10:04:30 runner 4 connected 2025/10/14 10:04:32 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:04:49 runner 8 connected 2025/10/14 10:05:05 runner 0 connected 2025/10/14 10:05:21 runner 0 connected 2025/10/14 10:05:25 runner 1 connected 2025/10/14 10:05:26 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:05:31 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:05:48 base crash: lost connection to test machine 2025/10/14 10:05:53 patched crashed: possible deadlock in 
__pte_offset_map_lock [need repro = true] 2025/10/14 10:05:53 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:06:09 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:06:14 runner 4 connected 2025/10/14 10:06:20 runner 8 connected 2025/10/14 10:06:37 runner 0 connected 2025/10/14 10:06:41 runner 6 connected 2025/10/14 10:06:43 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:06:50 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:06:59 runner 7 connected 2025/10/14 10:07:09 base crash: lost connection to test machine 2025/10/14 10:07:33 runner 0 connected 2025/10/14 10:07:39 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:07:39 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:07:39 runner 8 connected 2025/10/14 10:07:42 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:07:42 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:07:57 runner 0 connected 2025/10/14 10:08:07 base crash: WARNING in xfrm_state_fini 2025/10/14 10:08:12 STAT { "buffer too small": 4, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 356, "corpus": 45042, "corpus [files]": 1256, "corpus [symbols]": 16, "cover overflows": 72463, "coverage": 302927, "distributor delayed": 52543, "distributor undelayed": 52543, "distributor violated": 437, "exec candidate": 81571, "exec collide": 11449, "exec fuzz": 22048, "exec gen": 1127, "exec hints": 17415, "exec inject": 0, "exec minimize": 13102, "exec retries": 19, "exec seeds": 1485, "exec smash": 12522, "exec total [base]": 141785, "exec total [new]": 425318, "exec triage": 146463, "executor restarts [base]": 549, "executor restarts [new]": 1418, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints 
jobs": 9, "max signal": 308360, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7461, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46755, "no exec duration": 97828000000, "no exec requests": 449, "pending": 32, "prog exec time": 498, "reproducing": 1, "rpc recv": 18768634112, "rpc sent": 4166038760, "signal": 297732, "smash jobs": 4, "triage jobs": 9, "vm output": 73564863, "vm restarts [base]": 84, "vm restarts [new]": 217 } 2025/10/14 10:08:28 runner 7 connected 2025/10/14 10:08:30 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 10:08:30 runner 3 connected 2025/10/14 10:08:56 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:08:56 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:08:57 runner 1 connected 2025/10/14 10:09:00 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:09:07 base crash: lost connection to test machine 2025/10/14 10:09:19 runner 8 connected 2025/10/14 10:09:30 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:09:45 runner 0 connected 2025/10/14 10:09:46 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:09:46 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:09:49 runner 7 connected 2025/10/14 10:09:55 runner 0 connected 2025/10/14 10:09:57 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:10:04 base crash: lost connection to test machine 2025/10/14 10:10:19 runner 6 connected 2025/10/14 10:10:20 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:10:20 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:10:34 
patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:10:34 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:10:35 runner 5 connected 2025/10/14 10:10:46 runner 8 connected 2025/10/14 10:10:55 runner 1 connected 2025/10/14 10:11:10 runner 0 connected 2025/10/14 10:11:22 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:11:24 runner 4 connected 2025/10/14 10:11:31 base crash: lost connection to test machine 2025/10/14 10:11:40 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:11:50 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:11:51 base crash: lost connection to test machine 2025/10/14 10:11:58 base crash: lost connection to test machine 2025/10/14 10:12:06 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:12:11 runner 6 connected 2025/10/14 10:12:20 runner 2 connected 2025/10/14 10:12:29 runner 8 connected 2025/10/14 10:12:30 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:12:39 runner 3 connected 2025/10/14 10:12:40 runner 0 connected 2025/10/14 10:12:41 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:12:46 runner 1 connected 2025/10/14 10:12:56 runner 4 connected 2025/10/14 10:13:03 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:13:12 STAT { "buffer too small": 4, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 371, "corpus": 45079, "corpus [files]": 1257, "corpus [symbols]": 16, "cover overflows": 73921, "coverage": 303034, "distributor delayed": 52639, "distributor undelayed": 52638, "distributor violated": 437, "exec candidate": 81571, "exec collide": 12553, "exec fuzz": 24118, "exec gen": 1258, "exec hints": 18914, "exec inject": 0, "exec minimize": 14036, "exec retries": 19, "exec seeds": 1587, "exec smash": 13462, "exec 
total [base]": 144693, "exec total [new]": 432245, "exec triage": 146609, "executor restarts [base]": 579, "executor restarts [new]": 1492, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 7, "max signal": 308520, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8051, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46819, "no exec duration": 99049000000, "no exec requests": 454, "pending": 36, "prog exec time": 302, "reproducing": 1, "rpc recv": 19584204404, "rpc sent": 4368794144, "signal": 297818, "smash jobs": 3, "triage jobs": 9, "vm output": 76557324, "vm restarts [base]": 90, "vm restarts [new]": 231 } 2025/10/14 10:13:17 base crash: lost connection to test machine 2025/10/14 10:13:17 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:13:21 runner 0 connected 2025/10/14 10:13:22 base crash: lost connection to test machine 2025/10/14 10:13:26 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:13:31 runner 6 connected 2025/10/14 10:13:57 base crash: lost connection to test machine 2025/10/14 10:13:59 runner 8 connected 2025/10/14 10:14:07 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:14:07 runner 7 connected 2025/10/14 10:14:14 runner 1 connected 2025/10/14 10:14:16 runner 4 connected 2025/10/14 10:14:18 runner 0 connected 2025/10/14 10:14:38 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:14:44 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:14:45 runner 2 connected 2025/10/14 10:14:46 base crash: lost connection to test machine 2025/10/14 10:14:53 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:14:56 runner 3 connected 2025/10/14 10:15:08 base crash: lost 
connection to test machine 2025/10/14 10:15:16 base crash: lost connection to test machine 2025/10/14 10:15:18 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:15:27 runner 6 connected 2025/10/14 10:15:32 runner 7 connected 2025/10/14 10:15:35 runner 1 connected 2025/10/14 10:15:42 runner 8 connected 2025/10/14 10:15:42 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:15:43 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:15:57 runner 0 connected 2025/10/14 10:16:04 runner 2 connected 2025/10/14 10:16:09 runner 5 connected 2025/10/14 10:16:31 runner 4 connected 2025/10/14 10:16:32 base crash: lost connection to test machine 2025/10/14 10:16:32 runner 0 connected 2025/10/14 10:16:43 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:16:48 base crash: lost connection to test machine 2025/10/14 10:16:53 base crash: lost connection to test machine 2025/10/14 10:17:00 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:17:20 runner 1 connected 2025/10/14 10:17:30 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/10/14 10:17:33 runner 8 connected 2025/10/14 10:17:38 runner 0 connected 2025/10/14 10:17:42 runner 2 connected 2025/10/14 10:17:44 crash "possible deadlock in hfsplus_block_allocate" is already known 2025/10/14 10:17:44 base crash "possible deadlock in hfsplus_block_allocate" is to be ignored 2025/10/14 10:17:44 patched crashed: possible deadlock in hfsplus_block_allocate [need repro = false] 2025/10/14 10:17:49 runner 6 connected 2025/10/14 10:17:50 base crash: lost connection to test machine 2025/10/14 10:17:54 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:18:12 STAT { "buffer too small": 4, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 375, "corpus": 45117, "corpus [files]": 1257, "corpus 
[symbols]": 16, "cover overflows": 75465, "coverage": 303099, "distributor delayed": 52748, "distributor undelayed": 52747, "distributor violated": 437, "exec candidate": 81571, "exec collide": 13852, "exec fuzz": 26511, "exec gen": 1384, "exec hints": 19956, "exec inject": 0, "exec minimize": 14941, "exec retries": 20, "exec seeds": 1679, "exec smash": 14333, "exec total [base]": 145647, "exec total [new]": 439138, "exec triage": 146765, "executor restarts [base]": 612, "executor restarts [new]": 1567, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 3, "hints jobs": 6, "max signal": 308657, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8624, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46879, "no exec duration": 110614000000, "no exec requests": 470, "pending": 36, "prog exec time": 562, "reproducing": 1, "rpc recv": 20391293720, "rpc sent": 4552247136, "signal": 297885, "smash jobs": 7, "triage jobs": 7, "vm output": 79193580, "vm restarts [base]": 99, "vm restarts [new]": 245 } 2025/10/14 10:18:12 base crash: lost connection to test machine 2025/10/14 10:18:19 runner 7 connected 2025/10/14 10:18:28 base crash: lost connection to test machine 2025/10/14 10:18:32 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:18:33 runner 5 connected 2025/10/14 10:18:38 runner 1 connected 2025/10/14 10:18:43 runner 4 connected 2025/10/14 10:18:52 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:19:01 runner 2 connected 2025/10/14 10:19:09 base crash: lost connection to test machine 2025/10/14 10:19:17 runner 0 connected 2025/10/14 10:19:17 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:19:20 runner 8 connected 2025/10/14 10:19:43 runner 0 connected 2025/10/14 10:19:58 runner 1 connected 
2025/10/14 10:20:13 runner 5 connected
2025/10/14 10:20:26 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/10/14 10:20:31 base crash: lost connection to test machine
2025/10/14 10:20:45 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:20:50 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:21:15 runner 3 connected
2025/10/14 10:21:16 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:21:16 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:21:20 runner 2 connected
2025/10/14 10:21:25 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:21:34 runner 4 connected
2025/10/14 10:21:39 runner 0 connected
2025/10/14 10:22:05 runner 8 connected
2025/10/14 10:22:14 runner 5 connected
2025/10/14 10:22:17 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:22:23 base crash: lost connection to test machine
2025/10/14 10:22:32 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:23:05 runner 0 connected
2025/10/14 10:23:12 STAT { "buffer too small": 4, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 423, "corpus": 45152, "corpus [files]": 1258, "corpus [symbols]": 16, "cover overflows": 77224, "coverage": 303225, "distributor delayed": 52857, "distributor undelayed": 52855, "distributor violated": 437, "exec candidate": 81571, "exec collide": 14868, "exec fuzz": 28526, "exec gen": 1480, "exec hints": 20841, "exec inject": 0, "exec minimize": 15947, "exec retries": 21, "exec seeds": 1773, "exec smash": 15114, "exec total [base]": 148836, "exec total [new]": 445203, "exec triage": 146929, "executor restarts [base]": 637, "executor restarts [new]": 1627, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 9, "max signal": 308804, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9174, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46940, "no exec duration": 119614000000, "no exec requests": 479, "pending": 37, "prog exec time": 646, "reproducing": 1, "rpc recv": 21129466404, "rpc sent": 4778128480, "signal": 297990, "smash jobs": 10, "triage jobs": 10, "vm output": 82774461, "vm restarts [base]": 104, "vm restarts [new]": 257 }
2025/10/14 10:23:14 runner 0 connected
2025/10/14 10:23:21 runner 7 connected
2025/10/14 10:23:26 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 10:23:31 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 10:23:31 base crash: lost connection to test machine
2025/10/14 10:23:37 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 10:23:56 patched crashed: INFO: task hung in synchronize_rcu [need repro = true]
2025/10/14 10:23:56 scheduled a reproduction of 'INFO: task hung in synchronize_rcu'
2025/10/14 10:23:56 start reproducing 'INFO: task hung in synchronize_rcu'
2025/10/14 10:24:00 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 10:24:01 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 10:24:14 runner 5 connected
2025/10/14 10:24:15 base crash: general protection fault in pcl818_ai_cancel
2025/10/14 10:24:17 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:24:21 runner 4 connected
2025/10/14 10:24:22 runner 2 connected
2025/10/14 10:24:26 runner 8 connected
2025/10/14 10:24:44 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:24:45 runner 6 connected
2025/10/14 10:24:47 runner 7 connected
2025/10/14 10:24:56 base crash: lost connection to test machine
2025/10/14 10:24:58 base crash: lost connection to test machine
2025/10/14 10:25:02 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:25:05 runner 1 connected
2025/10/14 10:25:05 runner 3 connected
2025/10/14 10:25:34 runner 5 connected
2025/10/14 10:25:46 base crash: general protection fault in pcl818_ai_cancel
2025/10/14 10:25:47 runner 0 connected
2025/10/14 10:25:51 runner 4 connected
2025/10/14 10:25:53 runner 2 connected
2025/10/14 10:26:05 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:26:34 runner 1 connected
2025/10/14 10:26:56 base crash: general protection fault in lock_sock_nested
2025/10/14 10:27:01 runner 5 connected
2025/10/14 10:27:01 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:27:10 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:27:10 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:27:31 base crash: lost connection to test machine
2025/10/14 10:27:44 runner 0 connected
2025/10/14 10:27:49 runner 7 connected
2025/10/14 10:27:56 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:27:59 runner 3 connected
2025/10/14 10:27:59 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:28:12 STAT { "buffer too small": 5, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 488, "corpus": 45185, "corpus [files]": 1258, "corpus [symbols]": 16, "cover overflows": 78805, "coverage": 303293, "distributor delayed": 52929, "distributor undelayed": 52928, "distributor violated": 437, "exec candidate": 81571, "exec collide": 15690, "exec fuzz": 30077, "exec gen": 1574, "exec hints": 21837, "exec inject": 0, "exec minimize": 16735, "exec retries": 21, "exec seeds": 1868, "exec smash": 15809, "exec total [base]": 151155, "exec total [new]": 450389, "exec triage": 147069, "executor restarts [base]": 664, "executor restarts [new]": 1690, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 9, "max signal": 309122, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9612, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46994, "no exec duration": 119614000000, "no exec requests": 479, "pending": 38, "prog exec time": 660, "reproducing": 2, "rpc recv": 21952955536, "rpc sent": 4950861472, "signal": 298034, "smash jobs": 6, "triage jobs": 10, "vm output": 87818583, "vm restarts [base]": 111, "vm restarts [new]": 269 }
2025/10/14 10:28:22 runner 1 connected
2025/10/14 10:28:32 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:28:48 runner 5 connected
2025/10/14 10:28:53 runner 6 connected
2025/10/14 10:29:29 runner 3 connected
2025/10/14 10:29:47 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:30:25 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:30:25 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:30:27 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:30:37 runner 7 connected
2025/10/14 10:31:21 runner 3 connected
2025/10/14 10:31:23 runner 4 connected
2025/10/14 10:31:26 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:31:30 base crash: lost connection to test machine
2025/10/14 10:32:20 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:32:24 runner 7 connected
2025/10/14 10:32:26 base crash: general protection fault in pcl818_ai_cancel
2025/10/14 10:32:26 runner 0 connected
2025/10/14 10:33:07 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:33:10 runner 4 connected
2025/10/14 10:33:12 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 504, "corpus": 45210, "corpus [files]": 1260, "corpus [symbols]": 16, "cover overflows": 80178, "coverage": 303339, "distributor delayed": 53019, "distributor undelayed": 53018, "distributor violated": 437, "exec candidate": 81571, "exec collide": 16549, "exec fuzz": 31677, "exec gen": 1645, "exec hints": 22639, "exec inject": 0, "exec minimize": 17577, "exec retries": 21, "exec seeds": 1927, "exec smash": 16411, "exec total [base]": 154802, "exec total [new]": 455364, "exec triage": 147208, "executor restarts [base]": 686, "executor restarts [new]": 1743, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 4, "max signal": 309191, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10089, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47043, "no exec duration": 120732000000, "no exec requests": 481, "pending": 39, "prog exec time": 664, "reproducing": 2, "rpc recv": 22463362584, "rpc sent": 5099151672, "signal": 298066, "smash jobs": 5, "triage jobs": 9, "vm output": 92555267, "vm restarts [base]": 113, "vm restarts [new]": 277 }
2025/10/14 10:33:20 base crash: lost connection to test machine
2025/10/14 10:33:22 runner 2 connected
2025/10/14 10:33:55 runner 5 connected
2025/10/14 10:33:57 patched crashed: possible deadlock in pcpu_alloc_noprof [need repro = true]
2025/10/14 10:33:57 scheduled a reproduction of 'possible deadlock in pcpu_alloc_noprof'
2025/10/14 10:33:57 start reproducing 'possible deadlock in pcpu_alloc_noprof'
2025/10/14 10:34:09 runner 0 connected
2025/10/14 10:34:25 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:34:30 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:34:31 base crash: WARNING in xfrm_state_fini
2025/10/14 10:34:32 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:34:33 base crash: lost connection to test machine
2025/10/14 10:34:47 runner 4 connected
2025/10/14 10:35:15 runner 5 connected
2025/10/14 10:35:16 base crash: lost connection to test machine
2025/10/14 10:35:17 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:35:20 runner 2 connected
2025/10/14 10:35:20 runner 6 connected
2025/10/14 10:35:20 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:35:20 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:35:23 runner 1 connected
2025/10/14 10:35:46 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:35:51 base crash: lost connection to test machine
2025/10/14 10:36:06 runner 0 connected
2025/10/14 10:36:09 runner 4 connected
2025/10/14 10:36:33 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:36:34 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:36:34 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:36:39 runner 2 connected
2025/10/14 10:36:43 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:36:52 crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known
2025/10/14 10:36:52 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is to be ignored
2025/10/14 10:36:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/10/14 10:37:09 base crash: lost connection to test machine
2025/10/14 10:37:12 base crash: lost connection to test machine
2025/10/14 10:37:23 runner 5 connected
2025/10/14 10:37:32 runner 7 connected
2025/10/14 10:37:38 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:37:38 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:37:38 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:37:42 runner 6 connected
2025/10/14 10:37:58 runner 0 connected
2025/10/14 10:38:02 runner 2 connected
2025/10/14 10:38:06 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:38:08 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:38:12 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 522, "corpus": 45225, "corpus [files]": 1260, "corpus [symbols]": 16, "cover overflows": 81495, "coverage": 303366, "distributor delayed": 53073, "distributor undelayed": 53072, "distributor violated": 437, "exec candidate": 81571, "exec collide": 17418, "exec fuzz": 33243, "exec gen": 1734, "exec hints": 23022, "exec inject": 0, "exec minimize": 18217, "exec retries": 22, "exec seeds": 1966, "exec smash": 16780, "exec total [base]": 156299, "exec total [new]": 459400, "exec triage": 147284, "executor restarts [base]": 714, "executor restarts [new]": 1789, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 3, "max signal": 309283, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10529, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47069, "no exec duration": 120803000000, "no exec requests": 483, "pending": 42, "prog exec time": 573, "reproducing": 3, "rpc recv": 23116738584, "rpc sent": 5232256208, "signal": 298092, "smash jobs": 3, "triage jobs": 3, "vm output": 97119332, "vm restarts [base]": 121, "vm restarts [new]": 285 }
2025/10/14 10:38:25 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:38:27 runner 8 connected
2025/10/14 10:38:55 runner 7 connected
2025/10/14 10:39:00 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:39:02 base crash: lost connection to test machine
2025/10/14 10:39:14 runner 6 connected
2025/10/14 10:39:25 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:39:51 runner 2 connected
2025/10/14 10:39:56 runner 8 connected
2025/10/14 10:39:59 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:40:08 base crash: lost connection to test machine
2025/10/14 10:40:13 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:40:22 runner 7 connected
2025/10/14 10:40:56 runner 0 connected
2025/10/14 10:40:57 runner 6 connected
2025/10/14 10:40:58 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:41:01 runner 5 connected
2025/10/14 10:41:01 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:41:09 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:41:09 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:41:23 base crash: general protection fault in pcl818_ai_cancel
2025/10/14 10:41:27 base crash: lost connection to test machine
2025/10/14 10:41:36 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:41:48 runner 7 connected
2025/10/14 10:41:50 runner 8 connected
2025/10/14 10:41:58 runner 4 connected
2025/10/14 10:42:09 base crash: lost connection to test machine
2025/10/14 10:42:12 runner 1 connected
2025/10/14 10:42:17 runner 0 connected
2025/10/14 10:42:17 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:42:19 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:42:25 runner 5 connected
2025/10/14 10:42:40 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:42:57 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:42:59 runner 2 connected
2025/10/14 10:43:05 runner 6 connected
2025/10/14 10:43:06 runner 7 connected
2025/10/14 10:43:08 base crash: possible deadlock in pcpu_alloc_noprof
2025/10/14 10:43:12 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 535, "corpus": 45237, "corpus [files]": 1260, "corpus [symbols]": 16, "cover overflows": 82101, "coverage": 303447, "distributor delayed": 53152, "distributor undelayed": 53151, "distributor violated": 437, "exec candidate": 81571, "exec collide": 17871, "exec fuzz": 34143, "exec gen": 1786, "exec hints": 23369, "exec inject": 0, "exec minimize": 18513, "exec retries": 22, "exec seeds": 2000, "exec smash": 16994, "exec total [base]": 158571, "exec total [new]": 461778, "exec triage": 147357, "executor restarts [base]": 743, "executor restarts [new]": 1826, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 9, "max signal": 309345, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10697, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47096, "no exec duration": 123803000000, "no exec requests": 486, "pending": 43, "prog exec time": 571, "reproducing": 3, "rpc recv": 23812267772, "rpc sent": 5338425688, "signal": 298108, "smash jobs": 6, "triage jobs": 2, "vm output": 99930510, "vm restarts [base]": 126, "vm restarts [new]": 298 }
2025/10/14 10:43:13 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:43:19 base crash: lost connection to test machine
2025/10/14 10:43:29 runner 8 connected
2025/10/14 10:43:37 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:43:45 runner 5 connected
2025/10/14 10:43:47 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:43:57 runner 0 connected
2025/10/14 10:44:15 runner 1 connected
2025/10/14 10:44:18 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:44:21 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:44:32 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:44:35 runner 7 connected
2025/10/14 10:44:36 runner 6 connected
2025/10/14 10:44:46 base crash: lost connection to test machine
2025/10/14 10:45:03 base crash: lost connection to test machine
2025/10/14 10:45:06 runner 5 connected
2025/10/14 10:45:10 runner 8 connected
2025/10/14 10:45:12 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:45:25 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:45:35 runner 1 connected
2025/10/14 10:45:50 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:45:59 runner 0 connected
2025/10/14 10:46:08 base crash: lost connection to test machine
2025/10/14 10:46:13 runner 6 connected
2025/10/14 10:46:31 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/10/14 10:46:37 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/10/14 10:46:56 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/10/14 10:47:00 patched crashed: lost connection to test machine [need repro = false]
2025/10/14 10:47:02 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:47:02 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14 10:47:04 runner 1 connected
2025/10/14 10:47:15 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true]
2025/10/14 10:47:15 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock'
2025/10/14
10:47:16 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:47:33 runner 4 connected 2025/10/14 10:47:39 base crash: lost connection to test machine 2025/10/14 10:47:44 runner 8 connected 2025/10/14 10:47:49 runner 6 connected 2025/10/14 10:47:51 runner 7 connected 2025/10/14 10:47:54 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = true] 2025/10/14 10:47:54 scheduled a reproduction of 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:48:09 base crash: lost connection to test machine 2025/10/14 10:48:12 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 538, "corpus": 45243, "corpus [files]": 1260, "corpus [symbols]": 16, "cover overflows": 83032, "coverage": 303457, "distributor delayed": 53196, "distributor undelayed": 53196, "distributor violated": 437, "exec candidate": 81571, "exec collide": 18758, "exec fuzz": 35790, "exec gen": 1869, "exec hints": 23910, "exec inject": 0, "exec minimize": 18794, "exec retries": 22, "exec seeds": 2018, "exec smash": 17157, "exec total [base]": 160846, "exec total [new]": 465468, "exec triage": 147419, "executor restarts [base]": 773, "executor restarts [new]": 1883, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 2, "hints jobs": 3, "max signal": 309387, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10951, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47121, "no exec duration": 130709000000, "no exec requests": 496, "pending": 46, "prog exec time": 383, "reproducing": 3, "rpc recv": 24488396460, "rpc sent": 5462781016, "signal": 298116, "smash jobs": 
3, "triage jobs": 2, "vm output": 103730102, "vm restarts [base]": 131, "vm restarts [new]": 309 } 2025/10/14 10:48:12 runner 5 connected 2025/10/14 10:48:20 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:48:21 base crash: lost connection to test machine 2025/10/14 10:48:28 runner 1 connected 2025/10/14 10:48:34 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:48:44 runner 4 connected 2025/10/14 10:48:45 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:48:59 runner 0 connected 2025/10/14 10:49:01 base crash: lost connection to test machine 2025/10/14 10:49:02 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:49:09 runner 8 connected 2025/10/14 10:49:11 runner 2 connected 2025/10/14 10:49:34 runner 6 connected 2025/10/14 10:49:47 base crash: lost connection to test machine 2025/10/14 10:49:50 runner 1 connected 2025/10/14 10:49:57 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:50:01 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:50:09 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:50:13 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:50:24 base crash: lost connection to test machine 2025/10/14 10:50:27 
reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:50:35 runner 2 connected 2025/10/14 10:50:49 runner 7 connected 2025/10/14 10:50:58 runner 5 connected 2025/10/14 10:51:03 runner 8 connected 2025/10/14 10:51:13 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:51:21 runner 0 connected 2025/10/14 10:51:22 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:51:28 base crash: lost connection to test machine 2025/10/14 10:51:43 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:51:44 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:51:46 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:51:57 base crash: lost connection to test machine 2025/10/14 10:52:04 base crash: lost connection to test machine 2025/10/14 10:52:11 runner 7 connected 2025/10/14 10:52:17 runner 2 connected 2025/10/14 10:52:32 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:52:32 runner 8 connected 2025/10/14 10:52:33 runner 5 connected 2025/10/14 10:52:37 repro finished 'possible deadlock in 
__pte_offset_map_lock', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 10:52:37 failed repro for "possible deadlock in __pte_offset_map_lock", err=%!s() 2025/10/14 10:52:37 "possible deadlock in __pte_offset_map_lock": saved crash log into 1760439157.crash.log 2025/10/14 10:52:37 "possible deadlock in __pte_offset_map_lock": saved repro log into 1760439157.repro.log 2025/10/14 10:52:37 start reproducing 'possible deadlock in __pte_offset_map_lock' 2025/10/14 10:52:52 runner 0 connected 2025/10/14 10:52:52 base crash: lost connection to test machine 2025/10/14 10:52:53 runner 1 connected 2025/10/14 10:52:57 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:53:02 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:53:05 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:53:05 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:53:05 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:53:12 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 547, "corpus": 45255, "corpus [files]": 1260, "corpus [symbols]": 16, "cover overflows": 84324, "coverage": 303476, "distributor delayed": 53271, "distributor undelayed": 53267, "distributor violated": 437, "exec candidate": 81571, "exec collide": 19744, "exec fuzz": 37585, "exec gen": 1961, "exec hints": 24946, "exec inject": 0, "exec minimize": 19062, "exec retries": 22, "exec seeds": 2049, "exec smash": 17433, "exec total [base]": 162115, "exec total [new]": 470037, "exec triage": 147501, "executor restarts [base]": 807, "executor restarts [new]": 1934, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs 
[base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 5, "max signal": 309436, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11181, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47155, "no exec duration": 133744000000, "no exec requests": 503, "pending": 45, "prog exec time": 482, "reproducing": 3, "rpc recv": 25177227152, "rpc sent": 5574852848, "signal": 298136, "smash jobs": 0, "triage jobs": 10, "vm output": 107142913, "vm restarts [base]": 140, "vm restarts [new]": 319 } 2025/10/14 10:53:41 runner 2 connected 2025/10/14 10:53:46 runner 4 connected 2025/10/14 10:53:47 base crash: lost connection to test machine 2025/10/14 10:53:54 runner 5 connected 2025/10/14 10:53:54 runner 7 connected 2025/10/14 10:53:54 runner 6 connected 2025/10/14 10:53:56 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:54:03 base crash: lost connection to test machine 2025/10/14 10:54:09 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:54:15 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:54:36 runner 1 connected 2025/10/14 10:54:45 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:54:46 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:54:52 runner 0 connected 2025/10/14 10:54:59 runner 8 connected 2025/10/14 10:55:05 runner 4 connected 2025/10/14 10:55:11 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:55:14 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:55:30 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:55:35 runner 6 connected 2025/10/14 10:55:40 base crash: lost connection to test machine 2025/10/14 10:55:43 runner 7 connected 2025/10/14 10:55:45 base crash: lost connection to test machine 2025/10/14 10:55:48 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:55:54 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:56:00 runner 5 connected 2025/10/14 10:56:21 runner 8 connected 2025/10/14 10:56:34 runner 1 connected 2025/10/14 10:56:36 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:56:37 runner 0 connected 2025/10/14 10:56:43 runner 4 connected 2025/10/14 10:56:54 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:56:56 base crash: lost connection to test machine 2025/10/14 10:57:13 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:57:21 patched crashed: possible deadlock in __pte_offset_map_lock [need repro = false] 2025/10/14 10:57:50 runner 8 connected 2025/10/14 10:57:52 runner 2 connected 2025/10/14 10:57:58 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to 
symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:58:05 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:58:11 runner 6 connected 2025/10/14 10:58:12 STAT { "buffer too small": 7, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 556, "corpus": 45275, "corpus [files]": 1260, "corpus [symbols]": 16, "cover overflows": 84947, "coverage": 303497, "distributor delayed": 53315, "distributor undelayed": 53314, "distributor violated": 437, "exec candidate": 81571, "exec collide": 20136, "exec fuzz": 38278, "exec gen": 2000, "exec hints": 25607, "exec inject": 0, "exec minimize": 19519, "exec retries": 22, "exec seeds": 2105, "exec smash": 17802, "exec total [base]": 164775, "exec total [new]": 472774, "exec triage": 147561, "executor restarts [base]": 843, "executor restarts [new]": 1985, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 10, "max signal": 309462, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11490, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47180, "no exec duration": 136744000000, "no exec requests": 506, "pending": 45, "prog exec time": 390, "reproducing": 3, "rpc recv": 25916241672, "rpc sent": 5683320416, "signal": 298155, "smash jobs": 4, "triage jobs": 3, "vm output": 109363125, "vm restarts [base]": 146, "vm restarts [new]": 332 } 2025/10/14 10:58:12 base crash: lost connection to test machine 2025/10/14 10:58:21 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:58:30 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:58:53 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:58:54 runner 5 connected 2025/10/14 10:59:01 base crash: lost connection to test machine 2025/10/14 10:59:01 runner 1 connected 2025/10/14 10:59:11 runner 8 connected 2025/10/14 10:59:21 crash "INFO: task hung in reg_process_self_managed_hints" is already known 2025/10/14 10:59:21 base crash "INFO: task hung in reg_process_self_managed_hints" is to be ignored 2025/10/14 10:59:21 patched crashed: INFO: task hung in reg_process_self_managed_hints [need repro = false] 2025/10/14 10:59:22 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 10:59:24 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 10:59:40 base crash: lost connection to test machine 2025/10/14 10:59:43 runner 6 connected 2025/10/14 10:59:50 runner 0 connected 2025/10/14 10:59:52 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 11:00:10 runner 7 connected 2025/10/14 11:00:15 runner 4 connected 2025/10/14 11:00:28 runner 1 connected 2025/10/14 11:00:29 base crash: lost connection to test machine 2025/10/14 11:00:39 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 11:00:40 runner 8 connected 2025/10/14 11:00:42 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 11:00:43 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or 
directory 2025/10/14 11:00:57 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 11:01:18 runner 0 connected 2025/10/14 11:01:33 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 11:01:35 runner 5 connected 2025/10/14 11:01:38 runner 6 connected 2025/10/14 11:01:47 runner 7 connected 2025/10/14 11:02:11 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 11:02:18 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 11:02:33 base crash: lost connection to test machine 2025/10/14 11:02:42 patched crashed: lost connection to test machine [need repro = false] 2025/10/14 11:02:50 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 11:03:07 bug reporting terminated 2025/10/14 11:03:07 status reporting terminated 2025/10/14 11:03:07 base: rpc server terminaled 2025/10/14 11:03:07 new: rpc server terminaled 2025/10/14 11:03:22 base: pool terminated 2025/10/14 11:03:22 base: kernel context loop terminated 2025/10/14 11:03:23 reproducing crash 'possible deadlock in pcpu_alloc_noprof': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f drivers/block/nbd.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/10/14 11:03:23 repro finished 'possible deadlock in 
pcpu_alloc_noprof', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 11:04:22 repro finished 'INFO: task hung in synchronize_rcu', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 11:05:21 repro finished 'possible deadlock in __pte_offset_map_lock', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/10/14 11:05:21 repro loop terminated 2025/10/14 11:05:21 new: pool terminated 2025/10/14 11:05:21 new: kernel context loop terminated 2025/10/14 11:05:21 diff fuzzing terminated 2025/10/14 11:05:21 fuzzing is finished 2025/10/14 11:05:21 status at the end: Title On-Base On-Patched possible deadlock in __pte_offset_map_lock 52 crashes[reproduced] BUG: sleeping function called from invalid context in hook_sb_delete 3 crashes INFO: rcu detected stall in corrupted 1 crashes 1 crashes INFO: task hung in corrupted 1 crashes 4 crashes INFO: task hung in read_part_sector 1 crashes INFO: task hung in reg_process_self_managed_hints 1 crashes INFO: task hung in synchronize_rcu 1 crashes KASAN: out-of-bounds Read in ext4_xattr_set_entry 1 crashes KASAN: use-after-free Read in hpfs_get_ea 3 crashes WARNING in xfrm6_tunnel_net_exit 2 crashes 5 crashes WARNING in xfrm_state_fini 5 crashes 10 crashes general protection fault in lock_sock_nested 1 crashes general protection fault in pcl818_ai_cancel 5 crashes 9 crashes kernel BUG in jfs_evict_inode 4 crashes kernel BUG in txUnlock 3 crashes 11 crashes lost connection to test machine 120 crashes 211 crashes no output from test machine 1 crashes 3 crashes possible deadlock in dqget 1 crashes possible deadlock in hfsplus_block_allocate 1 crashes possible deadlock in ntfs_fiemap 1 crashes possible deadlock in ocfs2_init_acl 1 crashes 1 crashes possible deadlock in ocfs2_reserve_suballoc_bits 1 crashes possible deadlock in ocfs2_try_remove_refcount_tree 1 crashes 7 crashes possible deadlock in ocfs2_xattr_set 1 crashes 2 crashes possible deadlock in pcpu_alloc_noprof 1 
crashes 1 crashes unregister_netdevice: waiting for DEV to become free 2 crashes 1 crashes 2025/10/14 11:05:21 possibly patched-only: possible deadlock in __pte_offset_map_lock