2025/09/23 14:37:32 extracted 327351 text symbol hashes for base and 327353 for patched
2025/09/23 14:37:32 symbol "split_huge_pages_in_file.__UNIQUE_ID_ddebug1729" has different values in base vs patch
2025/09/23 14:37:32 binaries are different, continuing fuzzing
2025/09/23 14:37:32 adding modified_functions to focus areas: ["__access_remote_vm" "__folio_split" "__handle_mm_fault" "__pfx_set_nohugepfnmap" "__pte_alloc" "__pte_alloc_kernel" "__vm_insert_mixed" "__vmf_anon_prepare" "can_change_pmd_writable" "change_huge_pmd" "clear_gigantic_page" "copy_folio_from_user" "copy_huge_pmd" "copy_page_range" "copy_pmd_range" "copy_remote_vm_str" "copy_user_gigantic_page" "copy_user_large_folio" "dax_fault_iter" "dax_insert_entry" "do_huge_pmd_anonymous_page" "do_huge_pmd_wp_page" "do_set_pmd" "do_swap_page" "do_wp_page" "folio_zero_user" "follow_pfnmap_start" "handle_mm_fault" "insert_page" "insert_pmd" "madvise_free_huge_pmd" "mm_get_huge_zero_folio" "remap_pfn_range_notrack" "remove_device_exclusive_entry" "set_nohugepfnmap" "split_folio_to_list" "split_huge_pages_all" "split_huge_pages_in_file" "split_huge_pages_write" "split_huge_pmd_locked" "try_restore_exclusive_pte" "unmap_huge_pmd_locked" "unmap_page_range" "vm_insert_pages" "vmf_insert_folio_pmd" "vmf_insert_pfn_pmd" "zap_huge_pmd"]
2025/09/23 14:37:32 adding directly modified files to focus areas: ["arch/arm64/include/asm/pgtable.h" "arch/riscv/include/asm/pgtable.h" "include/linux/pgtable.h" "mm/huge_memory.c" "mm/memory.c"]
2025/09/23 14:37:33 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/09/23 14:38:30 runner 6 connected
2025/09/23 14:38:30 runner 8 connected
2025/09/23 14:38:30 runner 0 connected
2025/09/23 14:38:30 runner 1 connected
2025/09/23 14:38:30 runner 9 connected
2025/09/23 14:38:30 runner 2 connected
2025/09/23 14:38:30 runner 2 connected
2025/09/23 14:38:30 runner 4 connected
2025/09/23 14:38:31 runner 5 connected
2025/09/23 14:38:31 runner 3 connected
2025/09/23 14:38:31 runner 7 connected
2025/09/23 14:38:32 runner 3 connected
2025/09/23 14:38:32 runner 0 connected
2025/09/23 14:38:32 runner 1 connected
2025/09/23 14:38:37 executor cover filter: 0 PCs
2025/09/23 14:38:37 initializing coverage information...
2025/09/23 14:38:40 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] 
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8055
2025/09/23 14:38:40 base: machine check complete
2025/09/23 14:38:41 discovered 7699 source files, 338750 symbols
2025/09/23 14:38:41 coverage filter: __access_remote_vm: [__access_remote_vm]
2025/09/23 14:38:41 coverage filter: __folio_split: [__folio_split]
2025/09/23 14:38:41 coverage filter: __handle_mm_fault: [__handle_mm_fault]
2025/09/23 14:38:41 coverage filter: __pfx_set_nohugepfnmap: []
2025/09/23 14:38:41 coverage filter: __pte_alloc: [__pte_alloc __pte_alloc_kernel]
2025/09/23 14:38:41 coverage filter: __pte_alloc_kernel: []
2025/09/23 14:38:41 coverage filter: __vm_insert_mixed: [__vm_insert_mixed]
2025/09/23 14:38:41 coverage filter: __vmf_anon_prepare: [__vmf_anon_prepare]
2025/09/23 14:38:41 coverage filter: can_change_pmd_writable: [can_change_pmd_writable]
2025/09/23 14:38:41 coverage filter: change_huge_pmd: [change_huge_pmd]
2025/09/23 14:38:41 coverage filter: clear_gigantic_page: [clear_gigantic_page]
2025/09/23 14:38:41 coverage filter: copy_folio_from_user: [copy_folio_from_user]
2025/09/23 14:38:41 coverage filter: copy_huge_pmd: [copy_huge_pmd]
2025/09/23 14:38:41 coverage filter: copy_page_range: [copy_page_range]
2025/09/23 14:38:41 coverage filter: copy_pmd_range: [copy_pmd_range]
2025/09/23 14:38:41 coverage filter: copy_remote_vm_str: [copy_remote_vm_str]
2025/09/23 14:38:41 coverage filter: copy_user_gigantic_page: [copy_user_gigantic_page]
2025/09/23 14:38:41 coverage filter: copy_user_large_folio: [copy_user_large_folio]
2025/09/23 14:38:41 coverage filter: dax_fault_iter: [dax_fault_iter]
2025/09/23 14:38:41 coverage filter: dax_insert_entry: [dax_insert_entry]
2025/09/23 14:38:41 coverage filter: do_huge_pmd_anonymous_page: [do_huge_pmd_anonymous_page]
2025/09/23 14:38:41 coverage filter: do_huge_pmd_wp_page: [do_huge_pmd_wp_page]
2025/09/23 14:38:41 coverage filter: do_set_pmd: [do_set_pmd]
2025/09/23 14:38:41 coverage filter: do_swap_page: [do_swap_page]
2025/09/23 14:38:41 coverage filter: do_wp_page: [do_wp_page]
2025/09/23 14:38:41 coverage filter: folio_zero_user: [folio_zero_user]
2025/09/23 14:38:41 coverage filter: follow_pfnmap_start: [follow_pfnmap_start]
2025/09/23 14:38:41 coverage filter: handle_mm_fault: [handle_mm_fault]
2025/09/23 14:38:41 coverage filter: insert_page: [bxt_vtd_ggtt_insert_page__BKL bxt_vtd_ggtt_insert_page__cb dpt_insert_page gen6_ggtt_insert_page gen8_ggtt_insert_page gen8_ggtt_insert_page_bind gmch_ggtt_insert_page insert_page insert_page_into_pte_locked intel_gmch_gtt_insert_page intel_gmch_gtt_insert_pages null_insert_page vm_insert_page vm_insert_pages vmf_insert_page_mkwrite]
2025/09/23 14:38:41 coverage filter: insert_pmd: [insert_pmd]
2025/09/23 14:38:41 coverage filter: madvise_free_huge_pmd: [madvise_free_huge_pmd]
2025/09/23 14:38:41 coverage filter: mm_get_huge_zero_folio: [mm_get_huge_zero_folio]
2025/09/23 14:38:41 coverage filter: remap_pfn_range_notrack: [remap_pfn_range_notrack]
2025/09/23 14:38:41 coverage filter: remove_device_exclusive_entry: [remove_device_exclusive_entry]
2025/09/23 14:38:41 coverage filter: set_nohugepfnmap: []
2025/09/23 14:38:41 coverage filter: split_folio_to_list: [split_folio_to_list]
2025/09/23 14:38:41 coverage filter: split_huge_pages_all: [split_huge_pages_all]
2025/09/23 14:38:41 coverage filter: split_huge_pages_in_file: [split_huge_pages_in_file]
2025/09/23 14:38:41 coverage filter: split_huge_pages_write: [split_huge_pages_write]
2025/09/23 14:38:41 coverage filter: split_huge_pmd_locked: [split_huge_pmd_locked]
2025/09/23 14:38:41 coverage filter: try_restore_exclusive_pte: [try_restore_exclusive_pte]
2025/09/23 14:38:41 coverage filter: unmap_huge_pmd_locked: [unmap_huge_pmd_locked]
2025/09/23 14:38:41 coverage filter: unmap_page_range: [unmap_page_range]
2025/09/23 14:38:41 coverage filter: vm_insert_pages: []
2025/09/23 14:38:41 coverage filter: vmf_insert_folio_pmd: [vmf_insert_folio_pmd]
2025/09/23 14:38:41 coverage filter: vmf_insert_pfn_pmd: [vmf_insert_pfn_pmd]
2025/09/23 14:38:41 coverage filter: zap_huge_pmd: [zap_huge_pmd]
2025/09/23 14:38:41 coverage filter: arch/arm64/include/asm/pgtable.h: []
2025/09/23 14:38:41 coverage filter: arch/riscv/include/asm/pgtable.h: []
2025/09/23 14:38:41 coverage filter: include/linux/pgtable.h: []
2025/09/23 14:38:41 coverage filter: mm/huge_memory.c: [mm/huge_memory.c]
2025/09/23 14:38:41 coverage filter: mm/memory.c: [mm/memory.c]
2025/09/23 14:38:41 area "symbols": 5737 PCs in the cover filter
2025/09/23 14:38:41 area "files": 9798 PCs in the cover filter
2025/09/23 14:38:41 area "": 0 PCs in the cover filter
2025/09/23 14:38:41 executor cover filter: 0 PCs
2025/09/23 14:38:45 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access :
failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open 
/dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open 
/dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] 
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] 
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] 
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8055
2025/09/23 14:38:45 new: machine check complete
2025/09/23 14:38:45 new: adding 80382 seeds
2025/09/23 14:39:17 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:39:17 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:39:28 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:39:28 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:40:06 runner 5 connected
2025/09/23 14:40:17 runner 2 connected
2025/09/23 14:41:19 patched crashed: WARNING in driver_unregister [need repro = true]
2025/09/23 14:41:19 scheduled a reproduction of 'WARNING in driver_unregister'
2025/09/23 14:41:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:41:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:41:38 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:41:38 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:41:48 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:41:48 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:42:08 runner 3 connected
2025/09/23 14:42:22 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:42:22 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:42:23 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:42:23 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:42:24 runner 5 connected
2025/09/23 14:42:26 runner 7 connected
2025/09/23 14:42:33 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:42:33 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:42:34 STAT { "buffer too small": 0, "candidate triage jobs": 63, "candidates": 75343, "comps overflows": 0, "corpus": 4951, "corpus [files]": 6932, "corpus [symbols]": 1058, "cover overflows": 3218, "coverage": 162666, "distributor delayed": 5105, "distributor undelayed": 5076, "distributor violated": 1, "exec candidate": 5039, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 12037, "exec total [new]": 22639, "exec triage": 15803, "executor restarts [base]": 53, "executor restarts [new]": 103, "fault jobs": 0, "fuzzer jobs": 63, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 166519, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 5039, "no exec duration": 48363000000, "no exec requests": 326, "pending": 9, "prog exec time": 316, "reproducing": 0, "rpc recv": 1418536720, "rpc sent": 106254544, "signal": 159769, "smash jobs": 0, "triage jobs": 0, "vm output": 2126786, "vm restarts [base]": 4, "vm restarts [new]": 15 }
2025/09/23 14:42:37 runner 0 connected
2025/09/23 14:43:11 runner 2 connected
2025/09/23 14:43:13 runner 8 connected
2025/09/23 14:43:22 base crash: WARNING in xfrm_state_fini
2025/09/23 14:43:29 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 14:43:30 runner 9 connected
2025/09/23 14:43:37 base crash: lost connection to test machine
2025/09/23 14:43:43 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 14:43:46 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 14:44:09 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 14:44:11 runner 0 connected
2025/09/23 14:44:18 runner 3 connected
2025/09/23 14:44:31 runner 5 connected
2025/09/23 14:44:34 runner 1 connected
2025/09/23 14:44:35 runner 7 connected
2025/09/23 14:44:42 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:44:42 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:44:51 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true]
2025/09/23 14:44:51 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 14:44:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:44:53 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:45:00 runner 8 connected
2025/09/23 14:45:02 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true]
2025/09/23 14:45:02 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 14:45:12 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true]
2025/09/23 14:45:12 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 14:45:23 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true]
2025/09/23 14:45:23 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 14:45:25 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 14:45:33 runner 6 connected
2025/09/23 14:45:39 runner 3 connected
2025/09/23 14:45:42 runner 4 connected
2025/09/23 14:45:50 runner 2 connected
2025/09/23 14:46:01 runner 9 connected
2025/09/23 14:46:10 base crash: WARNING in xfrm_state_fini
2025/09/23 14:46:12 runner 1 connected
2025/09/23 14:46:14 runner 0 connected
2025/09/23 14:46:27 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:46:27 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:46:37 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:46:37 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:46:39 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:46:39 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:46:49 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:46:49 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:46:52 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:46:52 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:46:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:46:53 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:47:00 runner 0 connected
2025/09/23 14:47:02 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:47:02 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:47:03 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:47:03 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:47:08 base crash: WARNING in xfrm_state_fini
2025/09/23 14:47:17 runner 5 connected
2025/09/23 14:47:26 runner 2 connected
2025/09/23 14:47:29 runner 3 connected
2025/09/23 14:47:34 STAT { "buffer too small": 0, "candidate triage jobs": 356, "candidates": 71308, "comps overflows": 0, "corpus": 8666, "corpus [files]": 11015, "corpus [symbols]": 1661, "cover overflows": 5696, "coverage": 195772, "distributor delayed": 11556, "distributor undelayed": 11227, "distributor violated": 206, "exec candidate": 9074, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24277, "exec total [new]": 39895, "exec triage": 27652, "executor restarts [base]": 69, "executor restarts [new]": 172, "fault jobs": 0, "fuzzer jobs": 356, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 199660, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 9074, "no exec duration": 48363000000, "no exec requests": 326, "pending": 23, "prog exec time": 218, "reproducing": 0, "rpc recv": 2756740432, "rpc sent": 224178312, "signal": 192297, "smash jobs": 0, "triage jobs": 0, "vm output": 4324769, "vm restarts [base]": 7, "vm restarts [new]": 33 }
2025/09/23 14:47:37 runner 1 connected
2025/09/23 14:47:39 base crash: lost connection to test machine
2025/09/23 14:47:40 runner 8 connected
2025/09/23 14:47:42 runner 7 connected
2025/09/23 14:47:51 runner 6 connected
2025/09/23 14:47:52 runner 4 connected
2025/09/23 14:47:57 runner 2 connected
2025/09/23 14:48:28 runner 3 connected
2025/09/23 14:49:06 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 14:49:26 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:49:26 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:49:26 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 14:49:37 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:49:37 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:49:39 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 14:49:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:49:53 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:49:55 runner 4 connected
2025/09/23 14:49:57 base crash: WARNING in xfrm_state_fini
2025/09/23 14:50:04 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:50:04 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:50:16 runner 9 connected
2025/09/23 14:50:16 runner 7 connected
2025/09/23 14:50:26 runner 8 connected
2025/09/23 14:50:26 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 14:50:28 runner 2 connected
2025/09/23 14:50:43 runner 1 connected
2025/09/23 14:50:46 runner 3 connected
2025/09/23 14:50:53 runner 6 connected
2025/09/23 14:51:16 runner 5 connected
2025/09/23 14:51:47 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:51:47 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:51:54 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:51:54 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:51:57 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:51:57 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:52:04 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 14:52:04 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 14:52:28
patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:52:28 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:52:34 STAT { "buffer too small": 0, "candidate triage jobs": 48, "candidates": 65950, "comps overflows": 0, "corpus": 14287, "corpus [files]": 16656, "corpus [symbols]": 2454, "cover overflows": 9529, "coverage": 227594, "distributor delayed": 17974, "distributor undelayed": 17952, "distributor violated": 490, "exec candidate": 14432, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 38482, "exec total [new]": 65623, "exec triage": 44778, "executor restarts [base]": 84, "executor restarts [new]": 240, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 229988, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 14432, "no exec duration": 48369000000, "no exec requests": 327, "pending": 32, "prog exec time": 234, "reproducing": 0, "rpc recv": 4179942348, "rpc sent": 380996696, "signal": 223744, "smash jobs": 0, "triage jobs": 0, "vm output": 7038385, "vm restarts [base]": 10, "vm restarts [new]": 46 } 2025/09/23 14:52:35 runner 2 connected 2025/09/23 14:52:38 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:52:38 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:52:45 runner 1 connected 2025/09/23 14:52:46 runner 7 connected 2025/09/23 14:52:52 runner 3 connected 2025/09/23 14:52:57 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:52:57 scheduled a 
reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:53:08 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:53:08 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:53:17 runner 9 connected 2025/09/23 14:53:26 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:53:26 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:53:28 runner 8 connected 2025/09/23 14:53:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:53:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:53:46 runner 6 connected 2025/09/23 14:53:57 runner 0 connected 2025/09/23 14:54:15 runner 5 connected 2025/09/23 14:54:33 runner 2 connected 2025/09/23 14:54:54 base crash "KASAN: slab-use-after-free Read in xfrm_alloc_spi" is already known 2025/09/23 14:54:54 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/09/23 14:55:02 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:55:02 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:55:14 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:55:14 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:55:33 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:21692: connect: connection refused 2025/09/23 14:55:33 VM-2 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:21692: connect: connection refused 2025/09/23 14:55:39 base crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/09/23 14:55:39 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/09/23 14:55:42 runner 7 
connected 2025/09/23 14:55:43 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 14:55:48 base crash "kernel BUG in txUnlock" is already known 2025/09/23 14:55:48 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/23 14:55:49 base crash "kernel BUG in txUnlock" is already known 2025/09/23 14:55:49 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/23 14:55:49 base crash "kernel BUG in txUnlock" is already known 2025/09/23 14:55:49 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/23 14:55:51 runner 4 connected 2025/09/23 14:55:52 base crash "kernel BUG in txUnlock" is already known 2025/09/23 14:55:52 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/23 14:55:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34092: connect: connection refused 2025/09/23 14:55:58 VM-1 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:34092: connect: connection refused 2025/09/23 14:56:05 runner 3 connected 2025/09/23 14:56:08 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 14:56:19 base crash: WARNING in xfrm6_tunnel_net_exit 2025/09/23 14:56:23 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup 2025/09/23 14:56:28 runner 9 connected 2025/09/23 14:56:31 runner 2 connected 2025/09/23 14:56:33 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:56:33 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:56:37 runner 0 connected 2025/09/23 14:56:38 runner 8 connected 2025/09/23 14:56:39 runner 5 connected 2025/09/23 14:56:41 runner 6 connected 2025/09/23 14:56:44 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:56:44 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:56:57 runner 1 connected 2025/09/23 
14:57:08 runner 0 connected 2025/09/23 14:57:12 runner 1 connected 2025/09/23 14:57:17 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:57:17 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:57:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:57:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:57:20 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:57:20 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:57:22 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:57:22 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:57:22 runner 4 connected 2025/09/23 14:57:32 runner 7 connected 2025/09/23 14:57:34 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 62082, "comps overflows": 0, "corpus": 18136, "corpus [files]": 20404, "corpus [symbols]": 2957, "cover overflows": 11574, "coverage": 242100, "distributor delayed": 23284, "distributor undelayed": 23284, "distributor violated": 507, "exec candidate": 18300, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 58805, "exec total [new]": 83501, "exec triage": 56539, "executor restarts [base]": 92, "executor restarts [new]": 335, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 244516, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 18300, "no exec duration": 
48990000000, "no exec requests": 335, "pending": 45, "prog exec time": 186, "reproducing": 0, "rpc recv": 5764631468, "rpc sent": 559280536, "signal": 238037, "smash jobs": 0, "triage jobs": 0, "vm output": 9820343, "vm restarts [base]": 12, "vm restarts [new]": 68 } 2025/09/23 14:57:59 base crash: WARNING in xfrm6_tunnel_net_exit 2025/09/23 14:58:05 runner 6 connected 2025/09/23 14:58:08 runner 8 connected 2025/09/23 14:58:09 runner 5 connected 2025/09/23 14:58:11 runner 2 connected 2025/09/23 14:58:56 base crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is already known 2025/09/23 14:58:56 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = false] 2025/09/23 14:58:56 runner 2 connected 2025/09/23 14:59:14 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:59:14 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:59:14 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:59:14 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:59:15 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:59:15 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:59:15 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:59:15 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:59:27 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 14:59:27 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 14:59:40 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/09/23 14:59:40 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/09/23 14:59:44 runner 6 connected 
2025/09/23 14:59:51 base crash "possible deadlock in ocfs2_reserve_suballoc_bits" is already known 2025/09/23 14:59:51 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/09/23 15:00:03 runner 9 connected 2025/09/23 15:00:04 runner 0 connected 2025/09/23 15:00:05 runner 4 connected 2025/09/23 15:00:10 runner 2 connected 2025/09/23 15:00:15 runner 3 connected 2025/09/23 15:00:28 runner 1 connected 2025/09/23 15:00:40 runner 7 connected 2025/09/23 15:00:45 base crash: lost connection to test machine 2025/09/23 15:00:56 base crash: possible deadlock in ocfs2_xattr_set 2025/09/23 15:01:34 runner 2 connected 2025/09/23 15:01:46 runner 3 connected 2025/09/23 15:02:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:02:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:02:29 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:02:29 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:02:34 STAT { "buffer too small": 0, "candidate triage jobs": 45, "candidates": 56785, "comps overflows": 0, "corpus": 23374, "corpus [files]": 25267, "corpus [symbols]": 3537, "cover overflows": 14541, "coverage": 259853, "distributor delayed": 28597, "distributor undelayed": 28597, "distributor violated": 591, "exec candidate": 23597, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 72034, "exec total [new]": 108799, "exec triage": 72650, "executor restarts [base]": 107, "executor restarts [new]": 410, "fault jobs": 0, "fuzzer jobs": 45, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 261910, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: 
pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 23597, "no exec duration": 49035000000, "no exec requests": 338, "pending": 52, "prog exec time": 124, "reproducing": 0, "rpc recv": 7123641168, "rpc sent": 731510896, "signal": 255461, "smash jobs": 0, "triage jobs": 0, "vm output": 13613588, "vm restarts [base]": 15, "vm restarts [new]": 80 } 2025/09/23 15:02:54 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/09/23 15:03:15 runner 5 connected 2025/09/23 15:03:24 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:03:24 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:03:27 runner 7 connected 2025/09/23 15:03:33 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/09/23 15:03:35 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:03:35 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:03:43 runner 0 connected 2025/09/23 15:04:13 runner 8 connected 2025/09/23 15:04:22 runner 3 connected 2025/09/23 15:04:24 runner 0 connected 2025/09/23 15:04:54 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 15:05:25 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:05:25 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:05:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:05:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:05:43 runner 8 connected 2025/09/23 15:06:14 runner 4 connected 2025/09/23 15:06:26 runner 7 connected 2025/09/23 15:07:25 base crash "general protection fault in pcl818_ai_cancel" is already known 2025/09/23 15:07:25 patched crashed: general protection fault in 
pcl818_ai_cancel [need repro = false] 2025/09/23 15:07:34 STAT { "buffer too small": 0, "candidate triage jobs": 49, "candidates": 51347, "comps overflows": 0, "corpus": 28736, "corpus [files]": 30097, "corpus [symbols]": 4141, "cover overflows": 17807, "coverage": 274360, "distributor delayed": 33424, "distributor undelayed": 33421, "distributor violated": 591, "exec candidate": 29035, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 84891, "exec total [new]": 136683, "exec triage": 89226, "executor restarts [base]": 130, "executor restarts [new]": 481, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 276660, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29035, "no exec duration": 49053000000, "no exec requests": 340, "pending": 56, "prog exec time": 195, "reproducing": 0, "rpc recv": 8230757008, "rpc sent": 904745656, "signal": 269699, "smash jobs": 0, "triage jobs": 0, "vm output": 17142021, "vm restarts [base]": 17, "vm restarts [new]": 87 } 2025/09/23 15:07:37 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:07:37 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:07:37 base crash "general protection fault in pcl818_ai_cancel" is already known 2025/09/23 15:07:37 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/09/23 15:07:39 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:07:39 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:07:44 base crash "general 
protection fault in pcl818_ai_cancel" is already known 2025/09/23 15:07:44 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/09/23 15:07:48 base crash "general protection fault in pcl818_ai_cancel" is already known 2025/09/23 15:07:48 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/09/23 15:07:50 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:07:50 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:08:06 base crash: general protection fault in pcl818_ai_cancel 2025/09/23 15:08:15 runner 0 connected 2025/09/23 15:08:26 runner 3 connected 2025/09/23 15:08:28 runner 7 connected 2025/09/23 15:08:29 runner 4 connected 2025/09/23 15:08:30 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/09/23 15:08:33 runner 1 connected 2025/09/23 15:08:37 runner 6 connected 2025/09/23 15:08:38 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:08:38 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:08:39 runner 5 connected 2025/09/23 15:08:49 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:08:49 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:08:55 runner 2 connected 2025/09/23 15:09:18 runner 1 connected 2025/09/23 15:09:20 base crash: general protection fault in pcl818_ai_cancel 2025/09/23 15:09:23 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:09:23 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:09:24 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:09:24 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:09:27 runner 9 connected 
2025/09/23 15:09:34 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:09:34 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:09:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:09:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:09:38 runner 8 connected 2025/09/23 15:10:09 runner 3 connected 2025/09/23 15:10:12 runner 6 connected 2025/09/23 15:10:21 runner 0 connected 2025/09/23 15:10:22 runner 7 connected 2025/09/23 15:10:27 runner 5 connected 2025/09/23 15:10:27 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:10:27 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:10:39 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:10:39 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:10:57 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:10:57 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:10:58 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:10:58 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:05 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:05 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:06 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:06 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:07 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:07 scheduled a 
reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:07 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:07 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:08 base crash: general protection fault in pcl818_ai_cancel 2025/09/23 15:11:08 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:08 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:09 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:09 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:16 runner 2 connected 2025/09/23 15:11:27 runner 1 connected 2025/09/23 15:11:38 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:38 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:44 runner 5 connected 2025/09/23 15:11:47 runner 3 connected 2025/09/23 15:11:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:11:53 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:11:54 runner 8 connected 2025/09/23 15:11:55 runner 9 connected 2025/09/23 15:11:55 runner 0 connected 2025/09/23 15:11:57 runner 4 connected 2025/09/23 15:11:58 runner 0 connected 2025/09/23 15:11:58 runner 7 connected 2025/09/23 15:11:58 runner 6 connected 2025/09/23 15:12:05 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:12:05 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:12:28 runner 2 connected 2025/09/23 15:12:34 STAT { "buffer too small": 0, "candidate triage jobs": 35, "candidates": 48708, "comps overflows": 0, "corpus": 31351, "corpus [files]": 32396, 
"corpus [symbols]": 4445, "cover overflows": 19513, "coverage": 281099, "distributor delayed": 37060, "distributor undelayed": 37060, "distributor violated": 760, "exec candidate": 31674, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 95640, "exec total [new]": 150831, "exec triage": 97279, "executor restarts [base]": 167, "executor restarts [new]": 582, "fault jobs": 0, "fuzzer jobs": 35, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 283495, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 31674, "no exec duration": 49053000000, "no exec requests": 340, "pending": 78, "prog exec time": 286, "reproducing": 0, "rpc recv": 9616538356, "rpc sent": 1044740648, "signal": 276491, "smash jobs": 0, "triage jobs": 0, "vm output": 19536925, "vm restarts [base]": 21, "vm restarts [new]": 111 } 2025/09/23 15:12:40 runner 1 connected 2025/09/23 15:12:54 runner 5 connected 2025/09/23 15:13:07 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known 2025/09/23 15:13:07 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 15:13:10 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:13:10 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:13:20 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:13:20 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:13:30 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:13:30 scheduled a reproduction 
of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:13:32 base crash "possible deadlock in ocfs2_init_acl" is already known 2025/09/23 15:13:32 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/23 15:13:36 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 15:13:37 base crash: unregister_netdevice: waiting for DEV to become free 2025/09/23 15:13:42 base crash "possible deadlock in ocfs2_init_acl" is already known 2025/09/23 15:13:42 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/23 15:13:56 runner 6 connected 2025/09/23 15:13:59 runner 1 connected 2025/09/23 15:14:10 runner 8 connected 2025/09/23 15:14:18 runner 7 connected 2025/09/23 15:14:21 runner 9 connected 2025/09/23 15:14:24 runner 4 connected 2025/09/23 15:14:26 runner 1 connected 2025/09/23 15:14:28 base crash: unregister_netdevice: waiting for DEV to become free 2025/09/23 15:14:31 runner 5 connected 2025/09/23 15:14:34 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:14:34 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:14:38 base crash: possible deadlock in ocfs2_init_acl 2025/09/23 15:14:46 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 15:14:46 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:15:00 base crash "kernel BUG in jfs_evict_inode" is already known 2025/09/23 15:15:00 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/09/23 15:15:06 base crash "kernel BUG in jfs_evict_inode" is already known 2025/09/23 15:15:06 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2025/09/23 15:15:17 runner 3 connected 2025/09/23 15:15:24 runner 1 connected 2025/09/23 15:15:30 runner 0 connected 2025/09/23 15:15:35 runner 6 connected 2025/09/23 15:15:49 runner 7 connected 
2025/09/23 15:15:55 runner 3 connected
2025/09/23 15:16:23 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:16:23 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:16:27 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:16:27 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:16:28 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/09/23 15:16:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/23 15:16:28 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/23 15:16:35 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:16:35 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:16:38 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:16:38 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:16:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/23 15:16:48 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:16:48 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:16:59 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:16:59 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:17:12 runner 6 connected
2025/09/23 15:17:16 runner 5 connected
2025/09/23 15:17:17 runner 0 connected
2025/09/23 15:17:18 runner 3 connected
2025/09/23 15:17:23 runner 9 connected
2025/09/23 15:17:26 runner 7 connected
2025/09/23 15:17:28 runner 1 connected
2025/09/23 15:17:34 STAT { "buffer too small": 0, "candidate triage jobs": 86, "candidates": 45063, "comps overflows": 0, "corpus": 34905, "corpus [files]": 35540, "corpus [symbols]": 4815, "cover overflows": 21574, "coverage": 289696, "distributor delayed": 41879, "distributor undelayed": 41831, "distributor violated": 1109, "exec candidate": 35319, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 5, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108857, "exec total [new]": 170580, "exec triage": 108257, "executor restarts [base]": 191, "executor restarts [new]": 658, "fault jobs": 0, "fuzzer jobs": 86, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 292202, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 35319, "no exec duration": 49092000000, "no exec requests": 342, "pending": 89, "prog exec time": 184, "reproducing": 0, "rpc recv": 10989256680, "rpc sent": 1201419600, "signal": 285020, "smash jobs": 0, "triage jobs": 0, "vm output": 22475931, "vm restarts [base]": 25, "vm restarts [new]": 130 }
2025/09/23 15:17:38 runner 2 connected
2025/09/23 15:17:44 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:17:44 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:17:48 runner 8 connected
2025/09/23 15:17:55 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:17:55 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:18:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:18:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:18:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:18:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:18:23 base crash "kernel BUG in txUnlock" is already known
2025/09/23 15:18:23 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/23 15:18:23 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/23 15:18:24 base crash "kernel BUG in txUnlock" is already known
2025/09/23 15:18:24 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/23 15:18:25 base crash "kernel BUG in txUnlock" is already known
2025/09/23 15:18:25 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/23 15:18:30 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:18:30 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:18:30 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:18:30 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:18:31 runner 3 connected
2025/09/23 15:18:32 base crash: kernel BUG in txUnlock
2025/09/23 15:18:36 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/23 15:18:37 base crash: lost connection to test machine
2025/09/23 15:18:44 runner 6 connected
2025/09/23 15:19:01 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:19:01 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:19:08 runner 9 connected
2025/09/23 15:19:09 runner 2 connected
2025/09/23 15:19:11 runner 4 connected
2025/09/23 15:19:13 runner 1 connected
2025/09/23 15:19:13 runner 8 connected
2025/09/23 15:19:13 runner 1 connected
2025/09/23 15:19:19 runner 5 connected
2025/09/23 15:19:20 runner 2 connected
2025/09/23 15:19:20 runner 0 connected
2025/09/23 15:19:24 runner 7 connected
2025/09/23 15:19:26 runner 0 connected
2025/09/23 15:19:50 runner 3 connected
2025/09/23 15:19:57 base crash: kernel BUG in txUnlock
2025/09/23 15:20:26 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup
2025/09/23 15:20:45 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:20:45 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:20:53 runner 1 connected
2025/09/23 15:20:57 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:20:57 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:21:16 runner 3 connected
2025/09/23 15:21:35 runner 7 connected
2025/09/23 15:21:46 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:21:46 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:21:54 runner 0 connected
2025/09/23 15:21:57 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:21:57 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:22:34 STAT { "buffer too small": 0, "candidate triage jobs": 40, "candidates": 41216, "comps overflows": 0, "corpus": 38753, "corpus [files]": 38796, "corpus [symbols]": 5162, "cover overflows": 23636, "coverage": 297933, "distributor delayed": 45507, "distributor undelayed": 45507, "distributor violated": 1109, "exec candidate": 39166, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 118102, "exec total [new]": 191710, "exec triage": 119942, "executor restarts [base]": 218, "executor restarts [new]": 738, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 300360, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 39166, "no exec duration": 49092000000, "no exec requests": 342, "pending": 100, "prog exec time": 197, "reproducing": 0, "rpc recv": 12274948640, "rpc sent": 1364451416, "signal": 293243, "smash jobs": 0, "triage jobs": 0, "vm output": 26370054, "vm restarts [base]": 30, "vm restarts [new]": 145 }
2025/09/23 15:22:42 runner 4 connected
2025/09/23 15:22:46 runner 9 connected
2025/09/23 15:23:20 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:23:20 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:23:26 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/23 15:23:30 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:23:30 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:23:56 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:23:56 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:24:09 runner 8 connected
2025/09/23 15:24:15 runner 1 connected
2025/09/23 15:24:20 runner 1 connected
2025/09/23 15:24:43 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:24:43 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:24:53 runner 7 connected
2025/09/23 15:25:15 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false]
2025/09/23 15:25:32 runner 1 connected
2025/09/23 15:26:05 runner 2 connected
2025/09/23 15:26:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:26:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:26:37 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:26:37 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:26:38 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:26:38 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:26:39 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:26:39 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:27:20 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 15:27:25 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:27:25 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:27:26 runner 5 connected
2025/09/23 15:27:28 runner 9 connected
2025/09/23 15:27:32 runner 7 connected
2025/09/23 15:27:34 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 37674, "comps overflows": 0, "corpus": 42234, "corpus [files]": 41791, "corpus [symbols]": 5537, "cover overflows": 26077, "coverage": 305192, "distributor delayed": 49287, "distributor undelayed": 49267, "distributor violated": 1113, "exec candidate": 42708, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 130071, "exec total [new]": 213809, "exec triage": 130794, "executor restarts [base]": 238, "executor restarts [new]": 796, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 307854, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42708, "no exec duration": 49179000000, "no exec requests": 344, "pending": 109, "prog exec time": 265, "reproducing": 0, "rpc recv": 13194192896, "rpc sent": 1526609360, "signal": 300460, "smash jobs": 0, "triage jobs": 0, "vm output": 29926573, "vm restarts [base]": 31, "vm restarts [new]": 155 }
2025/09/23 15:27:34 runner 4 connected
2025/09/23 15:27:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:27:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:27:52 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:27:52 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:27:53 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true]
2025/09/23 15:27:53 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 15:27:59 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:27:59 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:28:00 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:28:00 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:28:02 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:28:02 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:28:09 runner 0 connected
2025/09/23 15:28:14 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:28:14 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:28:15 runner 2 connected
2025/09/23 15:28:25 runner 3 connected
2025/09/23 15:28:41 runner 5 connected
2025/09/23 15:28:41 runner 1 connected
2025/09/23 15:28:49 runner 9 connected
2025/09/23 15:28:50 runner 7 connected
2025/09/23 15:28:53 runner 6 connected
2025/09/23 15:29:04 runner 8 connected
2025/09/23 15:29:11 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:29:11 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:29:17 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:29:17 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:29:52 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:29:52 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:30:01 runner 1 connected
2025/09/23 15:30:06 runner 9 connected
2025/09/23 15:30:07 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:30:07 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:30:09 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:30:09 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:30:27 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:30:27 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:30:33 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:30:33 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:30:42 runner 7 connected
2025/09/23 15:30:47 base crash: WARNING in io_ring_exit_work
2025/09/23 15:30:56 runner 4 connected
2025/09/23 15:30:58 runner 3 connected
2025/09/23 15:31:04 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup
2025/09/23 15:31:18 runner 2 connected
2025/09/23 15:31:22 runner 6 connected
2025/09/23 15:31:36 runner 0 connected
2025/09/23 15:31:44 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:31:44 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:31:45 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:31:45 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:31:55 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:31:55 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:32:00 runner 3 connected
2025/09/23 15:32:08 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:32:08 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:32:17 base crash: INFO: task hung in ip_tunnel_init_net
2025/09/23 15:32:24 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 15:32:33 runner 5 connected
2025/09/23 15:32:34 STAT { "buffer too small": 0, "candidate triage jobs": 17, "candidates": 36345, "comps overflows": 0, "corpus": 43550, "corpus [files]": 42818, "corpus [symbols]": 5675, "cover overflows": 28513, "coverage": 307900, "distributor delayed": 51020, "distributor undelayed": 51017, "distributor violated": 1115, "exec candidate": 44037, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 139621, "exec total [new]": 230725, "exec triage": 134899, "executor restarts [base]": 254, "executor restarts [new]": 880, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 310597, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44037, "no exec duration": 49214000000, "no exec requests": 347, "pending": 127, "prog exec time": 278, "reproducing": 0, "rpc recv": 14212907148, "rpc sent": 1684387240, "signal": 303200, "smash jobs": 0, "triage jobs": 0, "vm output": 32811005, "vm restarts [base]": 33, "vm restarts [new]": 173 }
2025/09/23 15:32:36 runner 3 connected
2025/09/23 15:32:38 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/23 15:32:44 runner 2 connected
2025/09/23 15:32:57 runner 7 connected
2025/09/23 15:33:01 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:33:01 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:33:06 runner 1 connected
2025/09/23 15:33:12 runner 1 connected
2025/09/23 15:33:27 runner 4 connected
2025/09/23 15:33:50 runner 8 connected
2025/09/23 15:33:54 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:33:54 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:05 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:34:05 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:10 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:34:10 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:21 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:34:21 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:28 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:34:28 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:35 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:34:35 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:35 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:34:35 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:34:44 runner 3 connected
2025/09/23 15:34:55 runner 7 connected
2025/09/23 15:34:59 runner 5 connected
2025/09/23 15:35:10 runner 6 connected
2025/09/23 15:35:15 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/23 15:35:16 runner 4 connected
2025/09/23 15:35:24 runner 8 connected
2025/09/23 15:35:25 runner 2 connected
2025/09/23 15:35:46 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:35:46 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:36:03 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 15:36:04 runner 2 connected
2025/09/23 15:36:13 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:36:13 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:36:15 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:36:15 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:36:18 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:36:18 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:36:39 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:36:39 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:36:39 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/23 15:36:43 runner 9 connected
2025/09/23 15:36:52 runner 4 connected
2025/09/23 15:37:03 runner 7 connected
2025/09/23 15:37:05 runner 5 connected
2025/09/23 15:37:09 runner 0 connected
2025/09/23 15:37:28 runner 0 connected
2025/09/23 15:37:30 runner 8 connected
2025/09/23 15:37:34 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 35523, "comps overflows": 0, "corpus": 44340, "corpus [files]": 43422, "corpus [symbols]": 5739, "cover overflows": 31250, "coverage": 309657, "distributor delayed": 51994, "distributor undelayed": 51994, "distributor violated": 1117, "exec candidate": 44859, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 150135, "exec total [new]": 248325, "exec triage": 137476, "executor restarts [base]": 279, "executor restarts [new]": 969, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 312497, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44859, "no exec duration": 49214000000, "no exec requests": 347, "pending": 140, "prog exec time": 312, "reproducing": 0, "rpc recv": 15213234524, "rpc sent": 1821774008, "signal": 304955, "smash jobs": 0, "triage jobs": 0, "vm output": 35288832, "vm restarts [base]": 36, "vm restarts [new]": 192 }
2025/09/23 15:37:47 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:37:47 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:38:03 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:38:03 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:38:25 patched crashed: KASAN: slab-use-after-free Read in __xfrm_state_lookup [need repro = false]
2025/09/23 15:38:40 base crash: KASAN: slab-use-after-free Write in __xfrm_state_delete
2025/09/23 15:38:43 runner 0 connected
2025/09/23 15:38:43 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:38:43 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:38:52 runner 1 connected
2025/09/23 15:39:18 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:39:18 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:39:21 runner 9 connected
2025/09/23 15:39:31 runner 8 connected
2025/09/23 15:39:37 runner 3 connected
2025/09/23 15:39:43 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:39:43 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:39:54 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:39:54 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:07 runner 0 connected
2025/09/23 15:40:21 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:21 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:30 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:30 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:31 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:31 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:31 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:31 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:31 runner 9 connected
2025/09/23 15:40:36 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:36 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:40 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true]
2025/09/23 15:40:40 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 15:40:42 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:42 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:40:44 runner 1 connected
2025/09/23 15:40:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:40:53 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:41:09 runner 2 connected
2025/09/23 15:41:18 runner 0 connected
2025/09/23 15:41:18 runner 4 connected
2025/09/23 15:41:21 runner 3 connected
2025/09/23 15:41:24 runner 7 connected
2025/09/23 15:41:29 runner 8 connected
2025/09/23 15:41:32 runner 5 connected
2025/09/23 15:41:42 runner 9 connected
2025/09/23 15:41:44 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:41:44 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:41:47 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:41:47 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:42:06 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:42:06 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:42:27 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:42:27 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:42:29 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:42:29 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:42:33 runner 2 connected
2025/09/23 15:42:34 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 34915, "comps overflows": 0, "corpus": 44845, "corpus [files]": 43816, "corpus [symbols]": 5793, "cover overflows": 34005, "coverage": 310942, "distributor delayed": 52760, "distributor undelayed": 52752, "distributor violated": 1120, "exec candidate": 45467, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 162398, "exec total [new]": 264982, "exec triage": 139326, "executor restarts [base]": 299, "executor restarts [new]": 1047, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 314151, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45426, "no exec duration": 60661000000, "no exec requests": 365, "pending": 159, "prog exec time": 203, "reproducing": 0, "rpc recv": 16095770916, "rpc sent": 1975570856, "signal": 306310, "smash jobs": 0, "triage jobs": 0, "vm output": 37454418, "vm restarts [base]": 37, "vm restarts [new]": 208 }
2025/09/23 15:42:37 runner 7 connected
2025/09/23 15:42:40 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 15:42:55 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:42:55 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:42:56 runner 3 connected
2025/09/23 15:42:57 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:42:57 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:43:03 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:43:03 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:43:08 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:43:08 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:43:15 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:43:15 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:43:18 runner 0 connected
2025/09/23 15:43:18 runner 1 connected
2025/09/23 15:43:29 runner 8 connected
2025/09/23 15:43:44 runner 9 connected
2025/09/23 15:43:45 runner 5 connected
2025/09/23 15:43:51 runner 2 connected
2025/09/23 15:43:57 runner 4 connected
2025/09/23 15:44:04 runner 7 connected
2025/09/23 15:44:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:44:53 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:44:57 base crash: WARNING in driver_unregister
2025/09/23 15:45:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:45:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:45:42 runner 7 connected
2025/09/23 15:45:54 runner 3 connected
2025/09/23 15:45:54 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:45:54 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:46:00 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:46:00 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:46:08 runner 9 connected
2025/09/23 15:46:12 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:46:12 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:46:12 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:46:12 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:46:16 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:46:16 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:46:52 runner 4 connected
2025/09/23 15:46:56 runner 1 connected
2025/09/23 15:47:01 runner 6 connected
2025/09/23 15:47:02 runner 3 connected
2025/09/23 15:47:05 runner 7 connected
2025/09/23 15:47:08 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/23 15:47:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:47:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:47:34 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/23 15:47:34 STAT { "buffer too small": 0, "candidate triage jobs": 9, "candidates": 34537, "comps overflows": 0, "corpus": 45102, "corpus [files]": 44003, "corpus [symbols]": 5818, "cover overflows": 37254, "coverage": 311405, "distributor delayed": 53154, "distributor undelayed": 53154, "distributor violated": 1126, "exec candidate": 45845, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 176638, "exec total [new]": 283598, "exec triage": 140316, "executor restarts [base]": 316, "executor restarts [new]": 1136, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 314723, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45722, "no exec duration": 182893000000, "no exec requests": 836, "pending": 172, "prog exec time": 254, "reproducing": 0, "rpc recv": 16998812248, "rpc sent": 2117098488, "signal": 306777, "smash jobs": 0, "triage jobs": 0, "vm output": 39798526, "vm restarts [base]": 38, "vm restarts [new]": 225 }
2025/09/23 15:47:39 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:47:39 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:47:41 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:47:41 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:47:41 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:47:41 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:47:54 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:47:54 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:47:58 runner 0 connected
2025/09/23 15:48:07 runner 8 connected
2025/09/23 15:48:23 runner 0 connected
2025/09/23 15:48:28 runner 1 connected
2025/09/23 15:48:30 runner 3 connected
2025/09/23 15:48:30 runner 2 connected
2025/09/23 15:48:43 runner 9 connected
2025/09/23 15:48:55 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:48:55 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:49:08 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:49:08 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:49:33 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:49:33 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:49:44 runner 7 connected
2025/09/23 15:50:02 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:50:02 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:50:05 runner 1 connected
2025/09/23 15:50:22 runner 6 connected
2025/09/23 15:50:32 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:50:32 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:50:51 runner 3 connected
2025/09/23 15:51:09 base crash: WARNING in xfrm_state_fini
2025/09/23 15:51:22 runner 5 connected
2025/09/23 15:51:53 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/23 15:52:07 runner 2 connected
2025/09/23 15:52:34 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 20287, "comps overflows": 0, "corpus": 45403, "corpus [files]": 44243, "corpus [symbols]": 5851, "cover overflows": 41704, "coverage": 312035, "distributor delayed": 53539, "distributor undelayed": 53539, "distributor violated": 1132, "exec candidate": 60095, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 190586, "exec total [new]": 310168, "exec triage": 141460, "executor restarts [base]": 333, "executor restarts [new]": 1204, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 315471, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46060, "no exec duration": 183206000000, "no exec requests": 843, "pending": 181, "prog exec time": 247, "reproducing": 0, "rpc recv": 17708804764, "rpc sent": 2276475408, "signal": 307382, "smash jobs": 0, "triage jobs": 0, "vm output": 42549966, "vm restarts [base]": 40, "vm restarts [new]": 236 }
2025/09/23 15:52:50 runner 1 connected
2025/09/23 15:54:34 triaged 92.2% of the corpus
2025/09/23 15:54:34 starting bug reproductions
2025/09/23 15:54:34 starting bug reproductions (max 10 VMs, 7 repros)
2025/09/23 15:54:34 reproduction of "WARNING in driver_unregister" aborted: it's no longer needed
2025/09/23 15:54:34 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:54:34 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 15:54:53 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/09/23 15:55:24 runner 0 connected
2025/09/23 15:55:42 runner 9 connected
2025/09/23 15:56:11 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 15:56:32 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:56:32 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:57:00 runner 0 connected
2025/09/23 15:57:06 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/23 15:57:21 runner 6 connected
2025/09/23 15:57:32 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/09/23 15:57:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 11, "corpus": 45499, "corpus [files]": 44331, "corpus [symbols]": 5864, "cover overflows": 46402, "coverage": 312230, "distributor delayed": 53848, "distributor undelayed": 53848, "distributor violated": 1137, "exec candidate": 80382, "exec collide": 766, "exec fuzz": 1496, "exec gen": 87, "exec hints": 581, "exec inject": 0, "exec minimize": 352, "exec retries": 18, "exec seeds": 63, "exec smash": 468, "exec total [base]": 204411, "exec total [new]": 335062, "exec triage": 142251, "executor restarts [base]": 350, "executor restarts [new]": 1251, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 13, "max signal": 316037, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 204, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46270, "no exec duration": 185269000000, "no exec requests": 856, "pending": 179, "prog exec time": 315, "reproducing": 2, "rpc recv": 18207233264, "rpc sent": 2480189696, "signal": 307549, "smash jobs": 5, "triage jobs": 7, "vm output": 44882859, "vm restarts [base]": 41, "vm restarts [new]": 240 }
2025/09/23 15:57:42 base crash "general protection fault in lmLogSync" is already known
2025/09/23 15:57:42 patched crashed: general protection fault in lmLogSync [need repro = false]
2025/09/23 15:57:50 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 15:57:50 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 15:57:51 base crash "WARNING in dbAdjTree" is already known
2025/09/23 15:57:51 patched crashed: WARNING in dbAdjTree [need repro = false]
2025/09/23 15:57:55 runner 3 connected
2025/09/23 15:58:08 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 15:58:21 runner 4 connected
2025/09/23 15:58:32 runner 9 connected
2025/09/23 15:58:38 runner 8 connected
2025/09/23 15:58:39 runner 5 connected
2025/09/23 15:58:56 runner 6 connected
2025/09/23 15:59:01 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM
[need repro = true] 2025/09/23 15:59:01 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 15:59:03 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/23 15:59:18 base crash: lost connection to test machine 2025/09/23 15:59:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 15:59:48 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 15:59:50 runner 7 connected 2025/09/23 15:59:53 runner 1 connected 2025/09/23 16:00:08 runner 3 connected 2025/09/23 16:00:18 runner 0 connected 2025/09/23 16:00:36 runner 9 connected 2025/09/23 16:00:56 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 16:01:04 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 16:01:04 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 16:01:06 base crash: lost connection to test machine 2025/09/23 16:01:46 runner 7 connected 2025/09/23 16:01:54 runner 4 connected 2025/09/23 16:01:56 runner 3 connected 2025/09/23 16:02:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 107, "corpus": 45563, "corpus [files]": 44381, "corpus [symbols]": 5874, "cover overflows": 48621, "coverage": 312338, "distributor delayed": 54053, "distributor undelayed": 54053, "distributor violated": 1137, "exec candidate": 80382, "exec collide": 1651, "exec fuzz": 3176, "exec gen": 180, "exec hints": 1956, "exec inject": 0, "exec minimize": 1948, "exec retries": 18, "exec seeds": 252, "exec smash": 1542, "exec total [base]": 211252, "exec total [new]": 342327, "exec triage": 142610, "executor restarts [base]": 389, "executor restarts [new]": 1338, "fault jobs": 0, "fuzzer jobs": 73, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 7, "hints jobs": 25, "max signal": 316434, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 
1135, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46399, "no exec duration": 188332000000, "no exec requests": 860, "pending": 182, "prog exec time": 688, "reproducing": 2, "rpc recv": 19024647336, "rpc sent": 2705140856, "signal": 307649, "smash jobs": 34, "triage jobs": 14, "vm output": 48519825, "vm restarts [base]": 45, "vm restarts [new]": 250 } 2025/09/23 16:03:48 base crash: lost connection to test machine 2025/09/23 16:03:58 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 16:03:58 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 16:04:33 base crash: lost connection to test machine 2025/09/23 16:04:38 runner 1 connected 2025/09/23 16:04:46 runner 5 connected 2025/09/23 16:05:16 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 16:05:22 runner 3 connected 2025/09/23 16:05:31 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true] 2025/09/23 16:05:31 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 16:05:44 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM', repro=true crepro=false desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false from_dashboard=false 2025/09/23 16:05:44 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 11.17 minutes 2025/09/23 16:05:44 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758643544.crash.log 2025/09/23 16:05:44 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758643544.repro.log 2025/09/23 16:05:44 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 16:06:05 runner 8 connected 2025/09/23 16:06:20 runner 6 connected 2025/09/23 16:06:35 patched crashed: 
KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings [need repro = true] 2025/09/23 16:06:35 scheduled a reproduction of 'KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings' 2025/09/23 16:06:35 start reproducing 'KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings' 2025/09/23 16:07:24 runner 9 connected 2025/09/23 16:07:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 227, "corpus": 45632, "corpus [files]": 44425, "corpus [symbols]": 5878, "cover overflows": 51612, "coverage": 312885, "distributor delayed": 54250, "distributor undelayed": 54250, "distributor violated": 1137, "exec candidate": 80382, "exec collide": 2567, "exec fuzz": 4901, "exec gen": 262, "exec hints": 2964, "exec inject": 0, "exec minimize": 3490, "exec retries": 18, "exec seeds": 448, "exec smash": 3061, "exec total [base]": 216109, "exec total [new]": 349698, "exec triage": 142995, "executor restarts [base]": 425, "executor restarts [new]": 1396, "fault jobs": 0, "fuzzer jobs": 91, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 32, "max signal": 317231, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1982, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46532, "no exec duration": 200984000000, "no exec requests": 874, "pending": 183, "prog exec time": 935, "reproducing": 3, "rpc recv": 19592975180, "rpc sent": 2968755296, "signal": 307989, "smash jobs": 44, "triage jobs": 15, "vm output": 53484490, "vm restarts [base]": 47, "vm restarts [new]": 254 } 2025/09/23 16:07:38 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:07:52 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 16:08:15 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 16:08:42 
runner 7 connected 2025/09/23 16:09:04 runner 5 connected 2025/09/23 16:09:13 patched crashed: stack segment fault in pgtable_trans_huge_withdraw [need repro = true] 2025/09/23 16:09:13 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw' 2025/09/23 16:09:16 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 16:09:31 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:09:36 base crash "INFO: task hung in bch2_journal_reclaim_thread" is already known 2025/09/23 16:09:36 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/09/23 16:10:02 runner 9 connected 2025/09/23 16:10:04 runner 7 connected 2025/09/23 16:10:24 runner 4 connected 2025/09/23 16:11:11 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 16:11:24 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:11:24 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM 2025/09/23 16:11:24 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 16:11:24 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 16:12:14 runner 0 connected 2025/09/23 16:12:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 325, "corpus": 45676, "corpus [files]": 44456, "corpus [symbols]": 5885, "cover overflows": 53754, "coverage": 313042, "distributor delayed": 54379, "distributor undelayed": 54375, "distributor violated": 1137, "exec candidate": 80382, "exec collide": 3080, "exec fuzz": 5963, "exec gen": 313, "exec hints": 3555, "exec inject": 0, "exec minimize": 4455, "exec retries": 18, "exec seeds": 563, "exec smash": 3979, "exec total [base]": 220701, "exec total [new]": 354115, "exec triage": 143195, "executor restarts [base]": 443, "executor 
restarts [new]": 1438, "fault jobs": 0, "fuzzer jobs": 97, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 34, "max signal": 317721, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2613, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46606, "no exec duration": 203984000000, "no exec requests": 877, "pending": 184, "prog exec time": 678, "reproducing": 4, "rpc recv": 20120876584, "rpc sent": 3205434720, "signal": 308134, "smash jobs": 49, "triage jobs": 14, "vm output": 57692378, "vm restarts [base]": 48, "vm restarts [new]": 259 } 2025/09/23 16:13:00 base crash "INFO: task hung in bch2_journal_reclaim_thread" is already known 2025/09/23 16:13:00 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/09/23 16:13:57 runner 7 connected 2025/09/23 16:15:50 base crash "WARNING in dbAdjTree" is already known 2025/09/23 16:15:50 patched crashed: WARNING in dbAdjTree [need repro = false] 2025/09/23 16:16:39 runner 6 connected 2025/09/23 16:17:24 patched crashed: INFO: task hung in ima_file_free [need repro = true] 2025/09/23 16:17:24 scheduled a reproduction of 'INFO: task hung in ima_file_free' 2025/09/23 16:17:24 start reproducing 'INFO: task hung in ima_file_free' 2025/09/23 16:17:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 381, "corpus": 45708, "corpus [files]": 44481, "corpus [symbols]": 5892, "cover overflows": 55462, "coverage": 313155, "distributor delayed": 54507, "distributor undelayed": 54504, "distributor violated": 1141, "exec candidate": 80382, "exec collide": 3459, "exec fuzz": 6756, "exec gen": 346, "exec hints": 3981, "exec inject": 0, "exec minimize": 5449, "exec retries": 18, "exec seeds": 655, "exec smash": 4668, "exec total [base]": 225999, "exec total [new]": 357711, "exec triage": 143384, "executor restarts 
[base]": 465, "executor restarts [new]": 1467, "fault jobs": 0, "fuzzer jobs": 99, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 2, "hints jobs": 42, "max signal": 318086, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3257, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46675, "no exec duration": 203984000000, "no exec requests": 877, "pending": 184, "prog exec time": 768, "reproducing": 5, "rpc recv": 20569012060, "rpc sent": 3428041688, "signal": 308231, "smash jobs": 45, "triage jobs": 12, "vm output": 61582105, "vm restarts [base]": 48, "vm restarts [new]": 261 } 2025/09/23 16:18:14 runner 9 connected 2025/09/23 16:18:23 base crash: lost connection to test machine 2025/09/23 16:19:08 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM', repro=true crepro=false desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false from_dashboard=false 2025/09/23 16:19:08 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 13.39 minutes 2025/09/23 16:19:08 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758644348.crash.log 2025/09/23 16:19:08 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758644348.repro.log 2025/09/23 16:19:08 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 16:20:21 base crash: lost connection to test machine 2025/09/23 16:20:32 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 16:20:32 failed repro for "stack segment fault in pgtable_trans_huge_withdraw", err=%!s() 2025/09/23 16:20:32 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758644432.crash.log 2025/09/23 16:20:32 "stack segment fault in pgtable_trans_huge_withdraw": saved repro 
log into 1758644432.repro.log 2025/09/23 16:20:32 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw' 2025/09/23 16:20:58 base crash "INFO: task hung in bch2_journal_reclaim_thread" is already known 2025/09/23 16:20:58 patched crashed: INFO: task hung in bch2_journal_reclaim_thread [need repro = false] 2025/09/23 16:21:04 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:21:09 runner 1 connected 2025/09/23 16:21:10 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/23 16:21:46 runner 8 connected 2025/09/23 16:22:00 runner 9 connected 2025/09/23 16:22:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 415, "corpus": 45737, "corpus [files]": 44502, "corpus [symbols]": 5896, "cover overflows": 56555, "coverage": 313218, "distributor delayed": 54591, "distributor undelayed": 54591, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 3779, "exec fuzz": 7365, "exec gen": 370, "exec hints": 4314, "exec inject": 0, "exec minimize": 5962, "exec retries": 18, "exec seeds": 741, "exec smash": 5199, "exec total [base]": 229895, "exec total [new]": 360271, "exec triage": 143522, "executor restarts [base]": 484, "executor restarts [new]": 1509, "fault jobs": 0, "fuzzer jobs": 99, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 40, "max signal": 318362, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3544, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46725, "no exec duration": 204221000000, "no exec requests": 878, "pending": 182, "prog exec time": 325, "reproducing": 5, "rpc recv": 20968894712, "rpc sent": 3588348144, "signal": 308291, "smash jobs": 50, "triage jobs": 9, "vm output": 65314573, "vm restarts [base]": 49, "vm restarts [new]": 264 } 2025/09/23 
16:22:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 16:22:57 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:23:42 runner 8 connected 2025/09/23 16:23:57 base crash "WARNING in drv_unassign_vif_chanctx" is already known 2025/09/23 16:23:57 patched crashed: WARNING in drv_unassign_vif_chanctx [need repro = false] 2025/09/23 16:24:45 runner 7 connected 2025/09/23 16:24:49 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:24:49 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM 2025/09/23 16:24:49 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 16:24:50 base crash "possible deadlock in ocfs2_setattr" is already known 2025/09/23 16:24:50 patched crashed: possible deadlock in ocfs2_setattr [need repro = false] 2025/09/23 16:24:51 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/23 16:25:37 runner 0 connected 2025/09/23 16:25:39 runner 3 connected 2025/09/23 16:25:40 runner 8 connected 2025/09/23 16:26:26 base crash "kernel BUG in may_open" is already known 2025/09/23 16:26:26 patched crashed: kernel BUG in may_open [need repro = false] 2025/09/23 16:27:14 runner 8 connected 2025/09/23 16:27:21 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/09/23 16:27:29 base crash: lost connection to test machine 2025/09/23 16:27:29 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/09/23 16:27:30 base crash: lost connection to test machine 2025/09/23 16:27:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 455, "corpus": 45750, "corpus [files]": 44512, "corpus [symbols]": 5899, "cover overflows": 57776, "coverage": 313370, "distributor delayed": 54647, "distributor undelayed": 54644, "distributor violated": 1149, "exec candidate": 
80382, "exec collide": 4048, "exec fuzz": 7870, "exec gen": 394, "exec hints": 4612, "exec inject": 0, "exec minimize": 6363, "exec retries": 18, "exec seeds": 785, "exec smash": 5657, "exec total [base]": 232681, "exec total [new]": 362340, "exec triage": 143595, "executor restarts [base]": 504, "executor restarts [new]": 1533, "fault jobs": 0, "fuzzer jobs": 100, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 0, "hints jobs": 34, "max signal": 318449, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3739, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46759, "no exec duration": 430610000000, "no exec requests": 1478, "pending": 183, "prog exec time": 389, "reproducing": 5, "rpc recv": 21361768020, "rpc sent": 3713642080, "signal": 308424, "smash jobs": 48, "triage jobs": 18, "vm output": 68751441, "vm restarts [base]": 51, "vm restarts [new]": 268 } 2025/09/23 16:27:34 base crash: lost connection to test machine 2025/09/23 16:27:44 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 16:27:44 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 16:28:10 runner 2 connected 2025/09/23 16:28:18 runner 9 connected 2025/09/23 16:28:18 runner 0 connected 2025/09/23 16:28:19 runner 3 connected 2025/09/23 16:28:22 runner 1 connected 2025/09/23 16:28:33 runner 7 connected 2025/09/23 16:28:33 runner 8 connected 2025/09/23 16:28:53 base crash: lost connection to test machine 2025/09/23 16:28:53 base crash: lost connection to test machine 2025/09/23 16:29:44 runner 2 connected 2025/09/23 16:29:49 runner 3 connected 2025/09/23 16:29:49 base crash: WARNING in xfrm6_tunnel_net_exit 2025/09/23 16:30:39 runner 0 connected 2025/09/23 16:30:42 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)', repro=true crepro=true desc='BUG: non-zero pgtables_bytes on freeing mm: 
NUM' hub=false from_dashboard=false 2025/09/23 16:30:42 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 19.30 minutes 2025/09/23 16:30:42 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758645042.crash.log 2025/09/23 16:30:42 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 16:30:42 failed to recv *flatrpc.InfoRequestRawT: EOF 2025/09/23 16:30:42 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758645042.repro.log 2025/09/23 16:31:08 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 16:31:58 runner 9 connected 2025/09/23 16:32:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 485, "corpus": 45768, "corpus [files]": 44528, "corpus [symbols]": 5901, "cover overflows": 58856, "coverage": 313420, "distributor delayed": 54716, "distributor undelayed": 54716, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 4292, "exec fuzz": 8333, "exec gen": 419, "exec hints": 4909, "exec inject": 0, "exec minimize": 6819, "exec retries": 18, "exec seeds": 832, "exec smash": 6046, "exec total [base]": 234844, "exec total [new]": 364411, "exec triage": 143738, "executor restarts [base]": 541, "executor restarts [new]": 1570, "fault jobs": 0, "fuzzer jobs": 89, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 42, "max signal": 318620, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4106, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 2, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46804, "no exec duration": 513627000000, "no exec requests": 1697, "pending": 182, "prog exec time": 1006, "reproducing": 5, "rpc recv": 21873553936, "rpc sent": 3831614352, "signal": 308470, "smash jobs": 39, "triage jobs": 8, "vm output": 
72677638, "vm restarts [base]": 58, "vm restarts [new]": 272 } 2025/09/23 16:32:34 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:34:27 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:34:53 base crash "INFO: task hung in __iterate_supers" is already known 2025/09/23 16:34:53 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/09/23 16:34:54 base crash "INFO: task hung in __iterate_supers" is already known 2025/09/23 16:34:54 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/09/23 16:35:38 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 16:35:42 runner 7 connected 2025/09/23 16:35:42 runner 8 connected 2025/09/23 16:36:19 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 16:36:19 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM 2025/09/23 16:36:27 runner 9 connected 2025/09/23 16:36:58 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/23 16:37:07 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 16:37:08 runner 0 connected 2025/09/23 16:37:31 base crash: INFO: task hung in __iterate_supers 2025/09/23 16:37:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 540, "corpus": 45790, "corpus [files]": 44550, "corpus [symbols]": 5905, "cover overflows": 59852, "coverage": 313480, "distributor delayed": 54773, "distributor undelayed": 54769, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 4602, "exec fuzz": 8922, "exec gen": 447, "exec hints": 5296, "exec inject": 0, "exec minimize": 7228, "exec retries": 18, "exec seeds": 908, "exec smash": 6508, "exec total [base]": 237186, "exec total [new]": 366759, "exec triage": 143827, "executor restarts [base]": 554, 
"executor restarts [new]": 1593, "fault jobs": 0, "fuzzer jobs": 87, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 39, "max signal": 318710, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4330, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 4, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46838, "no exec duration": 820991000000, "no exec requests": 2444, "pending": 182, "prog exec time": 0, "reproducing": 5, "rpc recv": 22182845456, "rpc sent": 3951856688, "signal": 308524, "smash jobs": 43, "triage jobs": 5, "vm output": 76442449, "vm restarts [base]": 59, "vm restarts [new]": 275 } 2025/09/23 16:37:40 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/09/23 16:37:46 runner 9 connected 2025/09/23 16:37:55 runner 7 connected 2025/09/23 16:38:13 runner 3 connected 2025/09/23 16:38:29 runner 8 connected 2025/09/23 16:38:59 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/23 16:39:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 16:39:47 runner 9 connected 2025/09/23 16:40:10 runner 8 connected 2025/09/23 16:41:45 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/23 16:41:47 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 16:42:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 562, "corpus": 45819, "corpus [files]": 44576, "corpus [symbols]": 5912, "cover overflows": 60692, "coverage": 313632, "distributor delayed": 54848, "distributor undelayed": 54842, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 4837, "exec fuzz": 9419, "exec gen": 479, "exec hints": 5608, "exec inject": 0, "exec minimize": 7964, "exec retries": 18, "exec seeds": 959, "exec smash": 6911, "exec total [base]": 239578, 
"exec total [new]": 369148, "exec triage": 143943, "executor restarts [base]": 561, "executor restarts [new]": 1612, "fault jobs": 0, "fuzzer jobs": 119, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 49, "max signal": 318890, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4765, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 4, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46883, "no exec duration": 1480790000000, "no exec requests": 4127, "pending": 182, "prog exec time": 656, "reproducing": 5, "rpc recv": 22541919952, "rpc sent": 4071306032, "signal": 308642, "smash jobs": 62, "triage jobs": 8, "vm output": 79961019, "vm restarts [base]": 60, "vm restarts [new]": 280 } 2025/09/23 16:42:34 runner 2 connected 2025/09/23 16:42:35 runner 8 connected 2025/09/23 16:42:53 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 16:43:41 runner 7 connected 2025/09/23 16:44:44 base crash: WARNING in dbAdjTree 2025/09/23 16:44:45 patched crashed: WARNING in dbAdjTree [need repro = false] 2025/09/23 16:45:15 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/23 16:45:27 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 16:45:27 failed repro for "stack segment fault in pgtable_trans_huge_withdraw", err=%!s() 2025/09/23 16:45:27 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758645927.crash.log 2025/09/23 16:45:27 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758645927.repro.log 2025/09/23 16:45:27 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw' 2025/09/23 16:45:28 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)', repro=true crepro=true desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false 
from_dashboard=false
2025/09/23 16:45:28 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 14.78 minutes
2025/09/23 16:45:28 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758645928.crash.log
2025/09/23 16:45:28 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758645928.repro.log
2025/09/23 16:45:35 runner 8 connected
2025/09/23 16:45:57 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:46:03 runner 2 connected
2025/09/23 16:46:16 runner 0 connected
2025/09/23 16:46:33 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 16:46:35 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: aborting due to context cancelation
2025/09/23 16:46:50 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 16:47:21 runner 8 connected
2025/09/23 16:47:25 runner 0 connected
2025/09/23 16:47:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 600, "corpus": 45838, "corpus [files]": 44591, "corpus [symbols]": 5917, "cover overflows": 61752, "coverage": 313741, "distributor delayed": 54911, "distributor undelayed": 54906, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 5220, "exec fuzz": 10192, "exec gen": 525, "exec hints": 6095, "exec inject": 0, "exec minimize": 8373, "exec retries": 18, "exec seeds": 1026, "exec smash": 7553, "exec total [base]": 242461, "exec total [new]": 372048, "exec triage": 144032, "executor restarts [base]": 576, "executor restarts [new]": 1645, "fault jobs": 0, "fuzzer jobs": 99, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 3, "hints jobs": 41, "max signal": 319075, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4996, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 4, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46915, "no exec duration": 2151901000000, "no exec requests": 5936, "pending": 181, "prog exec time": 711, "reproducing": 4, "rpc recv": 22971300484, "rpc sent": 4214676208, "signal": 308696, "smash jobs": 47, "triage jobs": 11, "vm output": 83202203, "vm restarts [base]": 63, "vm restarts [new]": 285 }
2025/09/23 16:47:39 runner 9 connected
2025/09/23 16:47:49 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2025/09/23 16:48:39 runner 0 connected
2025/09/23 16:48:39 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:48:58 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 16:48:58 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 16:49:07 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:49:48 runner 9 connected
2025/09/23 16:49:57 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:50:25 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:50:25 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=true crepro=false desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false
2025/09/23 16:50:25 found repro for "stack segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 4.96 minutes
2025/09/23 16:50:25 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758646225.crash.log
2025/09/23 16:50:25 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758646225.repro.log
2025/09/23 16:50:25 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 16:50:46 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 16:50:46 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 16:50:52 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:51:19 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 16:51:19 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 16:51:34 runner 8 connected
2025/09/23 16:51:47 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 16:51:47 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 16:51:53 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/23 16:52:08 runner 0 connected
2025/09/23 16:52:18 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 16:52:29 runner 7 connected
2025/09/23 16:52:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 637, "corpus": 45862, "corpus [files]": 44613, "corpus [symbols]": 5924, "cover overflows": 63215, "coverage": 313869, "distributor delayed": 55003, "distributor undelayed": 54997, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 5722, "exec fuzz": 11173, "exec gen": 574, "exec hints": 6777, "exec inject": 0, "exec minimize": 8903, "exec retries": 20, "exec seeds": 1093, "exec smash": 8342, "exec total [base]": 246211, "exec total [new]": 375784, "exec triage": 144169, "executor restarts [base]": 589, "executor restarts [new]": 1677, "fault jobs": 0, "fuzzer jobs": 72, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 35, "max signal": 319246, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5335, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 4, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46963, "no exec duration": 2671855000000, "no exec requests": 7393, "pending": 184, "prog exec time": 479, "reproducing": 4, "rpc recv": 23370606960, "rpc sent": 4390574800, "signal": 308774, "smash jobs": 27, "triage jobs": 10, "vm output": 86850630, "vm restarts [base]": 64, "vm restarts [new]": 290 }
2025/09/23 16:52:42 runner 9 connected
2025/09/23 16:53:13 base crash: possible deadlock in ocfs2_init_acl
2025/09/23 16:53:13 base crash: WARNING in xfrm_state_fini
2025/09/23 16:53:16 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 16:54:02 runner 3 connected
2025/09/23 16:54:03 runner 1 connected
2025/09/23 16:54:05 runner 7 connected
2025/09/23 16:54:08 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:54:08 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 16:55:11 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:55:34 base crash: lost connection to test machine
2025/09/23 16:55:39 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:55:44 base crash "INFO: task hung in __closure_sync" is already known
2025/09/23 16:55:44 patched crashed: INFO: task hung in __closure_sync [need repro = false]
2025/09/23 16:55:59 attempt #2 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 16:55:59 patched-only: stack segment fault in pgtable_trans_huge_withdraw
2025/09/23 16:55:59 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 16:55:59 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 16:56:08 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 16:56:14 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:56:14 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=true crepro=false desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false
2025/09/23 16:56:14 found repro for "stack segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 5.83 minutes
2025/09/23 16:56:14 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758646574.crash.log
2025/09/23 16:56:14 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758646574.repro.log
2025/09/23 16:56:14 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 16:56:23 runner 2 connected
2025/09/23 16:56:31 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:56:59 runner 7 connected
2025/09/23 16:57:26 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 16:57:34 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 687, "corpus": 45889, "corpus [files]": 44634, "corpus [symbols]": 5931, "cover overflows": 64342, "coverage": 313927, "distributor delayed": 55079, "distributor undelayed": 55071, "distributor violated": 1149, "exec candidate": 80382, "exec collide": 6186, "exec fuzz": 12048, "exec gen": 620, "exec hints": 7456, "exec inject": 0, "exec minimize": 9546, "exec retries": 20, "exec seeds": 1165, "exec smash": 8977, "exec total [base]": 248729, "exec total [new]": 379311, "exec triage": 144277, "executor restarts [base]": 636, "executor restarts [new]": 1737, "fault jobs": 0, "fuzzer jobs": 69, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 32, "max signal": 319341, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5783, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 4, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47006, "no exec duration": 2677659000000, "no exec requests": 7407, "pending": 183, "prog exec time": 637, "reproducing": 5, "rpc recv": 23754988300, "rpc sent": 4530916272, "signal": 308832, "smash jobs": 28, "triage jobs": 9, "vm output": 90672722, "vm restarts [base]": 67, "vm restarts [new]": 293 }
2025/09/23 16:57:35 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 16:58:24 runner 7 connected
2025/09/23 16:58:29 base crash: lost connection to test machine
2025/09/23 16:58:39 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 16:59:19 runner 2 connected
2025/09/23 17:00:11 base crash "KASAN: out-of-bounds Read in ext4_xattr_set_entry" is already known
2025/09/23 17:00:11 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = false]
2025/09/23 17:00:23 patched crashed: lost connection to test machine [need repro = false]
2025/09/23 17:00:33 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:00:41 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:01:01 runner 7 connected
2025/09/23 17:01:12 runner 9 connected
2025/09/23 17:01:20 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:01:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:01:44 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:02:02 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/23 17:02:25 attempt #2 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:02:25 patched-only: stack segment fault in pgtable_trans_huge_withdraw
2025/09/23 17:02:25 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 17:02:33 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:02:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 705, "corpus": 45903, "corpus [files]": 44643, "corpus [symbols]": 5933, "cover overflows": 65361, "coverage": 313948, "distributor delayed": 55136, "distributor undelayed": 55127, "distributor violated": 1155, "exec candidate": 80382, "exec collide": 6623, "exec fuzz": 12818, "exec gen": 663, "exec hints": 8201, "exec inject": 0, "exec minimize": 9949, "exec retries": 20, "exec seeds": 1198, "exec smash": 9449, "exec total [base]": 252044, "exec total [new]": 382298, "exec triage": 144364, "executor restarts [base]": 674, "executor restarts [new]": 1766, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 26, "max signal": 319403, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6014, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 5, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47042, "no exec duration": 2677797000000, "no exec requests": 7409, "pending": 184, "prog exec time": 126, "reproducing": 5, "rpc recv": 24077254680, "rpc sent": 4661116520, "signal": 308852, "smash jobs": 13, "triage jobs": 11, "vm output": 93831565, "vm restarts [base]": 68, "vm restarts [new]": 296 }
2025/09/23 17:02:51 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:02:52 runner 9 connected
2025/09/23 17:02:58 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:03:04 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:03:13 runner 0 connected
2025/09/23 17:03:23 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/23 17:03:50 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:03:50 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=true crepro=false desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false
2025/09/23 17:03:50 found repro for "stack segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 7.59 minutes
2025/09/23 17:03:50 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758647030.crash.log
2025/09/23 17:03:50 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758647030.repro.log
2025/09/23 17:03:50 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 17:03:59 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:04:01 base crash: lost connection to test machine
2025/09/23 17:04:13 runner 1 connected
2025/09/23 17:04:14 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:04:21 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:04:51 runner 2 connected
2025/09/23 17:05:02 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:05:07 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false]
2025/09/23 17:05:43 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:05:47 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:05:56 runner 7 connected
2025/09/23 17:06:45 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:06:58 base crash: WARNING in xfrm_state_fini
2025/09/23 17:07:17 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 17:07:17 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 17:07:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 731, "corpus": 45927, "corpus [files]": 44663, "corpus [symbols]": 5941, "cover overflows": 67142, "coverage": 313998, "distributor delayed": 55213, "distributor undelayed": 55212, "distributor violated": 1155, "exec candidate": 80382, "exec collide": 7270, "exec fuzz": 14107, "exec gen": 727, "exec hints": 9504, "exec inject": 0, "exec minimize": 10635, "exec retries": 20, "exec seeds": 1263, "exec smash": 10079, "exec total [base]": 256703, "exec total [new]": 387142, "exec triage": 144524, "executor restarts [base]": 709, "executor restarts [new]": 1798, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 15, "max signal": 319558, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6433, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 5, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47091, "no exec duration": 2684291000000, "no exec requests": 7420, "pending": 184, "prog exec time": 557, "reproducing": 5, "rpc recv": 24468270548, "rpc sent": 4873477760, "signal": 308902, "smash jobs": 10, "triage jobs": 3, "vm output": 96631210, "vm restarts [base]": 71, "vm restarts [new]": 298 }
2025/09/23 17:07:36 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:07:41 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:07:41 repro finished 'stack segment fault in pgtable_trans_huge_withdraw (full)', repro=true crepro=true desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false
2025/09/23 17:07:41 found repro for "stack segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 11.70 minutes
2025/09/23 17:07:41 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 17:07:41 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758647261.crash.log
2025/09/23 17:07:41 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758647261.repro.log
2025/09/23 17:07:47 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:07:47 runner 2 connected
2025/09/23 17:08:06 runner 9 connected
2025/09/23 17:08:20 base crash: lost connection to test machine
2025/09/23 17:08:39 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:08:46 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:09:09 runner 3 connected
2025/09/23 17:09:21 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:09:21 base crash: lost connection to test machine
2025/09/23 17:09:29 attempt #2 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:09:29 patched-only: stack segment fault in pgtable_trans_huge_withdraw
2025/09/23 17:09:29 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 17:09:33 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:09:41 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 17:09:41 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 17:09:45 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = true]
2025/09/23 17:09:45 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 17:09:54 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM', repro=true crepro=false desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false from_dashboard=false
2025/09/23 17:09:54 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 50.76 minutes
2025/09/23 17:09:54 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758647394.crash.log
2025/09/23 17:09:54 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758647394.repro.log
2025/09/23 17:09:54 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 17:10:11 runner 2 connected
2025/09/23 17:10:13 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:10:13 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=true crepro=false desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false
2025/09/23 17:10:13 found repro for "stack segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 6.38 minutes
2025/09/23 17:10:13 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758647413.crash.log
2025/09/23 17:10:13 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758647413.repro.log
2025/09/23 17:10:13 failed to recv *flatrpc.InfoRequestRawT: EOF
2025/09/23 17:10:13 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw'
2025/09/23 17:10:31 runner 7 connected
2025/09/23 17:10:35 runner 8 connected
2025/09/23 17:10:40 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:11:01 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:11:24 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:11:35 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:11:36 runner 9 connected
2025/09/23 17:11:43 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:11:58 runner 7 connected
2025/09/23 17:12:07 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash
2025/09/23 17:12:09 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:12:10 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:12:29 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:12:32 runner 8 connected
2025/09/23 17:12:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 731, "corpus": 45935, "corpus [files]": 44666, "corpus [symbols]": 5941, "cover overflows": 67824, "coverage": 314019, "distributor delayed": 55243, "distributor undelayed": 55239, "distributor violated": 1155, "exec candidate": 80382, "exec collide": 7684, "exec fuzz": 14918, "exec gen": 783, "exec hints": 10331, "exec inject": 0, "exec minimize": 10799, "exec retries": 20, "exec seeds": 1276, "exec smash": 10292, "exec total [base]": 258231, "exec total [new]": 389688, "exec triage": 144574, "executor restarts [base]": 730, "executor restarts [new]": 1832, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 4, "max signal": 319598, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6545, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 6, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47110, "no exec duration": 2685274000000, "no exec requests": 7423, "pending": 184, "prog exec time": 442, "reproducing": 5, "rpc recv": 24756716428, "rpc sent": 4973651784, "signal": 308920, "smash jobs": 1, "triage jobs": 5, "vm output": 99997761, "vm restarts [base]": 74, "vm restarts [new]": 304 }
2025/09/23 17:12:42 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:12:55 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:13:15 attempt #2 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:13:15 patched-only: stack segment fault in pgtable_trans_huge_withdraw
2025/09/23 17:13:17 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:13:19 runner 9 connected
2025/09/23 17:13:38 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:13:59 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash
2025/09/23 17:14:02 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:14:03 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:14:07 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:14:18 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:14:26 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:14:52 runner 0 connected
2025/09/23 17:14:56 runner 9 connected
2025/09/23 17:14:56 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:15:15 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash
2025/09/23 17:15:15 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM
2025/09/23 17:15:15 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)'
2025/09/23 17:15:15 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)'
2025/09/23 17:15:23 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:15:35 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:15:53 attempt #2 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:15:53 patched-only: stack segment fault in pgtable_trans_huge_withdraw
2025/09/23 17:15:53 scheduled a reproduction of 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 17:16:00 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:16:04 runner 1 connected
2025/09/23 17:16:24 runner 8 connected
2025/09/23 17:16:43 runner 2 connected
2025/09/23 17:16:43 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/23 17:16:50 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:17:00 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:17:04 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:17:20 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false]
2025/09/23 17:17:32 runner 3 connected
2025/09/23 17:17:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 747, "corpus": 45944, "corpus [files]": 44673, "corpus [symbols]": 5943, "cover overflows": 68574, "coverage": 314053, "distributor delayed": 55303, "distributor undelayed": 55290, "distributor violated": 1155, "exec candidate": 80382, "exec collide": 8165, "exec fuzz": 15865, "exec gen": 838, "exec hints": 10818, "exec inject": 0, "exec minimize": 11097, "exec retries": 20, "exec seeds": 1298, "exec smash": 10536, "exec total [base]": 261261, "exec total [new]": 392308, "exec triage": 144656, "executor restarts [base]": 754, "executor restarts [new]": 1865, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 2, "max signal": 319724, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6715, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 6, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47145, "no exec duration": 2686253000000, "no exec requests": 7425, "pending": 185, "prog exec time": 370, "reproducing": 6, "rpc recv": 25136022152, "rpc sent": 5115282144, "signal": 308942, "smash jobs": 0, "triage jobs": 14, "vm output": 104735560, "vm restarts [base]": 78, "vm restarts [new]": 307 }
2025/09/23 17:18:12 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:18:15 base crash "WARNING in udf_truncate_extents" is already known
2025/09/23 17:18:15 patched crashed: WARNING in udf_truncate_extents [need repro = false]
2025/09/23 17:18:16 runner 8 connected
2025/09/23 17:18:16 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:18:40 base crash: lost connection to test machine
2025/09/23 17:18:42 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:18:52 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/23 17:18:59 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:19:05 runner 9 connected
2025/09/23 17:19:29 runner 0 connected
2025/09/23 17:19:34 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:19:34 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM', repro=true crepro=false desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false from_dashboard=false
2025/09/23 17:19:34 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 9.66 minutes
2025/09/23 17:19:34 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758647974.crash.log
2025/09/23 17:19:34 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758647974.repro.log
2025/09/23 17:19:34 failed to recv *flatrpc.InfoRequestRawT: unexpected EOF
2025/09/23 17:19:34 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM'
2025/09/23 17:19:39 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:19:39 repro finished 'stack segment fault in pgtable_trans_huge_withdraw (full)', repro=true crepro=true desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false
2025/09/23 17:19:39 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw (full)'
2025/09/23 17:19:39 found repro for "stack segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 11.95 minutes
2025/09/23 17:19:39 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758647979.crash.log
2025/09/23 17:19:39 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758647979.repro.log
2025/09/23 17:20:17 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:20:50 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:20:57 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/23 17:21:27 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash
2025/09/23 17:21:32 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash
2025/09/23 17:22:10 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]:
fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:22:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 770, "corpus": 45963, "corpus [files]": 44688, "corpus [symbols]": 5944, "cover overflows": 69076, "coverage": 314165, "distributor delayed": 55323, "distributor undelayed": 55323, "distributor violated": 1168, "exec candidate": 80382, "exec collide": 8327, "exec fuzz": 16182, "exec gen": 864, "exec hints": 11009, "exec inject": 0, "exec minimize": 11584, "exec retries": 20, "exec seeds": 1346, "exec smash": 10687, "exec total [base]": 264218, "exec total [new]": 393803, "exec triage": 144765, "executor restarts [base]": 771, "executor restarts [new]": 1877, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 14, "max signal": 319815, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7012, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 6, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47176, "no exec duration": 2778684000000, "no exec requests": 7667, "pending": 183, "prog exec time": 998, "reproducing": 6, "rpc recv": 25419917344, "rpc sent": 5273411888, "signal": 309015, "smash jobs": 17, "triage jobs": 9, "vm output": 108560339, "vm restarts [base]": 79, "vm restarts [new]": 309 } 2025/09/23 17:23:19 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:23:25 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash 2025/09/23 17:23:25 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:24:17 reproducing crash 'stack segment fault 
in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:24:52 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:25:10 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:25:10 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM 2025/09/23 17:25:10 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 17:25:16 attempt #2 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash 2025/09/23 17:25:16 patched-only: stack segment fault in pgtable_trans_huge_withdraw 2025/09/23 17:25:24 patched crashed: lost connection to test machine [need repro = false] 2025/09/23 17:25:32 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:25:59 runner 0 connected 2025/09/23 17:26:07 runner 1 connected 2025/09/23 17:26:09 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:26:12 runner 9 connected 2025/09/23 17:26:48 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:26:56 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)', repro=true crepro=true desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false from_dashboard=false 2025/09/23 17:26:56 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 11.69 minutes 2025/09/23 17:26:56 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758648416.crash.log 2025/09/23 17:26:56 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 17:26:56 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758648416.repro.log 2025/09/23 17:27:22 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 17:27:25 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:27:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 793, "corpus": 45988, "corpus [files]": 44702, "corpus [symbols]": 5946, "cover overflows": 69606, "coverage": 314261, "distributor delayed": 55354, "distributor undelayed": 55354, "distributor violated": 1168, "exec candidate": 80382, "exec collide": 8448, "exec fuzz": 16416, "exec gen": 871, "exec hints": 11088, "exec inject": 0, "exec minimize": 11923, "exec retries": 20, "exec seeds": 1416, "exec smash": 10902, "exec total [base]": 265384, "exec total [new]": 394967, "exec triage": 144866, "executor restarts [base]": 780, "executor restarts [new]": 1885, "fault jobs": 0, "fuzzer jobs": 66, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 22, 
"max signal": 319951, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7205, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 7, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47212, "no exec duration": 2967862000000, "no exec requests": 8088, "pending": 183, "prog exec time": 2032, "reproducing": 6, "rpc recv": 25649282576, "rpc sent": 5422574600, "signal": 309105, "smash jobs": 39, "triage jobs": 5, "vm output": 111683460, "vm restarts [base]": 81, "vm restarts [new]": 310 } 2025/09/23 17:27:38 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:28:04 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:28:18 runner 9 connected 2025/09/23 17:28:45 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:28:48 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:28:48 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:29:07 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:29:48 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:30:10 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:30:22 patched crashed: BUG: non-zero pgtables_bytes on freeing mm: NUM [need repro = false] 2025/09/23 17:30:27 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM', repro=true crepro=false desc='BUG: non-zero pgtables_bytes on freeing mm: NUM' hub=false from_dashboard=false 2025/09/23 17:30:27 found repro for "BUG: non-zero pgtables_bytes on freeing mm: NUM" (orig title: "-SAME-", reliability: 1), took 10.88 minutes 2025/09/23 17:30:27 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved crash log into 1758648627.crash.log 2025/09/23 17:30:27 "BUG: non-zero pgtables_bytes on freeing mm: NUM": saved repro log into 1758648627.repro.log 2025/09/23 17:30:27 start reproducing 'BUG: non-zero pgtables_bytes on freeing mm: NUM' 2025/09/23 17:30:42 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:30:48 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:31:12 runner 8 connected 2025/09/23 17:31:29 reproducing crash 'no output/lost connection': failed to symbolize report: failed 
to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:31:42 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:32:08 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:32:19 attempt #0 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:32:33 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:32:33 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM 2025/09/23 17:32:34 STAT { "buffer too small": 2, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 824, "corpus": 46005, "corpus [files]": 44714, "corpus [symbols]": 5949, "cover overflows": 69941, "coverage": 314331, "distributor delayed": 55379, "distributor undelayed": 55379, "distributor violated": 1168, "exec candidate": 80382, "exec collide": 8556, "exec fuzz": 16607, "exec gen": 888, "exec hints": 11153, "exec inject": 0, "exec minimize": 12218, "exec retries": 20, "exec seeds": 1461, "exec smash": 11105, "exec total [base]": 266405, "exec total [new]": 395982, "exec triage": 144954, "executor restarts [base]": 786, "executor restarts [new]": 1900, "fault jobs": 0, "fuzzer jobs": 85, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 1, "hints jobs": 26, "max signal": 320036, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7408, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 7, 
"minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47244, "no exec duration": 3240960000000, "no exec requests": 8652, "pending": 182, "prog exec time": 1085, "reproducing": 6, "rpc recv": 25816778580, "rpc sent": 5551393960, "signal": 309149, "smash jobs": 54, "triage jobs": 5, "vm output": 115327661, "vm restarts [base]": 81, "vm restarts [new]": 312 } 2025/09/23 17:32:40 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/23 17:32:48 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:32:59 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:33:21 runner 0 connected 2025/09/23 17:33:28 runner 8 connected 2025/09/23 17:33:57 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:34:06 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:34:06 repro finished 'stack segment fault in pgtable_trans_huge_withdraw (full)', repro=true crepro=true desc='stack segment fault in pgtable_trans_huge_withdraw' hub=false from_dashboard=false 2025/09/23 17:34:06 found repro for "stack 
segment fault in pgtable_trans_huge_withdraw" (orig title: "-SAME-", reliability: 1), took 14.45 minutes 2025/09/23 17:34:06 "stack segment fault in pgtable_trans_huge_withdraw": saved crash log into 1758648846.crash.log 2025/09/23 17:34:06 start reproducing 'stack segment fault in pgtable_trans_huge_withdraw (full)' 2025/09/23 17:34:06 "stack segment fault in pgtable_trans_huge_withdraw": saved repro log into 1758648846.repro.log 2025/09/23 17:34:12 attempt #1 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:34:37 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:35:12 reproducing crash 'no output/lost connection': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:35:31 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:35:59 attempt #0 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: did not crash 2025/09/23 17:36:05 attempt #2 to run "BUG: non-zero pgtables_bytes on freeing mm: NUM" on base: did not crash 2025/09/23 17:36:05 patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM 2025/09/23 17:36:05 scheduled a reproduction of 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)' 2025/09/23 17:36:29 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:36:48 runner 0 connected 2025/09/23 17:37:19 reproducing crash 'stack segment fault in pgtable_trans_huge_withdraw': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f mm/pgtable-generic.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/23 17:37:30 bug reporting terminated 2025/09/23 17:37:30 status reporting terminated 2025/09/23 17:37:30 reproducing crash 'INFO: task hung in ima_file_free': concatenation step failed with context deadline exceeded 2025/09/23 17:37:30 repro finished 'INFO: task hung in ima_file_free', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 17:37:30 repro finished 'stack segment fault in pgtable_trans_huge_withdraw', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 17:37:30 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM (full)', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 17:37:30 repro finished 'stack segment fault in pgtable_trans_huge_withdraw (full)', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 17:37:44 repro finished 'BUG: non-zero pgtables_bytes on freeing mm: NUM', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 17:37:57 attempt #1 to run "stack segment fault in pgtable_trans_huge_withdraw" on base: aborting due to context cancelation 2025/09/23 17:37:58 syz-diff (base): kernel context loop terminated 2025/09/23 17:38:08 repro finished 'KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/23 17:38:09 syz-diff (new): kernel context loop terminated 2025/09/23 17:38:09 diff fuzzing terminated 2025/09/23 17:38:09 fuzzing is finished 2025/09/23 17:38:09 
status at the end: Title On-Base On-Patched BUG: non-zero pgtables_bytes on freeing mm: NUM 201 crashes[reproduced] stack segment fault in pgtable_trans_huge_withdraw 7 crashes[reproduced] INFO: task hung in __closure_sync 1 crashes INFO: task hung in __iterate_supers 1 crashes 2 crashes INFO: task hung in bch2_journal_reclaim_thread 3 crashes INFO: task hung in ima_file_free 1 crashes INFO: task hung in ip_tunnel_init_net 1 crashes KASAN: out-of-bounds Read in ext4_xattr_set_entry 2 crashes KASAN: slab-use-after-free Read in __ethtool_get_link_ksettings 1 crashes KASAN: slab-use-after-free Read in __xfrm_state_lookup 3 crashes 2 crashes KASAN: slab-use-after-free Read in xfrm_alloc_spi 9 crashes 3 crashes KASAN: slab-use-after-free Write in __xfrm_state_delete 1 crashes WARNING in dbAdjTree 1 crashes 3 crashes WARNING in driver_unregister 1 crashes 1 crashes WARNING in drv_unassign_vif_chanctx 1 crashes WARNING in io_ring_exit_work 1 crashes WARNING in udf_truncate_extents 1 crashes WARNING in xfrm6_tunnel_net_exit 3 crashes 3 crashes WARNING in xfrm_state_fini 7 crashes 9 crashes general protection fault in lmLogSync 1 crashes general protection fault in pcl818_ai_cancel 3 crashes 4 crashes kernel BUG in jfs_evict_inode 2 crashes kernel BUG in may_open 1 crashes kernel BUG in txUnlock 2 crashes 9 crashes lost connection to test machine 21 crashes 26 crashes possible deadlock in ocfs2_init_acl 2 crashes 4 crashes possible deadlock in ocfs2_reserve_suballoc_bits 2 crashes 3 crashes possible deadlock in ocfs2_setattr 1 crashes possible deadlock in ocfs2_try_remove_refcount_tree 9 crashes 11 crashes possible deadlock in ocfs2_xattr_set 1 crashes unregister_netdevice: waiting for DEV to become free 2 crashes 2025/09/23 17:38:09 possibly patched-only: BUG: non-zero pgtables_bytes on freeing mm: NUM