2025/11/09 12:09:40 extracted 322917 text symbol hashes for base and 322919 for patched
2025/11/09 12:09:40 symbol "guard_install_walk_ops" has different values in base vs patch
2025/11/09 12:09:40 binaries are different, continuing fuzzing
2025/11/09 12:09:40 adding modified_functions to focus areas: ["__pfx_walk_page_range_mm_unsafe" "__pfx_walk_page_range_vma_unsafe" "__se_sys_process_madvise" "do_madvise" "madvise_lock" "madvise_vma_behavior" "madvise_walk_vmas" "set_anon_vma_name" "walk_page_mapping" "walk_page_range_mm_unsafe" "walk_page_range_vma" "walk_page_range_vma_unsafe" "walk_page_vma" "walk_pgd_range"]
2025/11/09 12:09:40 adding directly modified files to focus areas: ["mm/internal.h" "mm/madvise.c" "mm/pagewalk.c"]
2025/11/09 12:09:40 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2025/11/09 12:10:38 runner 2 connected
2025/11/09 12:10:39 runner 0 connected
2025/11/09 12:10:39 runner 4 connected
2025/11/09 12:10:39 runner 1 connected
2025/11/09 12:10:39 runner 8 connected
2025/11/09 12:10:39 runner 3 connected
2025/11/09 12:10:40 runner 6 connected
2025/11/09 12:10:40 runner 5 connected
2025/11/09 12:10:40 runner 2 connected
2025/11/09 12:10:40 runner 0 connected
2025/11/09 12:10:40 runner 7 connected
2025/11/09 12:10:40 runner 1 connected
2025/11/09 12:10:45 executor cover filter: 0 PCs
2025/11/09 12:10:45 initializing coverage information...
2025/11/09 12:10:49 discovered 7611 source files, 333871 symbols
2025/11/09 12:10:50 coverage filter: __pfx_walk_page_range_mm_unsafe: []
2025/11/09 12:10:50 coverage filter: __pfx_walk_page_range_vma_unsafe: []
2025/11/09 12:10:50 coverage filter: __se_sys_process_madvise: [__se_sys_process_madvise]
2025/11/09 12:10:50 coverage filter: do_madvise: [do_madvise]
2025/11/09 12:10:50 coverage filter: madvise_lock: [drm_gem_shmem_madvise_locked madvise_lock]
2025/11/09 12:10:50 coverage filter: madvise_vma_behavior: [madvise_vma_behavior]
2025/11/09 12:10:50 coverage filter: madvise_walk_vmas: [madvise_walk_vmas]
2025/11/09 12:10:50 coverage filter: set_anon_vma_name: [set_anon_vma_name]
2025/11/09 12:10:50 coverage filter: walk_page_mapping: [walk_page_mapping]
2025/11/09 12:10:50 coverage filter: walk_page_range_mm_unsafe: [walk_page_range_mm_unsafe]
2025/11/09 12:10:50 coverage filter: walk_page_range_vma: [walk_page_range_vma walk_page_range_vma_unsafe]
2025/11/09 12:10:50 coverage filter: walk_page_range_vma_unsafe: []
2025/11/09 12:10:50 coverage filter: walk_page_vma: [walk_page_vma]
2025/11/09 12:10:50 coverage filter: walk_pgd_range: [walk_pgd_range]
2025/11/09 12:10:50 coverage filter: mm/internal.h: []
2025/11/09 12:10:50 coverage filter: mm/madvise.c: [mm/madvise.c]
2025/11/09 12:10:50 coverage filter: mm/pagewalk.c: [mm/pagewalk.c]
2025/11/09 12:10:50 area "symbols": 838 PCs in the cover filter
2025/11/09 12:10:50 area "files": 1692 PCs in the cover filter
2025/11/09 12:10:50 area "": 0 PCs in the cover filter
2025/11/09 12:10:50 executor cover filter: 0 PCs
2025/11/09 12:10:50 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs :
/proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no 
such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : 
failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] 
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] 
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy 
[syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : 
fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE 
: fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma 
[openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056
2025/11/09 12:10:50 base: machine check complete
2025/11/09 12:10:53 machine check: disabled the following syscalls: fsetxattr$security_selinux : selinux is not enabled fsetxattr$security_smack_transmute : smack is not enabled fsetxattr$smack_xattr_label : smack is not enabled get_thread_area : syscall get_thread_area is not present lookup_dcookie : syscall lookup_dcookie is not present lsetxattr$security_selinux : selinux is not enabled lsetxattr$security_smack_transmute : smack is not enabled lsetxattr$smack_xattr_label : smack is not enabled mount$esdfs : /proc/filesystems does not contain esdfs mount$incfs : /proc/filesystems does not contain incremental-fs openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory openat$ashmem : failed to open /dev/ashmem: no such file or directory openat$bifrost : failed to open /dev/bifrost: no such file or directory openat$binder : failed to open /dev/binder: no such file or directory openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory openat$capi20 : failed to open /dev/capi20: no such file or directory openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory openat$hwbinder : failed to open /dev/hwbinder: no such file or directory openat$i915 : failed to open /dev/i915: no such file or directory openat$img_rogue : failed to open /dev/img-rogue: no such file or directory openat$irnet : failed to open /dev/irnet: no such file or directory openat$keychord : failed to open /dev/keychord: no such file or directory openat$kvm : failed to open /dev/kvm: no such file or directory openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory openat$mali : failed to open /dev/mali0: no such file or directory openat$md : failed to open /dev/md0: no such file or directory openat$msm : failed to open /dev/msm: no such file or directory openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory openat$nmem0 : failed to open /dev/nmem0: no such file or directory openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory openat$pmem0 : failed to open /dev/pmem0: no such file or directory openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory openat$ptp1 : failed to open /dev/ptp1: no such file or directory openat$rnullb : failed
to open /dev/rnullb0: no such file or directory openat$selinux_access : failed to open /selinux/access: no such file or directory openat$selinux_attr : selinux is not enabled openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory openat$selinux_context : failed to open /selinux/context: no such file or directory openat$selinux_create : failed to open /selinux/create: no such file or directory openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory openat$selinux_load : failed to open /selinux/load: no such file or directory openat$selinux_member : failed to open /selinux/member: no such file or directory openat$selinux_mls : failed to open /selinux/mls: no such file or directory openat$selinux_policy : failed to open /selinux/policy: no such file or directory openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory openat$selinux_status : failed to open /selinux/status: no such file or directory openat$selinux_user : failed to open /selinux/user: no such file or directory openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory openat$sev : failed to open /dev/sev: no such file or directory openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory openat$smack_task_current : smack is not enabled openat$smack_thread_current : smack is not enabled openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory openat$tlk_device : failed to open /dev/tlk_device: no such file or directory openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or 
directory openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory openat$tty : failed to open /dev/tty: no such device or address openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory openat$vfio : failed to open /dev/vfio/vfio: no such file or directory openat$vndbinder : failed to open /dev/vndbinder: no such file or directory openat$vtpm : failed to open /dev/vtpmx: no such file or directory openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present write$selinux_access : selinux is not enabled write$selinux_attr : selinux is not enabled write$selinux_context : selinux is not enabled write$selinux_create : selinux is not enabled write$selinux_load : selinux is not enabled write$selinux_user : selinux is not enabled write$selinux_validatetrans : selinux is not enabled write$smack_current : smack is not enabled write$smackfs_access : smack is not enabled write$smackfs_change_rule : smack is not enabled write$smackfs_cipso : smack is not enabled write$smackfs_cipsonum : smack is not enabled write$smackfs_ipv6host : smack is not enabled write$smackfs_label : smack is not enabled write$smackfs_labels_list : smack is not enabled write$smackfs_load : smack is not enabled 
write$smackfs_logging : smack is not enabled write$smackfs_netlabel : smack is not enabled write$smackfs_ptrace : smack is not enabled transitively disabled the following syscalls (missing resource [creating syscalls]): bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] close$ibv_device : fd_rdma [openat$uverbs0] connect$hf : sock_hf [socket$hf] connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram] getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel] ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem] ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem] ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem] ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20] ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20] ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20] ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20] ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20] ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20] ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20] ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20] ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20] ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0] ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 
[openat$i915] ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915] ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915] ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm] ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: 
fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue] ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy] 
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy] ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy] ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] 
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb] ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure] ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km] ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage] ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio] ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio] ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio] ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm] ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp] mmap$DRM_I915 : fd_i915 [openat$i915] mmap$DRM_MSM : fd_msm [openat$msm] mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE] mmap$bifrost : fd_bifrost [openat$bifrost openat$mali] mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup] pkey_free : pkey [pkey_alloc] pkey_mprotect : pkey [pkey_alloc] read$sndhw : fd_snd_hw [syz_open_dev$sndhw] read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] recvmsg$hf : sock_hf [socket$hf] sendmsg$hf : sock_hf [socket$hf] setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp] setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp] setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp] setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp] syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86] syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU] syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM] syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU] syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU] write$ALLOC_MW : fd_rdma [openat$uverbs0] write$ALLOC_PD : fd_rdma [openat$uverbs0] write$ATTACH_MCAST : fd_rdma [openat$uverbs0] write$CLOSE_XRCD : fd_rdma [openat$uverbs0] write$CREATE_AH : fd_rdma [openat$uverbs0] write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0] write$CREATE_CQ : fd_rdma [openat$uverbs0] write$CREATE_CQ_EX : fd_rdma [openat$uverbs0] write$CREATE_FLOW : fd_rdma [openat$uverbs0] write$CREATE_QP : fd_rdma [openat$uverbs0] write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$CREATE_SRQ : fd_rdma [openat$uverbs0] write$CREATE_WQ : fd_rdma [openat$uverbs0] write$DEALLOC_MW : fd_rdma [openat$uverbs0] write$DEALLOC_PD : fd_rdma [openat$uverbs0] write$DEREG_MR : fd_rdma [openat$uverbs0] write$DESTROY_AH : fd_rdma [openat$uverbs0] write$DESTROY_CQ : fd_rdma [openat$uverbs0] write$DESTROY_FLOW : fd_rdma [openat$uverbs0] write$DESTROY_QP : fd_rdma [openat$uverbs0] write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0] write$DESTROY_SRQ : fd_rdma [openat$uverbs0] write$DESTROY_WQ : fd_rdma [openat$uverbs0] write$DETACH_MCAST : fd_rdma [openat$uverbs0] write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0] write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_DV_QP : fd_rdma 
[openat$uverbs0] write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0] write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0] write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0] write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0] write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0] write$MODIFY_QP : fd_rdma [openat$uverbs0] write$MODIFY_SRQ : fd_rdma [openat$uverbs0] write$OPEN_XRCD : fd_rdma [openat$uverbs0] write$POLL_CQ : fd_rdma [openat$uverbs0] write$POST_RECV : fd_rdma [openat$uverbs0] write$POST_SEND : fd_rdma [openat$uverbs0] write$POST_SRQ_RECV : fd_rdma [openat$uverbs0] write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0] write$QUERY_PORT : fd_rdma [openat$uverbs0] write$QUERY_QP : fd_rdma [openat$uverbs0] write$QUERY_SRQ : fd_rdma [openat$uverbs0] write$REG_MR : fd_rdma [openat$uverbs0] write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0] write$REREG_MR : fd_rdma [openat$uverbs0] write$RESIZE_CQ : fd_rdma [openat$uverbs0] write$capi20 : fd_capi20 [openat$capi20] write$capi20_data : fd_capi20 [openat$capi20] write$damon_attrs : fd_damon_attrs [openat$damon_attrs] write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts] write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions] write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on] write$damon_schemes : fd_damon_schemes [openat$damon_schemes] write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids] write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim] write$sndhw : fd_snd_hw [syz_open_dev$sndhw] write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw] write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] write$trusty_avb : fd_trusty_avb [openat$trusty_avb] write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper] write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey] write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng] write$trusty_km : fd_trusty_km [openat$trusty_km] write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure] write$trusty_storage : fd_trusty_storage [openat$trusty_storage] BinFmtMisc : enabled Comparisons : enabled Coverage : enabled DelayKcovMmap : enabled DevlinkPCI : PCI device 0000:00:10.0 is not available ExtraCoverage : enabled Fault : enabled KCSAN : write(/sys/kernel/debug/kcsan, on) failed KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE) LRWPANEmulation : enabled Leak : failed to write(kmemleak, "scan=off") NetDevices : enabled NetInjection : enabled NicVF : PCI device 0000:00:11.0 is not available SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67. 
SandboxNamespace : enabled SandboxNone : enabled SandboxSetuid : enabled Swap : enabled USBEmulation : enabled VhciInjection : enabled WifiEmulation : enabled syscalls : 3838/8056 2025/11/09 12:10:53 new: machine check complete 2025/11/09 12:10:54 new: adding 81787 seeds 2025/11/09 12:13:29 patched crashed: KASAN: slab-use-after-free Read in jfs_lazycommit [need repro = true] 2025/11/09 12:13:29 scheduled a reproduction of 'KASAN: slab-use-after-free Read in jfs_lazycommit' 2025/11/09 12:13:48 patched crashed: possible deadlock in dqget [need repro = true] 2025/11/09 12:13:48 scheduled a reproduction of 'possible deadlock in dqget' 2025/11/09 12:14:18 runner 7 connected 2025/11/09 12:14:38 runner 4 connected 2025/11/09 12:14:42 STAT { "buffer too small": 0, "candidate triage jobs": 49, "candidates": 77066, "comps overflows": 0, "corpus": 4641, "corpus [files]": 241, "corpus [symbols]": 100, "cover overflows": 2859, "coverage": 164395, "distributor delayed": 4438, "distributor undelayed": 4438, "distributor violated": 0, "exec candidate": 4721, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 6658, "exec total [new]": 20515, "exec triage": 14619, "executor restarts [base]": 54, "executor restarts [new]": 118, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 166391, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 4721, "no exec duration": 42202000000, "no exec requests": 342, "pending": 2, "prog exec time": 237, "reproducing": 0, "rpc recv": 1118441940, "rpc sent": 108952264, "signal": 161737, "smash jobs": 0, "triage jobs": 0, "vm output": 3170759, "vm restarts [base]": 3, "vm restarts [new]": 11 } 2025/11/09 12:15:07 base crash: possible deadlock in ocfs2_acquire_dquot 2025/11/09 12:15:13 patched crashed: WARNING in folio_memcg [need repro = true] 2025/11/09 12:15:13 scheduled a reproduction of 'WARNING in folio_memcg' 2025/11/09 12:15:23 patched crashed: WARNING in folio_memcg [need repro = true] 2025/11/09 12:15:23 scheduled a reproduction of 'WARNING in folio_memcg' 2025/11/09 12:15:56 runner 1 connected 2025/11/09 12:15:57 base crash: INFO: task hung in rfkill_global_led_trigger_worker 2025/11/09 12:15:58 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 12:16:02 runner 7 connected 2025/11/09 12:16:12 runner 4 connected 2025/11/09 12:16:46 runner 1 connected 2025/11/09 12:16:47 runner 0 connected 2025/11/09 12:18:47 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/11/09 12:18:47 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/11/09 12:18:51 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/11/09 12:18:51 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/11/09 12:19:19 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = true] 2025/11/09 12:19:19 scheduled a reproduction of 'KASAN: out-of-bounds Read in ext4_xattr_set_entry' 2025/11/09 12:19:36 runner 8 connected 2025/11/09 12:19:40 runner 4 connected 2025/11/09 12:19:42 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 70829, "comps overflows": 0, "corpus": 10823, "corpus [files]": 
467, "corpus [symbols]": 215, "cover overflows": 7297, "coverage": 209035, "distributor delayed": 11156, "distributor undelayed": 11155, "distributor violated": 4, "exec candidate": 10958, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 13704, "exec total [new]": 48983, "exec triage": 34061, "executor restarts [base]": 88, "executor restarts [new]": 180, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 211303, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 10958, "no exec duration": 42429000000, "no exec requests": 350, "pending": 7, "prog exec time": 175, "reproducing": 0, "rpc recv": 2072455172, "rpc sent": 245678312, "signal": 205611, "smash jobs": 0, "triage jobs": 0, "vm output": 6333689, "vm restarts [base]": 5, "vm restarts [new]": 16 } 2025/11/09 12:20:09 runner 6 connected 2025/11/09 12:22:04 patched crashed: possible deadlock in run_unpack_ex [need repro = true] 2025/11/09 12:22:04 scheduled a reproduction of 'possible deadlock in run_unpack_ex' 2025/11/09 12:22:15 base crash: possible deadlock in run_unpack_ex 2025/11/09 12:22:54 runner 1 connected 2025/11/09 12:23:05 runner 1 connected 2025/11/09 12:23:28 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/11/09 12:23:28 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/11/09 12:24:25 runner 3 connected 2025/11/09 12:24:42 STAT { "buffer too small": 0, "candidate triage jobs": 55, "candidates": 64938, "comps overflows": 0, "corpus": 16633, "corpus [files]": 627, "corpus [symbols]": 288, "cover overflows": 11419, "coverage": 234667, "distributor delayed": 17043, "distributor undelayed": 17043, "distributor violated": 4, "exec candidate": 16849, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24011, "exec total [new]": 77149, "exec triage": 52234, "executor restarts [base]": 110, "executor restarts [new]": 244, "fault jobs": 0, "fuzzer jobs": 55, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 237030, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 16849, "no exec duration": 43927000000, "no exec requests": 361, "pending": 9, "prog exec time": 340, "reproducing": 0, "rpc recv": 3045388204, "rpc sent": 389622424, "signal": 230661, "smash jobs": 0, "triage jobs": 0, "vm output": 9085459, "vm restarts [base]": 6, "vm restarts [new]": 19 } 2025/11/09 12:27:22 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/11/09 12:27:22 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/11/09 12:27:22 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/11/09 12:27:22 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/11/09 12:27:23 patched crashed: kernel BUG in txUnlock [need repro = true] 2025/11/09 12:27:23 scheduled a reproduction of 'kernel BUG in txUnlock' 2025/11/09 12:28:12 runner 2 connected 2025/11/09 12:28:12 runner 1 
connected 2025/11/09 12:28:18 runner 6 connected 2025/11/09 12:28:43 base crash: kernel BUG in txUnlock 2025/11/09 12:29:34 runner 1 connected 2025/11/09 12:29:42 STAT { "buffer too small": 0, "candidate triage jobs": 52, "candidates": 59314, "comps overflows": 0, "corpus": 22178, "corpus [files]": 759, "corpus [symbols]": 351, "cover overflows": 15266, "coverage": 252167, "distributor delayed": 22683, "distributor undelayed": 22682, "distributor violated": 5, "exec candidate": 22473, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 33940, "exec total [new]": 105890, "exec triage": 69654, "executor restarts [base]": 128, "executor restarts [new]": 300, "fault jobs": 0, "fuzzer jobs": 52, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 254737, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 22473, "no exec duration": 45673000000, "no exec requests": 372, "pending": 12, "prog exec time": 218, "reproducing": 0, "rpc recv": 3917277332, "rpc sent": 535412784, "signal": 247823, "smash jobs": 0, "triage jobs": 0, "vm output": 12337004, "vm restarts [base]": 7, "vm restarts [new]": 22 } 2025/11/09 12:30:25 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/11/09 12:30:25 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/11/09 12:30:50 patched crashed: WARNING in folio_memcg [need repro = true] 2025/11/09 12:30:50 scheduled a reproduction of 'WARNING in folio_memcg' 2025/11/09 12:30:57 base crash: WARNING in folio_memcg 2025/11/09 12:31:01 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:31:04 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 12:31:15 runner 1 connected 2025/11/09 12:31:38 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/11/09 12:31:38 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/11/09 12:31:40 runner 4 connected 2025/11/09 12:31:45 runner 2 connected 2025/11/09 12:31:50 runner 6 connected 2025/11/09 12:31:52 runner 0 connected 2025/11/09 12:32:26 runner 7 connected 2025/11/09 12:32:57 base crash: WARNING in io_ring_exit_work 2025/11/09 12:33:47 runner 0 connected 2025/11/09 12:34:15 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = true] 2025/11/09 12:34:15 scheduled a reproduction of 'WARNING in xfrm6_tunnel_net_exit' 2025/11/09 12:34:42 STAT { "buffer too small": 0, "candidate triage jobs": 46, "candidates": 54547, "comps overflows": 0, "corpus": 26879, "corpus [files]": 863, "corpus [symbols]": 409, "cover overflows": 18404, "coverage": 265383, "distributor delayed": 27416, "distributor undelayed": 27416, "distributor violated": 8, "exec candidate": 27240, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 43005, "exec total [new]": 130504, "exec triage": 84266, "executor restarts [base]": 151, "executor restarts [new]": 375, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 268363, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, 
"minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 27240, "no exec duration": 45688000000, "no exec requests": 375, "pending": 16, "prog exec time": 254, "reproducing": 0, "rpc recv": 4816576456, "rpc sent": 669859592, "signal": 260688, "smash jobs": 0, "triage jobs": 0, "vm output": 16474625, "vm restarts [base]": 9, "vm restarts [new]": 27 } 2025/11/09 12:34:52 base crash: general protection fault in pcl818_ai_cancel 2025/11/09 12:35:12 runner 5 connected 2025/11/09 12:35:41 runner 2 connected 2025/11/09 12:35:45 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = true] 2025/11/09 12:35:45 scheduled a reproduction of 'BUG: sleeping function called from invalid context in hook_sb_delete' 2025/11/09 12:35:46 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = true] 2025/11/09 12:35:46 scheduled a reproduction of 'BUG: sleeping function called from invalid context in hook_sb_delete' 2025/11/09 12:36:35 runner 4 connected 2025/11/09 12:36:36 runner 7 connected 2025/11/09 12:36:43 patched crashed: WARNING in io_ring_exit_work [need repro = false] 2025/11/09 12:37:00 base crash: BUG: sleeping function called from invalid context in hook_sb_delete 2025/11/09 12:37:32 runner 2 connected 2025/11/09 12:37:48 runner 2 connected 2025/11/09 12:37:54 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/11/09 12:37:54 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/11/09 12:38:44 runner 4 connected 2025/11/09 12:39:16 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:39:26 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:39:36 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:39:42 STAT { "buffer too small": 0, "candidate triage jobs": 29, "candidates": 49176, "comps overflows": 0, "corpus": 32189, "corpus [files]": 968, "corpus [symbols]": 463, "cover overflows": 22338, "coverage": 277793, "distributor delayed": 32812, "distributor undelayed": 32812, "distributor violated": 8, "exec candidate": 32611, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 53802, "exec total [new]": 161527, "exec triage": 100992, "executor restarts [base]": 168, "executor restarts [new]": 423, "fault jobs": 0, "fuzzer jobs": 29, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 280927, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32611, "no exec duration": 45738000000, "no exec requests": 377, "pending": 19, "prog exec time": 222, "reproducing": 0, "rpc recv": 5747019784, "rpc sent": 841714304, "signal": 272623, "smash jobs": 0, "triage jobs": 0, "vm output": 19751572, "vm restarts [base]": 11, "vm restarts [new]": 32 } 2025/11/09 12:39:47 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:39:59 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/11/09 12:39:59 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/11/09 12:40:03 base crash: WARNING in folio_memcg 2025/11/09 12:40:04 runner 6 connected 2025/11/09 12:40:17 runner 5 connected 2025/11/09 
12:40:25 runner 1 connected 2025/11/09 12:40:37 runner 8 connected 2025/11/09 12:40:47 runner 3 connected 2025/11/09 12:40:52 runner 1 connected 2025/11/09 12:40:57 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 12:41:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/11/09 12:41:45 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/11/09 12:41:51 base crash: lost connection to test machine 2025/11/09 12:41:54 runner 5 connected 2025/11/09 12:41:58 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/11/09 12:41:58 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/11/09 12:42:34 runner 1 connected 2025/11/09 12:42:38 patched crashed: WARNING in xfrm_state_fini [need repro = true] 2025/11/09 12:42:38 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/11/09 12:42:41 runner 1 connected 2025/11/09 12:42:48 runner 3 connected 2025/11/09 12:43:02 patched crashed: INFO: task hung in bdev_open [need repro = true] 2025/11/09 12:43:02 scheduled a reproduction of 'INFO: task hung in bdev_open' 2025/11/09 12:43:09 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 12:43:27 runner 6 connected 2025/11/09 12:43:29 base crash: possible deadlock in ext4_writepages 2025/11/09 12:43:52 runner 4 connected 2025/11/09 12:43:58 runner 1 connected 2025/11/09 12:44:20 runner 2 connected 2025/11/09 12:44:42 STAT { "buffer too small": 0, "candidate triage jobs": 39, "candidates": 44877, "comps overflows": 0, "corpus": 36420, "corpus [files]": 1037, "corpus [symbols]": 497, "cover overflows": 25632, "coverage": 287761, "distributor delayed": 37899, "distributor undelayed": 37899, "distributor violated": 26, "exec candidate": 36910, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 60119, "exec total [new]": 187295, "exec triage": 114210, "executor restarts [base]": 186, "executor restarts [new]": 493, "fault jobs": 0, "fuzzer jobs": 39, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 291060, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36910, "no exec duration": 45930000000, "no exec requests": 383, "pending": 24, "prog exec time": 215, "reproducing": 0, "rpc recv": 6683870712, "rpc sent": 989238776, "signal": 282435, "smash jobs": 0, "triage jobs": 0, "vm output": 22987015, "vm restarts [base]": 14, "vm restarts [new]": 43 } 2025/11/09 12:46:37 patched crashed: possible deadlock in ext4_destroy_inline_data [need repro = true] 2025/11/09 12:46:37 scheduled a reproduction of 'possible deadlock in ext4_destroy_inline_data' 2025/11/09 12:46:48 patched crashed: possible deadlock in ext4_destroy_inline_data [need repro = true] 2025/11/09 12:46:48 scheduled a reproduction of 'possible deadlock in ext4_destroy_inline_data' 2025/11/09 12:47:26 runner 7 connected 2025/11/09 12:47:37 runner 5 connected 2025/11/09 12:48:03 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:48:14 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:48:23 patched crashed: WARNING in xfrm_state_fini [need repro = true] 
2025/11/09 12:48:23 scheduled a reproduction of 'WARNING in xfrm_state_fini' 2025/11/09 12:48:53 runner 2 connected 2025/11/09 12:49:03 runner 1 connected 2025/11/09 12:49:14 runner 6 connected 2025/11/09 12:49:29 patched crashed: INFO: task hung in __iterate_supers [need repro = true] 2025/11/09 12:49:29 scheduled a reproduction of 'INFO: task hung in __iterate_supers' 2025/11/09 12:49:34 patched crashed: possible deadlock in run_unpack_ex [need repro = false] 2025/11/09 12:49:42 STAT { "buffer too small": 0, "candidate triage jobs": 19, "candidates": 41060, "comps overflows": 0, "corpus": 40175, "corpus [files]": 1094, "corpus [symbols]": 525, "cover overflows": 29011, "coverage": 295588, "distributor delayed": 41513, "distributor undelayed": 41513, "distributor violated": 26, "exec candidate": 40727, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 9, "exec seeds": 0, "exec smash": 0, "exec total [base]": 69495, "exec total [new]": 214639, "exec triage": 126028, "executor restarts [base]": 212, "executor restarts [new]": 549, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 299123, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40726, "no exec duration": 45930000000, "no exec requests": 383, "pending": 28, "prog exec time": 278, "reproducing": 0, "rpc recv": 7419346568, "rpc sent": 1151643000, "signal": 290237, "smash jobs": 0, "triage jobs": 0, "vm output": 26528190, "vm restarts [base]": 14, "vm restarts [new]": 48 } 2025/11/09 12:50:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true] 2025/11/09 12:50:09 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl' 2025/11/09 12:50:18 runner 3 connected 2025/11/09 12:50:23 runner 4 connected 2025/11/09 12:50:36 patched crashed: possible deadlock in ntfs_look_for_free_space [need repro = true] 2025/11/09 12:50:36 scheduled a reproduction of 'possible deadlock in ntfs_look_for_free_space' 2025/11/09 12:50:58 runner 8 connected 2025/11/09 12:51:05 patched crashed: possible deadlock in mark_as_free_ex [need repro = true] 2025/11/09 12:51:05 scheduled a reproduction of 'possible deadlock in mark_as_free_ex' 2025/11/09 12:51:25 runner 5 connected 2025/11/09 12:52:02 runner 2 connected 2025/11/09 12:52:46 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:53:01 base crash: WARNING in folio_memcg 2025/11/09 12:53:32 base crash: WARNING in xfrm_state_fini 2025/11/09 12:53:39 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/09 12:53:42 runner 5 connected 2025/11/09 12:53:46 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/09 12:53:50 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/09 12:53:50 runner 2 connected 2025/11/09 12:53:57 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/09 12:54:22 runner 0 connected 2025/11/09 12:54:27 runner 4 connected 2025/11/09 12:54:31 patched crashed: WARNING in __dev_set_promiscuity [need repro = true] 2025/11/09 12:54:31 scheduled a reproduction of 'WARNING in __dev_set_promiscuity' 2025/11/09 12:54:35 runner 8 connected 2025/11/09 12:54:38 runner 6 connected 
2025/11/09 12:54:42 STAT { "buffer too small": 0, "candidate triage jobs": 6, "candidates": 39192, "comps overflows": 0, "corpus": 41992, "corpus [files]": 1116, "corpus [symbols]": 536, "cover overflows": 32514, "coverage": 299311, "distributor delayed": 43606, "distributor undelayed": 43606, "distributor violated": 32, "exec candidate": 42595, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 11, "exec seeds": 0, "exec smash": 0, "exec total [base]": 76061, "exec total [new]": 237482, "exec triage": 131924, "executor restarts [base]": 228, "executor restarts [new]": 605, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 302937, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42594, "no exec duration": 46103000000, "no exec requests": 391, "pending": 32, "prog exec time": 206, "reproducing": 0, "rpc recv": 8107269276, "rpc sent": 1303134464, "signal": 293961, "smash jobs": 0, "triage jobs": 0, "vm output": 29780363, "vm restarts [base]": 16, "vm restarts [new]": 57 } 2025/11/09 12:54:47 runner 1 connected 2025/11/09 12:55:20 runner 3 connected 2025/11/09 12:55:51 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:56:02 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:56:13 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:56:23 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:56:34 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 12:56:42 runner 5 connected 2025/11/09 12:56:51 runner 3 connected 2025/11/09 12:57:02 runner 1 connected 2025/11/09 12:57:13 runner 4 connected 2025/11/09 12:57:17 patched crashed: BUG: sleeping function called from invalid context in hook_sb_delete [need repro = false] 2025/11/09 12:57:23 runner 2 connected 2025/11/09 12:57:38 base crash: lost connection to test machine 2025/11/09 12:57:45 base crash: lost connection to test machine 2025/11/09 12:57:54 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 12:58:05 runner 5 connected 2025/11/09 12:58:28 runner 1 connected 2025/11/09 12:58:35 runner 2 connected 2025/11/09 12:58:44 runner 6 connected 2025/11/09 12:59:42 STAT { "buffer too small": 0, "candidate triage jobs": 14, "candidates": 37530, "comps overflows": 0, "corpus": 43563, "corpus [files]": 1139, "corpus [symbols]": 547, "cover overflows": 36933, "coverage": 302630, "distributor delayed": 45387, "distributor undelayed": 45386, "distributor violated": 43, "exec candidate": 44257, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 84837, "exec total [new]": 263726, "exec triage": 137295, "executor restarts [base]": 249, "executor restarts [new]": 678, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 306443, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44254, "no exec duration": 46201000000, "no 
exec requests": 394, "pending": 32, "prog exec time": 247, "reproducing": 0, "rpc recv": 8839428972, "rpc sent": 1487376848, "signal": 297211, "smash jobs": 0, "triage jobs": 0, "vm output": 32887750, "vm restarts [base]": 18, "vm restarts [new]": 66 } 2025/11/09 12:59:45 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:00:36 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/09 13:00:41 runner 1 connected 2025/11/09 13:00:46 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/09 13:01:24 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:01:26 runner 7 connected 2025/11/09 13:01:35 runner 0 connected 2025/11/09 13:02:13 runner 6 connected 2025/11/09 13:03:00 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:03:00 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:03:50 runner 1 connected 2025/11/09 13:03:50 runner 8 connected 2025/11/09 13:04:42 STAT { "buffer too small": 0, "candidate triage jobs": 17, "candidates": 36416, "comps overflows": 0, "corpus": 44579, "corpus [files]": 1151, "corpus [symbols]": 551, "cover overflows": 40361, "coverage": 304756, "distributor delayed": 46538, "distributor undelayed": 46538, "distributor violated": 43, "exec candidate": 45371, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 93259, "exec total [new]": 285819, "exec triage": 140710, "executor restarts [base]": 266, "executor restarts [new]": 724, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 308733, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45336, "no exec duration": 46216000000, "no exec requests": 395, "pending": 32, "prog exec time": 274, "reproducing": 0, "rpc recv": 9371473400, "rpc sent": 1630345088, "signal": 299188, "smash jobs": 0, "triage jobs": 0, "vm output": 36203639, "vm restarts [base]": 18, "vm restarts [new]": 72 } 2025/11/09 13:06:17 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:06:26 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/09 13:06:57 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/09 13:07:08 runner 4 connected 2025/11/09 13:07:16 runner 0 connected 2025/11/09 13:07:35 base crash: lost connection to test machine 2025/11/09 13:07:47 runner 2 connected 2025/11/09 13:08:19 base crash: possible deadlock in attr_data_get_block 2025/11/09 13:08:31 runner 1 connected 2025/11/09 13:09:09 runner 2 connected 2025/11/09 13:09:42 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 29594, "comps overflows": 0, "corpus": 45700, "corpus [files]": 1162, "corpus [symbols]": 559, "cover overflows": 44726, "coverage": 306861, "distributor delayed": 47623, "distributor undelayed": 47623, "distributor violated": 43, "exec candidate": 52193, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 98874, "exec total [new]": 312541, "exec triage": 144510, "executor restarts [base]": 289, "executor restarts [new]": 803, "fault 
jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 310850, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46519, "no exec duration": 48778000000, "no exec requests": 399, "pending": 32, "prog exec time": 287, "reproducing": 0, "rpc recv": 9803459916, "rpc sent": 1780915048, "signal": 301302, "smash jobs": 0, "triage jobs": 0, "vm output": 40172541, "vm restarts [base]": 20, "vm restarts [new]": 75 } 2025/11/09 13:10:14 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:11:04 runner 2 connected 2025/11/09 13:11:05 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:11:35 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:11:56 runner 8 connected 2025/11/09 13:12:23 runner 3 connected 2025/11/09 13:13:42 triaged 91.0% of the corpus 2025/11/09 13:13:42 starting bug reproductions 2025/11/09 13:13:42 starting bug reproductions (max 6 VMs, 4 repros) 2025/11/09 13:13:42 start reproducing 'KASAN: slab-use-after-free Read in jfs_lazycommit' 2025/11/09 13:13:42 reproduction of "WARNING in folio_memcg" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "WARNING in folio_memcg" aborted: it's no longer needed 2025/11/09 13:13:42 start reproducing 'possible deadlock in dqget' 2025/11/09 13:13:42 start reproducing 'WARNING in xfrm6_tunnel_net_exit' 2025/11/09 13:13:42 reproduction of "possible deadlock in run_unpack_ex" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "WARNING in folio_memcg" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "BUG: sleeping function called from invalid context in hook_sb_delete" aborted: it's no longer needed 2025/11/09 13:13:42 start reproducing 'KASAN: out-of-bounds Read in ext4_xattr_set_entry' 2025/11/09 13:13:42 reproduction of "BUG: sleeping function called from invalid context in hook_sb_delete" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/11/09 13:13:42 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/11/09 13:13:44 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/11/09 13:14:42 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 5260, "comps overflows": 0, "corpus": 46056, "corpus [files]": 1168, "corpus [symbols]": 561, "cover overflows": 49230, "coverage": 307529, "distributor delayed": 48086, "distributor undelayed": 48078, "distributor violated": 64, "exec candidate": 76527, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 109090, "exec total [new]": 338278, "exec triage": 145904, "executor restarts 
[base]": 312, "executor restarts [new]": 844, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311656, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46935, "no exec duration": 50930000000, "no exec requests": 405, "pending": 14, "prog exec time": 253, "reproducing": 4, "rpc recv": 10171924984, "rpc sent": 1918003264, "signal": 301953, "smash jobs": 0, "triage jobs": 0, "vm output": 42875718, "vm restarts [base]": 20, "vm restarts [new]": 78 } 2025/11/09 13:14:45 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:14:58 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:15:35 runner 6 connected 2025/11/09 13:16:04 base crash: WARNING in io_ring_exit_work 2025/11/09 13:16:54 runner 1 connected 2025/11/09 13:17:49 base crash: lost connection to test machine 2025/11/09 13:17:54 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:18:12 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:18:38 runner 0 connected 2025/11/09 13:19:01 runner 7 connected 2025/11/09 13:19:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 10, "corpus": 46160, "corpus [files]": 1170, "corpus [symbols]": 563, "cover overflows": 51174, "coverage": 307731, "distributor delayed": 48271, "distributor undelayed": 48271, "distributor violated": 87, "exec candidate": 81787, "exec collide": 301, "exec fuzz": 652, "exec gen": 28, "exec hints": 43, "exec inject": 0, "exec minimize": 245, "exec retries": 21, "exec seeds": 28, "exec smash": 91, "exec total [base]": 118137, "exec total [new]": 345358, "exec triage": 146333, "executor restarts [base]": 329, "executor restarts [new]": 875, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 6, "max signal": 311940, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 160, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47068, "no exec duration": 50930000000, "no exec requests": 405, "pending": 14, "prog exec time": 595, "reproducing": 4, "rpc recv": 10536454948, "rpc sent": 2005562744, "signal": 302152, "smash jobs": 9, "triage jobs": 10, "vm output": 45954511, "vm restarts [base]": 22, "vm restarts [new]": 80 } 2025/11/09 13:19:57 base crash: WARNING in folio_memcg 2025/11/09 13:20:47 runner 0 connected 2025/11/09 13:21:45 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:22:16 base crash: WARNING in folio_memcg 2025/11/09 13:22:17 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:22:24 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl 
[scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:22:35 runner 6 connected 2025/11/09 13:23:06 runner 2 connected 2025/11/09 13:23:07 runner 8 connected 2025/11/09 13:23:39 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:24:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 65, "corpus": 46199, "corpus [files]": 1176, "corpus [symbols]": 569, "cover overflows": 53047, "coverage": 307853, "distributor delayed": 48361, "distributor undelayed": 48360, "distributor violated": 87, "exec candidate": 81787, "exec collide": 752, "exec fuzz": 1499, "exec gen": 75, "exec hints": 445, "exec inject": 0, "exec minimize": 1019, "exec retries": 21, "exec seeds": 149, "exec smash": 910, "exec total [base]": 122861, "exec total [new]": 348995, "exec triage": 146508, "executor restarts [base]": 343, "executor restarts [new]": 892, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 18, "max signal": 312080, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 617, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47133, "no exec duration": 50930000000, "no exec requests": 405, "pending": 14, "prog exec time": 536, "reproducing": 4, "rpc recv": 10919008528, "rpc sent": 2158241176, "signal": 302244, "smash jobs": 28, "triage jobs": 5, "vm output": 48823685, "vm restarts [base]": 24, "vm restarts [new]": 82 } 2025/11/09 13:25:54 base crash: lost connection to test machine 2025/11/09 13:25:55 base crash: WARNING in xfrm6_tunnel_net_exit 2025/11/09 13:26:30 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = true] 2025/11/09 13:26:30 scheduled a reproduction of 'possible deadlock in ocfs2_xattr_set' 2025/11/09 13:26:39 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/09 13:26:43 runner 0 connected 2025/11/09 13:26:45 runner 2 connected 2025/11/09 13:26:52 repro finished 'possible deadlock in dqget', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 13:26:52 failed repro for "possible deadlock in dqget", err=%!s() 2025/11/09 13:26:52 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/11/09 13:26:52 start reproducing 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/11/09 13:26:52 "possible deadlock in dqget": saved crash log into 1762694812.crash.log 2025/11/09 13:26:52 "possible deadlock in dqget": saved repro log into 1762694812.repro.log 2025/11/09 13:27:13 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/11/09 13:27:18 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/11/09 13:27:18 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/11/09 13:27:21 runner 7 connected 2025/11/09 13:27:28 runner 6 connected 2025/11/09 13:27:59 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:27:59 repro finished 'KASAN: out-of-bounds Read in ext4_xattr_set_entry', repro=true crepro=false desc='KASAN: out-of-bounds Read in ext4_xattr_set_entry' hub=false from_dashboard=false 2025/11/09 13:27:59 found repro for "KASAN: out-of-bounds Read in ext4_xattr_set_entry" (orig title: "-SAME-", reliability: 0), took 14.25 minutes 2025/11/09 13:27:59 start reproducing 'INFO: task hung in bdev_open' 2025/11/09 13:27:59 KASAN: out-of-bounds Read in ext4_xattr_set_entry: repro is too unreliable, skipping 2025/11/09 13:27:59 "KASAN: out-of-bounds Read in ext4_xattr_set_entry": saved crash log into 1762694879.crash.log 2025/11/09 13:27:59 "KASAN: out-of-bounds Read in ext4_xattr_set_entry": saved repro log into 1762694879.repro.log 2025/11/09 13:28:03 runner 0 connected 2025/11/09 13:28:09 runner 8 connected 2025/11/09 13:28:13 base crash: lost connection to test machine 2025/11/09 13:28:32 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:28:55 base crash: lost connection to test machine 2025/11/09 13:29:04 runner 2 connected 2025/11/09 13:29:07 base crash: WARNING in xfrm_state_fini 2025/11/09 13:29:22 runner 7 connected 2025/11/09 13:29:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 86, "corpus": 46239, "corpus [files]": 1178, "corpus [symbols]": 571, "cover overflows": 54396, "coverage": 307909, "distributor delayed": 48460, "distributor undelayed": 48460, "distributor violated": 87, "exec candidate": 81787, "exec collide": 1265, "exec fuzz": 2402, "exec gen": 122, "exec hints": 901, "exec inject": 0, "exec minimize": 1540, "exec retries": 21, "exec seeds": 271, "exec smash": 1793, "exec total [base]": 126132, "exec total [new]": 352587, "exec triage": 146655, "executor restarts [base]": 357, "executor restarts [new]": 915, "fault jobs": 0, "fuzzer jobs": 66, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 19, "max signal": 312232, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 919, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47198, "no exec duration": 51228000000, "no exec requests": 408, "pending": 13, "prog exec time": 339, "reproducing": 4, "rpc recv": 11326806884, "rpc sent": 2270304192, "signal": 302300, "smash jobs": 35, "triage jobs": 12, "vm output": 51636633, "vm restarts [base]": 28, "vm restarts [new]": 86 } 2025/11/09 13:29:45 runner 0 connected 2025/11/09 13:29:54 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:29:55 runner 1 connected 2025/11/09 13:30:26 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/11/09 13:30:39 base crash: kernel BUG in txUnlock 2025/11/09 13:30:46 runner 7 connected 2025/11/09 13:31:15 runner 8 connected 2025/11/09 13:31:29 runner 1 connected 2025/11/09 13:31:29 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:32:20 runner 6 connected 2025/11/09 13:34:33 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:34:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 120, "corpus": 46270, "corpus [files]": 1181, "corpus [symbols]": 573, "cover overflows": 55759, "coverage": 307953, "distributor delayed": 48544, "distributor undelayed": 48536, "distributor violated": 87, "exec candidate": 
81787, "exec collide": 1741, "exec fuzz": 3327, "exec gen": 157, "exec hints": 1456, "exec inject": 0, "exec minimize": 2096, "exec retries": 21, "exec seeds": 353, "exec smash": 2594, "exec total [base]": 130691, "exec total [new]": 356152, "exec triage": 146789, "executor restarts [base]": 377, "executor restarts [new]": 933, "fault jobs": 0, "fuzzer jobs": 63, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 26, "max signal": 312316, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1192, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47247, "no exec duration": 118026000000, "no exec requests": 611, "pending": 13, "prog exec time": 568, "reproducing": 4, "rpc recv": 11775607492, "rpc sent": 2407854552, "signal": 302341, "smash jobs": 28, "triage jobs": 9, "vm output": 54373266, "vm restarts [base]": 31, "vm restarts [new]": 89 } 2025/11/09 13:35:22 runner 7 connected 2025/11/09 13:38:04 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:38:13 base crash: lost connection to test machine 2025/11/09 13:38:38 base crash: WARNING in io_ring_exit_work 2025/11/09 13:39:02 runner 0 connected 2025/11/09 13:39:27 runner 2 connected 2025/11/09 13:39:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 157, "corpus": 46294, "corpus [files]": 1184, "corpus [symbols]": 576, "cover overflows": 57309, "coverage": 307999, "distributor delayed": 48614, "distributor undelayed": 48614, "distributor violated": 87, "exec candidate": 81787, "exec collide": 2355, "exec fuzz": 4416, "exec gen": 216, "exec hints": 2376, "exec inject": 0, "exec minimize": 2634, "exec retries": 21, "exec seeds": 436, "exec smash": 3350, "exec total [base]": 134194, "exec total [new]": 360347, "exec triage": 146923, "executor restarts [base]": 396, "executor restarts [new]": 966, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 24, "max signal": 312414, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1518, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47294, "no exec duration": 316857000000, "no exec requests": 1218, "pending": 13, "prog exec time": 551, "reproducing": 4, "rpc recv": 12095420240, "rpc sent": 2535728008, "signal": 302386, "smash jobs": 13, "triage jobs": 4, "vm output": 58494114, "vm restarts [base]": 33, "vm restarts [new]": 90 } 2025/11/09 13:40:13 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:41:11 runner 8 connected 2025/11/09 13:42:09 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:42:29 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:42:56 base crash: WARNING in folio_memcg 2025/11/09 13:42:57 runner 7 connected 2025/11/09 13:42:58 patched crashed: possible deadlock in 
ocfs2_try_remove_refcount_tree [need repro = true] 2025/11/09 13:42:58 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/11/09 13:43:29 base crash: WARNING in folio_memcg 2025/11/09 13:43:45 runner 1 connected 2025/11/09 13:43:47 runner 6 connected 2025/11/09 13:44:17 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:44:18 runner 0 connected 2025/11/09 13:44:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 172, "corpus": 46307, "corpus [files]": 1184, "corpus [symbols]": 576, "cover overflows": 58340, "coverage": 308021, "distributor delayed": 48679, "distributor undelayed": 48669, "distributor violated": 87, "exec candidate": 81787, "exec collide": 2800, "exec fuzz": 5297, "exec gen": 259, "exec hints": 3291, "exec inject": 0, "exec minimize": 3019, "exec retries": 21, "exec seeds": 475, "exec smash": 3766, "exec total [base]": 138013, "exec total [new]": 363537, "exec triage": 146989, "executor restarts [base]": 445, "executor restarts [new]": 1005, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 10, "max signal": 312470, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1735, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47327, "no exec duration": 329098000000, "no exec requests": 1262, "pending": 14, "prog exec time": 482, "reproducing": 4, "rpc recv": 12462567552, "rpc sent": 2642960920, "signal": 302408, "smash jobs": 3, "triage jobs": 12, "vm output": 61583995, "vm restarts [base]": 35, "vm restarts [new]": 93 } 2025/11/09 13:45:06 runner 6 connected 2025/11/09 13:45:16 base crash: lost connection to test machine 2025/11/09 13:45:22 base crash: lost connection to test machine 2025/11/09 13:45:36 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:45:50 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:45:55 base crash: WARNING in folio_memcg 2025/11/09 13:46:05 runner 1 connected 2025/11/09 13:46:11 runner 0 connected 2025/11/09 13:46:25 runner 6 connected 2025/11/09 13:46:39 runner 8 connected 2025/11/09 13:46:44 runner 2 connected 2025/11/09 13:46:56 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:47:40 base crash: lost connection to test machine 2025/11/09 13:47:44 runner 6 connected 2025/11/09 13:48:17 base crash: lost connection to test machine 2025/11/09 13:48:27 runner 0 connected 2025/11/09 13:48:37 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:48:47 patched crashed: general protection fault in lmLogSync [need repro = true] 2025/11/09 13:48:47 scheduled a reproduction of 'general protection fault in lmLogSync' 2025/11/09 13:48:54 base crash: lost connection to test machine 2025/11/09 13:48:57 base crash: lost connection to test machine 2025/11/09 13:49:07 runner 1 connected 2025/11/09 13:49:25 runner 7 connected 2025/11/09 13:49:36 runner 6 connected 2025/11/09 13:49:39 base crash: lost connection to test machine 2025/11/09 13:49:42 runner 2 connected 2025/11/09 13:49:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 187, "corpus": 46325, "corpus [files]": 1185, "corpus [symbols]": 577, "cover overflows": 59641, "coverage": 308047, "distributor delayed": 48766, "distributor undelayed": 48764, 
"distributor violated": 96, "exec candidate": 81787, "exec collide": 3251, "exec fuzz": 6194, "exec gen": 307, "exec hints": 4240, "exec inject": 0, "exec minimize": 3503, "exec retries": 21, "exec seeds": 529, "exec smash": 4160, "exec total [base]": 140654, "exec total [new]": 366920, "exec triage": 147091, "executor restarts [base]": 466, "executor restarts [new]": 1042, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 3, "hints jobs": 7, "max signal": 312603, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2029, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47366, "no exec duration": 337672000000, "no exec requests": 1283, "pending": 15, "prog exec time": 743, "reproducing": 4, "rpc recv": 12943302544, "rpc sent": 2720218496, "signal": 302433, "smash jobs": 5, "triage jobs": 12, "vm output": 65231338, "vm restarts [base]": 41, "vm restarts [new]": 99 } 2025/11/09 13:49:46 runner 0 connected 2025/11/09 13:50:26 runner 1 connected 2025/11/09 13:51:00 base crash: lost connection to test machine 2025/11/09 13:51:33 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 13:51:35 base crash: lost connection to test machine 2025/11/09 13:51:48 runner 1 connected 2025/11/09 13:51:57 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 13:52:22 runner 6 connected 2025/11/09 13:52:26 runner 2 connected 2025/11/09 13:52:35 base crash: lost connection to test machine 2025/11/09 13:53:25 runner 1 connected 2025/11/09 13:54:15 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2025/11/09 13:54:15 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2025/11/09 13:54:32 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/09 13:54:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 224, "corpus": 46349, "corpus [files]": 1187, "corpus [symbols]": 578, "cover overflows": 60713, "coverage": 308093, "distributor delayed": 48823, "distributor undelayed": 48821, "distributor violated": 96, "exec candidate": 81787, "exec collide": 3716, "exec fuzz": 6925, "exec gen": 343, "exec hints": 4789, "exec inject": 0, "exec minimize": 4132, "exec retries": 21, "exec seeds": 594, "exec smash": 4619, "exec total [base]": 143027, "exec total [new]": 369978, "exec triage": 147212, "executor restarts [base]": 514, "executor restarts [new]": 1079, "fault jobs": 0, "fuzzer jobs": 27, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 11, "max signal": 312679, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2444, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47407, "no exec duration": 340672000000, "no exec requests": 1286, "pending": 16, "prog exec time": 1825, "reproducing": 4, "rpc recv": 13346844372, "rpc sent": 2840894176, "signal": 302479, "smash jobs": 9, "triage jobs": 7, "vm output": 71668522, "vm restarts [base]": 46, "vm restarts [new]": 100 } 2025/11/09 13:55:03 runner 7 connected 2025/11/09 
13:55:22 runner 2 connected 2025/11/09 13:58:03 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 13:58:52 runner 8 connected 2025/11/09 13:59:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 269, "corpus": 46374, "corpus [files]": 1188, "corpus [symbols]": 579, "cover overflows": 61586, "coverage": 308150, "distributor delayed": 48879, "distributor undelayed": 48879, "distributor violated": 96, "exec candidate": 81787, "exec collide": 4044, "exec fuzz": 7576, "exec gen": 381, "exec hints": 5178, "exec inject": 0, "exec minimize": 4529, "exec retries": 21, "exec seeds": 676, "exec smash": 5163, "exec total [base]": 145930, "exec total [new]": 372508, "exec triage": 147310, "executor restarts [base]": 535, "executor restarts [new]": 1098, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 20, "max signal": 312759, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2663, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47443, "no exec duration": 340672000000, "no exec requests": 1286, "pending": 16, "prog exec time": 826, "reproducing": 4, "rpc recv": 13651324240, "rpc sent": 3026220160, "signal": 302530, "smash jobs": 22, "triage jobs": 4, "vm output": 78210298, "vm restarts [base]": 47, "vm restarts [new]": 102 } 2025/11/09 14:01:17 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/11/09 14:01:32 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:01:36 base crash: WARNING in folio_memcg 2025/11/09 14:01:49 base crash: WARNING in folio_memcg 2025/11/09 14:02:06 runner 7 connected 2025/11/09 14:02:26 runner 2 connected 2025/11/09 14:02:38 runner 0 connected 2025/11/09 14:03:06 patched crashed: kernel BUG in ocfs2_set_new_buffer_uptodate [need repro = true] 2025/11/09 14:03:06 scheduled a reproduction of 'kernel BUG in ocfs2_set_new_buffer_uptodate' 2025/11/09 14:03:56 runner 6 connected 2025/11/09 14:04:09 base crash: kernel BUG in ocfs2_set_new_buffer_uptodate 2025/11/09 14:04:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 294, "corpus": 46404, "corpus [files]": 1188, "corpus [symbols]": 579, "cover overflows": 62579, "coverage": 308218, "distributor delayed": 48961, "distributor undelayed": 48961, "distributor violated": 96, "exec candidate": 81787, "exec collide": 4416, "exec fuzz": 8352, "exec gen": 425, "exec hints": 5713, "exec inject": 0, "exec minimize": 4904, "exec retries": 21, "exec seeds": 758, "exec smash": 5743, "exec total [base]": 148543, "exec total [new]": 375406, "exec triage": 147441, "executor restarts [base]": 551, "executor restarts [new]": 1117, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 21, "max signal": 312893, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2870, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47490, "no exec duration": 340672000000, "no exec requests": 1286, "pending": 17, 
"prog exec time": 797, "reproducing": 4, "rpc recv": 13960251772, "rpc sent": 3192158848, "signal": 302598, "smash jobs": 26, "triage jobs": 2, "vm output": 81775457, "vm restarts [base]": 49, "vm restarts [new]": 104 } 2025/11/09 14:05:07 runner 0 connected 2025/11/09 14:05:57 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:06:05 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/09 14:06:39 patched crashed: kernel BUG in jfs_evict_inode [need repro = true] 2025/11/09 14:06:39 scheduled a reproduction of 'kernel BUG in jfs_evict_inode' 2025/11/09 14:06:56 runner 6 connected 2025/11/09 14:07:11 base crash: lost connection to test machine 2025/11/09 14:07:27 runner 7 connected 2025/11/09 14:08:01 runner 0 connected 2025/11/09 14:09:04 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/09 14:09:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 314, "corpus": 46436, "corpus [files]": 1189, "corpus [symbols]": 580, "cover overflows": 63525, "coverage": 308348, "distributor delayed": 49063, "distributor undelayed": 49063, "distributor violated": 96, "exec candidate": 81787, "exec collide": 4837, "exec fuzz": 9139, "exec gen": 458, "exec hints": 6052, "exec inject": 0, "exec minimize": 5204, "exec retries": 21, "exec seeds": 858, "exec smash": 6544, "exec total [base]": 151481, "exec total [new]": 378388, "exec triage": 147644, "executor restarts [base]": 583, "executor restarts [new]": 1146, "fault jobs": 0, "fuzzer jobs": 48, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 14, "max signal": 313123, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3032, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47561, "no exec duration": 340672000000, "no exec requests": 1286, "pending": 18, "prog exec time": 537, "reproducing": 4, "rpc recv": 14280378948, "rpc sent": 3314392208, "signal": 302686, "smash jobs": 30, "triage jobs": 4, "vm output": 85445934, "vm restarts [base]": 51, "vm restarts [new]": 106 } 2025/11/09 14:09:54 runner 2 connected 2025/11/09 14:10:21 base crash: INFO: task hung in __iterate_supers 2025/11/09 14:10:21 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:11:10 runner 1 connected 2025/11/09 14:12:57 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:13:33 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:13:45 runner 6 connected 2025/11/09 14:14:00 base crash: lost connection to test machine 2025/11/09 14:14:15 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:14:23 runner 7 connected 2025/11/09 14:14:26 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:14:27 base crash: lost connection to test machine 2025/11/09 14:14:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 317, "corpus": 46447, "corpus [files]": 1190, "corpus [symbols]": 580, "cover overflows": 64676, "coverage": 308393, "distributor delayed": 49114, "distributor undelayed": 49108, "distributor violated": 96, "exec candidate": 81787, "exec collide": 5345, "exec fuzz": 10166, "exec gen": 513, "exec hints": 6803, "exec inject": 0, "exec minimize": 5456, "exec retries": 21, "exec seeds": 884, "exec smash": 7096, "exec total [base]": 154895, "exec total [new]": 381635, "exec triage": 147716, "executor restarts [base]": 604, "executor restarts [new]": 1164, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 5, "max signal": 313220, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3151, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47588, "no exec duration": 340672000000, "no exec requests": 1286, "pending": 18, "prog exec time": 582, "reproducing": 4, "rpc recv": 14573405172, "rpc sent": 3437054672, "signal": 302730, "smash jobs": 1, "triage jobs": 7, "vm output": 88763943, "vm restarts [base]": 53, "vm restarts [new]": 108 } 2025/11/09 14:14:50 runner 0 connected 2025/11/09 14:15:04 runner 6 connected 2025/11/09 14:15:11 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:15:16 runner 1 connected 2025/11/09 14:15:52 base crash: lost connection to test machine 2025/11/09 14:15:59 runner 7 connected 2025/11/09 14:16:07 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:16:29 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:16:31 base crash: lost connection to test machine 2025/11/09 14:16:40 runner 1 connected 2025/11/09 14:16:55 runner 6 connected 2025/11/09 14:17:12 runner 0 connected 2025/11/09 14:17:12 base crash: lost connection to test machine 2025/11/09 14:17:18 runner 7 connected 2025/11/09 14:17:42 base crash: lost connection to test machine 2025/11/09 14:17:51 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:18:00 runner 1 connected 2025/11/09 14:18:30 runner 0 connected 2025/11/09 14:18:38 base crash: lost connection to test machine 2025/11/09 14:18:40 runner 7 connected 2025/11/09 14:19:24 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:19:27 runner 2 connected 2025/11/09 14:19:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 327, "corpus": 46461, "corpus [files]": 1190, "corpus [symbols]": 580, "cover overflows": 65753, "coverage": 308433, "distributor delayed": 49172, "distributor undelayed": 49167, "distributor violated": 96, "exec candidate": 81787, "exec collide": 5724, "exec fuzz": 10882, "exec gen": 545, "exec hints": 7176, "exec inject": 0, "exec minimize": 5942, "exec retries": 21, "exec seeds": 920, "exec smash": 7419, "exec total [base]": 156979, "exec total [new]": 384066, "exec triage": 147796, "executor restarts [base]": 629, "executor restarts [new]": 1188, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 3, "max signal": 313275, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3378, 
"minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47616, "no exec duration": 340672000000, "no exec requests": 1286, "pending": 18, "prog exec time": 536, "reproducing": 4, "rpc recv": 15082851240, "rpc sent": 3566090816, "signal": 302770, "smash jobs": 4, "triage jobs": 8, "vm output": 92062399, "vm restarts [base]": 60, "vm restarts [new]": 113 } 2025/11/09 14:20:04 base crash: WARNING in folio_memcg 2025/11/09 14:20:12 runner 7 connected 2025/11/09 14:21:02 runner 0 connected 2025/11/09 14:22:19 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:22:21 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:22:44 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:23:07 runner 8 connected 2025/11/09 14:23:07 base crash: lost connection to test machine 2025/11/09 14:23:11 runner 6 connected 2025/11/09 14:23:22 base crash: WARNING in folio_memcg 2025/11/09 14:23:42 base crash: lost connection to test machine 2025/11/09 14:23:56 runner 1 connected 2025/11/09 14:24:11 runner 2 connected 2025/11/09 14:24:16 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13133: connect: connection refused 2025/11/09 14:24:16 VM-8 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:13133: connect: connection refused 2025/11/09 14:24:26 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:24:32 runner 0 connected 2025/11/09 14:24:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 338, "corpus": 46482, "corpus [files]": 1191, "corpus [symbols]": 581, "cover overflows": 66923, "coverage": 308470, "distributor delayed": 49221, "distributor undelayed": 49218, "distributor violated": 96, "exec candidate": 81787, "exec collide": 6426, "exec fuzz": 12119, "exec gen": 618, "exec hints": 8116, "exec inject": 0, "exec minimize": 6361, "exec retries": 21, "exec seeds": 977, "exec smash": 7906, "exec total [base]": 160033, "exec total [new]": 388077, "exec triage": 147891, "executor restarts [base]": 653, "executor restarts [new]": 1214, "fault jobs": 0, "fuzzer jobs": 14, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 8, "max signal": 313340, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3638, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47649, "no exec duration": 343690000000, "no exec requests": 1290, "pending": 18, "prog exec time": 516, "reproducing": 4, "rpc recv": 15486805716, "rpc sent": 3715444824, "signal": 302804, "smash jobs": 1, "triage jobs": 5, "vm output": 95209770, "vm restarts [base]": 64, "vm restarts [new]": 116 } 2025/11/09 14:25:15 runner 8 connected 2025/11/09 14:25:21 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/11/09 14:26:11 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/09 14:26:11 runner 6 connected 2025/11/09 14:27:00 runner 1 connected 2025/11/09 14:27:03 base crash: lost connection to test machine 2025/11/09 14:27:46 base crash: 
WARNING in folio_memcg 2025/11/09 14:27:52 runner 2 connected 2025/11/09 14:28:35 runner 1 connected 2025/11/09 14:29:20 patched crashed: WARNING in io_ring_exit_work [need repro = false] 2025/11/09 14:29:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 353, "corpus": 46505, "corpus [files]": 1192, "corpus [symbols]": 581, "cover overflows": 68359, "coverage": 308502, "distributor delayed": 49285, "distributor undelayed": 49284, "distributor violated": 96, "exec candidate": 81787, "exec collide": 7190, "exec fuzz": 13540, "exec gen": 703, "exec hints": 9427, "exec inject": 0, "exec minimize": 6965, "exec retries": 21, "exec seeds": 1043, "exec smash": 8388, "exec total [base]": 164422, "exec total [new]": 392946, "exec triage": 148026, "executor restarts [base]": 679, "executor restarts [new]": 1237, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 1, "hints jobs": 12, "max signal": 313414, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3945, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47695, "no exec duration": 346792000000, "no exec requests": 1296, "pending": 18, "prog exec time": 345, "reproducing": 4, "rpc recv": 15863928756, "rpc sent": 3860583968, "signal": 302834, "smash jobs": 9, "triage jobs": 3, "vm output": 98252711, "vm restarts [base]": 67, "vm restarts [new]": 118 } 2025/11/09 14:29:46 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:29:57 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:30:10 runner 7 connected 2025/11/09 14:30:36 runner 8 connected 2025/11/09 14:30:40 base crash: WARNING in rate_control_rate_init 2025/11/09 14:30:48 runner 6 connected 2025/11/09 14:30:59 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:31:31 runner 1 connected 2025/11/09 14:32:00 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:32:47 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:33:31 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:33:40 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:33:43 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:33:53 base crash: WARNING in folio_memcg 2025/11/09 14:34:08 base crash: WARNING in xfrm_state_fini 2025/11/09 14:34:15 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:34:20 runner 6 connected 2025/11/09 14:34:29 runner 7 connected 2025/11/09 14:34:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 376, "corpus": 46516, "corpus [files]": 1192, "corpus [symbols]": 581, "cover overflows": 69318, "coverage": 308517, "distributor delayed": 49330, "distributor undelayed": 49330, "distributor violated": 96, "exec candidate": 81787, "exec collide": 7693, "exec fuzz": 14435, "exec gen": 742, "exec hints": 10124, "exec inject": 0, "exec minimize": 7270, "exec retries": 21, "exec seeds": 1077, "exec smash": 8721, "exec total [base]": 169111, "exec total [new]": 395850, "exec triage": 148119, "executor restarts [base]": 700, "executor restarts [new]": 1274, "fault jobs": 0, "fuzzer jobs": 13, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 6, "max signal": 313490, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4163, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47728, "no exec duration": 348842000000, "no exec requests": 1301, "pending": 18, "prog exec time": 919, "reproducing": 4, "rpc recv": 16299114972, "rpc sent": 3985224968, "signal": 302848, "smash jobs": 2, "triage jobs": 5, "vm output": 101262157, "vm restarts [base]": 68, "vm restarts [new]": 123 } 2025/11/09 14:34:44 runner 2 connected 2025/11/09 14:34:58 runner 0 connected 2025/11/09 14:35:39 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:36:16 repro finished 'KASAN: slab-use-after-free Read in jfs_lazycommit', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 14:36:16 reproduction of "WARNING in xfrm_state_fini" aborted: it's no longer needed 2025/11/09 14:36:16 reproduction of "INFO: task hung in __iterate_supers" aborted: it's no longer needed 2025/11/09 14:36:16 failed repro for "KASAN: slab-use-after-free Read in jfs_lazycommit", err=%!s() 2025/11/09 14:36:16 start reproducing 'possible deadlock in ext4_destroy_inline_data' 2025/11/09 14:36:16 "KASAN: slab-use-after-free Read in jfs_lazycommit": saved crash log into 1762698976.crash.log 2025/11/09 14:36:16 "KASAN: slab-use-after-free Read in jfs_lazycommit": saved repro log into 1762698976.repro.log 2025/11/09 14:39:09 patched crashed: KASAN: slab-out-of-bounds Read in dtSplitPage [need repro = true] 2025/11/09 14:39:09 scheduled a reproduction of 'KASAN: slab-out-of-bounds Read in dtSplitPage' 2025/11/09 14:39:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 412, "corpus": 46530, "corpus [files]": 1195, "corpus [symbols]": 583, "cover overflows": 71275, "coverage": 308537, "distributor delayed": 49368, "distributor undelayed": 49367, "distributor violated": 96, "exec candidate": 81787, "exec collide": 8561, "exec fuzz": 16116, "exec gen": 841, "exec hints": 11041, "exec inject": 0, "exec minimize": 7688, "exec retries": 22, "exec seeds": 1113, "exec smash": 9033, "exec total [base]": 173437, "exec total [new]": 400262, "exec triage": 148204, "executor restarts 
[base]": 724, "executor restarts [new]": 1297, "fault jobs": 0, "fuzzer jobs": 6, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 4, "max signal": 313533, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4379, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47757, "no exec duration": 352121000000, "no exec requests": 1306, "pending": 16, "prog exec time": 469, "reproducing": 4, "rpc recv": 16611130684, "rpc sent": 4111955752, "signal": 302866, "smash jobs": 0, "triage jobs": 2, "vm output": 106877431, "vm restarts [base]": 70, "vm restarts [new]": 123 } 2025/11/09 14:40:07 runner 6 connected 2025/11/09 14:40:52 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:41:34 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:41:39 base crash: INFO: task hung in usbdev_open 2025/11/09 14:42:08 reproducing crash 'possible deadlock in ocfs2_try_remove_refcount_tree': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 14:42:08 repro finished 'possible deadlock in ocfs2_try_remove_refcount_tree', repro=true crepro=false desc='KASAN: slab-use-after-free Read in jfs_syncpt' hub=false from_dashboard=false 2025/11/09 14:42:08 found repro for "KASAN: slab-use-after-free Read in jfs_syncpt" (orig title: "possible deadlock in ocfs2_try_remove_refcount_tree", reliability: 1), took 74.81 minutes 2025/11/09 14:42:08 start reproducing 'possible deadlock in ocfs2_init_acl' 2025/11/09 14:42:08 "KASAN: slab-use-after-free Read in jfs_syncpt": saved crash log into 1762699328.crash.log 2025/11/09 14:42:08 "KASAN: slab-use-after-free Read in jfs_syncpt": saved repro log into 1762699328.repro.log 2025/11/09 14:43:22 attempt #0 to run "KASAN: slab-use-after-free Read in jfs_syncpt" on base: aborting due to context cancelation 2025/11/09 14:43:35 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:43:48 base crash: INFO: task hung in bdev_open 2025/11/09 14:44:13 runner 0 connected 2025/11/09 14:44:15 base crash: lost connection to test machine 2025/11/09 14:44:26 runner 8 connected 2025/11/09 14:44:37 runner 2 connected 2025/11/09 14:44:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 439, "corpus": 46541, "corpus [files]": 1197, "corpus [symbols]": 584, "cover overflows": 73316, "coverage": 308556, "distributor delayed": 49421, "distributor undelayed": 49421, "distributor violated": 96, "exec candidate": 81787, "exec collide": 9607, "exec fuzz": 18167, "exec gen": 961, "exec hints": 11527, "exec inject": 0, "exec minimize": 8043, "exec retries": 22, "exec seeds": 1143, "exec smash": 9254, "exec total [base]": 176527, "exec total [new]": 404682, "exec triage": 148310, "executor restarts [base]": 747, "executor restarts [new]": 1323, "fault jobs": 0, 
"fuzzer jobs": 6, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 2, "max signal": 313607, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4571, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47795, "no exec duration": 354149000000, "no exec requests": 1309, "pending": 15, "prog exec time": 515, "reproducing": 4, "rpc recv": 16879819656, "rpc sent": 4228412976, "signal": 302885, "smash jobs": 1, "triage jobs": 3, "vm output": 112012294, "vm restarts [base]": 72, "vm restarts [new]": 125 } 2025/11/09 14:45:06 runner 1 connected 2025/11/09 14:45:09 base crash: lost connection to test machine 2025/11/09 14:45:22 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/11/09 14:45:59 runner 2 connected 2025/11/09 14:46:11 runner 0 connected 2025/11/09 14:46:13 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:46:42 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:47:02 runner 7 connected 2025/11/09 14:47:16 repro finished 'possible deadlock in ext4_destroy_inline_data', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 14:47:16 failed repro for "possible deadlock in ext4_destroy_inline_data", err=%!s() 2025/11/09 14:47:16 start reproducing 'possible deadlock in ntfs_look_for_free_space' 2025/11/09 14:47:16 "possible deadlock in ext4_destroy_inline_data": saved crash log into 1762699636.crash.log 2025/11/09 14:47:16 "possible deadlock in ext4_destroy_inline_data": saved repro log into 1762699636.repro.log 2025/11/09 14:47:33 runner 6 connected 2025/11/09 14:48:50 base crash: WARNING in folio_memcg 2025/11/09 14:48:51 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:49:40 runner 7 connected 2025/11/09 14:49:41 runner 2 connected 2025/11/09 14:49:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 444, "corpus": 46553, "corpus [files]": 1199, "corpus [symbols]": 586, "cover overflows": 74606, "coverage": 308576, "distributor delayed": 49455, "distributor undelayed": 49452, "distributor violated": 96, "exec candidate": 81787, "exec collide": 10258, "exec fuzz": 19436, "exec gen": 1023, "exec hints": 11835, "exec inject": 0, "exec minimize": 8534, "exec retries": 23, "exec seeds": 1173, "exec smash": 9504, "exec total [base]": 179456, "exec total [new]": 407801, "exec triage": 148367, "executor restarts [base]": 782, "executor restarts [new]": 1351, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 6, "max signal": 313650, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4840, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47820, "no exec duration": 354166000000, "no exec requests": 1310, "pending": 14, "prog exec time": 661, "reproducing": 4, "rpc recv": 17241696828, "rpc sent": 4345190064, "signal": 302910, "smash jobs": 4, "triage jobs": 7, "vm output": 115313215, "vm restarts [base]": 76, "vm restarts [new]": 128 } 2025/11/09 14:49:57 base crash: WARNING in xfrm_state_fini 2025/11/09 14:50:47 runner 1 connected 2025/11/09 14:51:28 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:51:35 base crash: lost connection to test machine 2025/11/09 14:51:52 base crash: 
WARNING in folio_memcg 2025/11/09 14:52:26 runner 6 connected 2025/11/09 14:52:31 runner 2 connected 2025/11/09 14:52:42 runner 1 connected 2025/11/09 14:53:26 repro finished 'possible deadlock in ocfs2_init_acl', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 14:53:26 failed repro for "possible deadlock in ocfs2_init_acl", err=%!s() 2025/11/09 14:53:26 start reproducing 'possible deadlock in mark_as_free_ex' 2025/11/09 14:53:26 "possible deadlock in ocfs2_init_acl": saved crash log into 1762700006.crash.log 2025/11/09 14:53:26 "possible deadlock in ocfs2_init_acl": saved repro log into 1762700006.repro.log 2025/11/09 14:54:42 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:54:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 451, "corpus": 46573, "corpus [files]": 1199, "corpus [symbols]": 586, "cover overflows": 75840, "coverage": 308603, "distributor delayed": 49501, "distributor undelayed": 49499, "distributor violated": 96, "exec candidate": 81787, "exec collide": 10902, "exec fuzz": 20581, "exec gen": 1082, "exec hints": 12719, "exec inject": 0, "exec minimize": 9107, "exec retries": 23, "exec seeds": 1225, "exec smash": 9959, "exec total [base]": 182250, "exec total [new]": 411695, "exec triage": 148448, "executor restarts [base]": 807, "executor restarts [new]": 1378, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 6, "max signal": 313699, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5232, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47851, "no exec duration": 354166000000, "no exec requests": 1310, "pending": 13, "prog exec time": 549, "reproducing": 4, "rpc recv": 17627040864, "rpc sent": 4480158520, "signal": 302933, "smash jobs": 3, "triage jobs": 6, "vm output": 120083988, "vm restarts [base]": 79, "vm restarts [new]": 129 } 2025/11/09 14:55:31 runner 7 connected 2025/11/09 14:56:26 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:57:03 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 14:57:11 patched crashed: WARNING in folio_memcg [need repro = false] 2025/11/09 14:57:15 runner 7 connected 2025/11/09 14:57:19 base crash: WARNING in folio_memcg 2025/11/09 14:57:54 runner 6 connected 2025/11/09 14:58:00 runner 8 connected 2025/11/09 14:58:09 runner 0 connected 2025/11/09 14:58:28 repro finished 'possible deadlock in ntfs_look_for_free_space', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 14:58:28 failed repro for "possible deadlock in ntfs_look_for_free_space", err=%!s() 2025/11/09 14:58:28 start reproducing 'WARNING in __dev_set_promiscuity' 2025/11/09 14:58:28 "possible deadlock in ntfs_look_for_free_space": saved crash log into 1762700308.crash.log 2025/11/09 14:58:28 "possible deadlock in ntfs_look_for_free_space": saved repro log into 1762700308.repro.log 2025/11/09 14:59:25 base crash: WARNING in folio_memcg 2025/11/09 14:59:40 repro finished 'INFO: task hung in bdev_open', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 14:59:40 failed repro for "INFO: task hung in bdev_open", err=%!s() 2025/11/09 14:59:40 start reproducing 'possible deadlock in ocfs2_xattr_set' 2025/11/09 14:59:40 "INFO: task hung in bdev_open": saved crash log into 1762700380.crash.log 
2025/11/09 14:59:40 "INFO: task hung in bdev_open": saved repro log into 1762700380.repro.log 2025/11/09 14:59:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 458, "corpus": 46587, "corpus [files]": 1199, "corpus [symbols]": 586, "cover overflows": 77270, "coverage": 308639, "distributor delayed": 49536, "distributor undelayed": 49536, "distributor violated": 96, "exec candidate": 81787, "exec collide": 11669, "exec fuzz": 21993, "exec gen": 1146, "exec hints": 13523, "exec inject": 0, "exec minimize": 9471, "exec retries": 23, "exec seeds": 1258, "exec smash": 10262, "exec total [base]": 186354, "exec total [new]": 415516, "exec triage": 148520, "executor restarts [base]": 827, "executor restarts [new]": 1406, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 4, "max signal": 313750, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5442, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47877, "no exec duration": 358390000000, "no exec requests": 1320, "pending": 11, "prog exec time": 436, "reproducing": 4, "rpc recv": 18018162672, "rpc sent": 4610827608, "signal": 302961, "smash jobs": 3, "triage jobs": 4, "vm output": 125514507, "vm restarts [base]": 80, "vm restarts [new]": 133 } 2025/11/09 15:00:14 runner 0 connected 2025/11/09 15:00:35 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:01:21 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/11/09 15:01:39 base crash: WARNING in folio_memcg 2025/11/09 15:01:43 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:02:09 runner 8 connected 2025/11/09 15:02:12 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:02:36 runner 1 connected 2025/11/09 15:02:49 base crash: lost connection to test machine 2025/11/09 15:03:02 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:03:08 base crash: lost connection to test machine 2025/11/09 15:03:30 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:03:39 runner 2 connected 2025/11/09 15:03:58 runner 1 connected 2025/11/09 15:04:19 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: 
fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:04:27 base crash: lost connection to test machine 2025/11/09 15:04:36 repro finished 'possible deadlock in mark_as_free_ex', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 15:04:36 reproduction of "kernel BUG in ocfs2_set_new_buffer_uptodate" aborted: it's no longer needed 2025/11/09 15:04:36 failed repro for "possible deadlock in mark_as_free_ex", err=%!s() 2025/11/09 15:04:36 start reproducing 'general protection fault in lmLogSync' 2025/11/09 15:04:36 "possible deadlock in mark_as_free_ex": saved crash log into 1762700676.crash.log 2025/11/09 15:04:36 "possible deadlock in mark_as_free_ex": saved repro log into 1762700676.repro.log 2025/11/09 15:04:42 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 467, "corpus": 46598, "corpus [files]": 1200, "corpus [symbols]": 586, "cover overflows": 78869, "coverage": 308650, "distributor delayed": 49573, "distributor undelayed": 49573, "distributor violated": 96, "exec candidate": 81787, "exec collide": 12468, "exec fuzz": 23564, "exec gen": 1248, "exec hints": 13740, "exec inject": 0, "exec minimize": 9938, "exec retries": 23, "exec seeds": 1290, "exec smash": 10512, "exec total [base]": 189625, "exec total [new]": 419027, "exec triage": 148593, "executor restarts [base]": 855, "executor restarts [new]": 1442, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 3, "hints jobs": 1, "max signal": 313830, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5672, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47904, "no exec duration": 361390000000, "no exec requests": 1323, "pending": 9, "prog exec time": 668, "reproducing": 4, "rpc recv": 18375306664, "rpc sent": 4698592504, "signal": 302972, "smash jobs": 2, "triage jobs": 7, "vm output": 129767567, "vm restarts [base]": 84, "vm restarts [new]": 134 } 2025/11/09 15:04:51 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:05:09 patched crashed: lost connection to test machine [need repro = false] 2025/11/09 15:05:18 runner 2 connected 2025/11/09 15:05:53 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:05:54 repro finished 'WARNING in xfrm6_tunnel_net_exit', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 15:05:54 failed repro for "WARNING in xfrm6_tunnel_net_exit", err=%!s() 2025/11/09 15:05:54 start reproducing 'kernel BUG in jfs_evict_inode' 2025/11/09 15:05:54 "WARNING in xfrm6_tunnel_net_exit": saved crash log into 1762700754.crash.log 2025/11/09 15:05:54 "WARNING in xfrm6_tunnel_net_exit": saved repro log into 1762700754.repro.log 2025/11/09 15:05:59 runner 8 connected 2025/11/09 15:06:07 base crash: lost connection to test machine 2025/11/09 15:06:20 base crash: WARNING in folio_memcg 2025/11/09 15:06:21 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start 
scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:06:56 runner 2 connected 2025/11/09 15:07:10 runner 0 connected 2025/11/09 15:07:34 base crash: INFO: task hung in __iterate_supers 2025/11/09 15:08:04 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:08:33 runner 1 connected 2025/11/09 15:08:47 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:09:05 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/11/09 15:09:08 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/11/09 15:09:22 reproducing crash 'possible deadlock in ocfs2_xattr_set': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/suballoc.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/11/09 15:09:38 bug reporting terminated 2025/11/09 15:09:38 status reporting terminated 2025/11/09 15:09:38 new: rpc server terminated 2025/11/09 15:09:38 base: rpc server terminated 2025/11/09 15:09:38 repro finished 'possible deadlock in ocfs2_xattr_set', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 15:09:38 base: pool terminated 2025/11/09 15:09:38 base: kernel context loop terminated 2025/11/09 15:10:08 repro finished 'kernel BUG in jfs_evict_inode', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 15:10:09 repro finished 'general protection fault in lmLogSync', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 15:10:11 repro finished 'WARNING in __dev_set_promiscuity', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/11/09 15:10:11 repro loop terminated 2025/11/09 15:10:11 new: pool terminated 2025/11/09 15:10:11 new: kernel context loop terminated 2025/11/09 15:10:11 diff fuzzing terminated 2025/11/09 15:10:11 fuzzing is finished 2025/11/09 15:10:11 status at the end:
Title On-Base On-Patched
BUG: sleeping function called from invalid context in hook_sb_delete 1 crashes 3 crashes
INFO: task hung in __iterate_supers 2 crashes 2 crashes
INFO: task hung in bdev_open 1 crashes 1 crashes
INFO: task hung in rfkill_global_led_trigger_worker 1 crashes
INFO: task hung in usbdev_open 1 crashes
KASAN: out-of-bounds Read in ext4_xattr_set_entry 1 crashes[reproduced]
KASAN: slab-out-of-bounds Read in dtSplitPage 1 crashes
KASAN: slab-use-after-free Read in jfs_lazycommit 1 crashes
KASAN: slab-use-after-free Read in jfs_syncpt [reproduced]
WARNING in __dev_set_promiscuity 1 crashes
WARNING in folio_memcg 20 crashes 35 crashes
WARNING in io_ring_exit_work 3 crashes 2 crashes
WARNING in rate_control_rate_init 1 crashes
WARNING in xfrm6_tunnel_net_exit 1 crashes 6 crashes
WARNING in xfrm_state_fini 4 crashes 11 crashes
general protection fault in lmLogSync 1 crashes
general protection fault in pcl818_ai_cancel 1 crashes 5 crashes
kernel BUG in jfs_evict_inode 1 crashes
kernel BUG in ocfs2_set_new_buffer_uptodate 1 crashes 1 crashes
kernel BUG in txUnlock 2 crashes 4 crashes
lost connection to test machine 37 crashes 36 crashes
possible deadlock in attr_data_get_block 1 crashes
possible deadlock in dqget 1 crashes
possible deadlock in ext4_destroy_inline_data 2 crashes
possible deadlock in ext4_writepages 1 crashes
possible deadlock in mark_as_free_ex 1 crashes
possible deadlock in ntfs_look_for_free_space 1 crashes
possible deadlock in ocfs2_acquire_dquot 1 crashes
possible deadlock in ocfs2_init_acl 1 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 1 crashes 1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 4 crashes 7 crashes
possible deadlock in ocfs2_xattr_set 1 crashes
possible deadlock in run_unpack_ex 1 crashes 2 crashes
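
The STAT blobs and the per-crash entries above are regular enough to post-process mechanically. Below is a minimal sketch (Python), assuming the log is saved as fuzz.log with one entry per line as syzkaller writes it; the file name and the STAT fields picked out ("coverage", "corpus", "pending") are illustrative choices, not anything the log format prescribes. It prints the coverage/corpus trend over time and re-derives a base-vs-patched crash tally along the lines of the "status at the end" table.

    #!/usr/bin/env python3
    # Post-process a syzkaller diff-fuzzing log.
    # Assumption: the log above was saved as "fuzz.log" (illustrative name).
    import json
    import re
    from collections import Counter

    text = open("fuzz.log", encoding="utf-8", errors="replace").read()

    # 1. Coverage/corpus trend: each "STAT { ... }" entry is a flat JSON object.
    stat_re = re.compile(r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) STAT (\{.*?\})", re.S)
    for ts, blob in stat_re.findall(text):
        stat = json.loads(blob)
        print(ts, "coverage", stat.get("coverage"),
              "corpus", stat.get("corpus"),
              "pending repros", stat.get("pending"))

    # 2. Crash tally per kernel: count "base crash:" and "patched crashed:" entries,
    #    stripping the trailing "[need repro = ...]" marker from patched crashes.
    base, patched = Counter(), Counter()
    crash_re = re.compile(r"(base crash|patched crashed): (.+?)(?: \[need repro = (?:true|false)\])?$")
    for line in text.splitlines():
        m = crash_re.search(line)
        if m:
            kind, title = m.groups()
            (base if kind == "base crash" else patched)[title] += 1

    for title in sorted(set(base) | set(patched)):
        print(f"{title}: base={base[title]} patched={patched[title]}")

For summary rows that show only a single count, the table alone does not say whether the crash was on the base or the patched kernel; a per-line tally like the one above recovers that attribution from the log stream itself.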