2025/09/09 07:00:28 extracted 327329 text symbol hashes for base and 327329 for patched
2025/09/09 07:00:28 binaries are different, continuing fuzzing
2025/09/09 07:00:28 adding modified_functions to focus areas: ["__nr_hugepages_store_common" "__unmap_hugepage_range" "__update_and_free_hugetlb_folio" "__vma_reservation_common" "adjust_pool_surplus" "alloc_and_dissolve_hugetlb_folio" "alloc_hugetlb_folio" "alloc_hugetlb_folio_nodemask" "clear_vma_resv_huge_pages" "copy_hugetlb_page_range" "demote_store" "dissolve_free_hugetlb_folio" "folio_isolate_hugetlb" "folio_putback_hugetlb" "free_huge_folio" "hugetlb_acct_memory" "hugetlb_add_to_page_cache" "hugetlb_fault" "hugetlb_mfill_atomic_pte" "hugetlb_reserve_pages" "hugetlb_unreserve_pages" "hugetlb_unshare_all_pmds" "hugetlb_vm_op_close" "hugetlb_vm_op_open" "hugetlb_wp" "move_hugetlb_state" "prep_and_add_allocated_folios" "region_del" "remove_pool_hugetlb_folio" "restore_reserve_on_error" "update_and_free_hugetlb_folio" "update_and_free_pages_bulk"]
2025/09/09 07:00:28 adding directly modified files to focus areas: ["mm/hugetlb.c" "tools/testing/selftests/mm/run_vmtests.sh" "tools/testing/selftests/mm/uffd-stress.c"]
2025/09/09 07:00:29 downloaded the corpus from https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db
2025/09/09 07:01:27 runner 1 connected
2025/09/09 07:01:27 runner 6 connected
2025/09/09 07:01:27 runner 0 connected
2025/09/09 07:01:27 runner 2 connected
2025/09/09 07:01:27 runner 3 connected
2025/09/09 07:01:27 runner 1 connected
2025/09/09 07:01:27 runner 4 connected
2025/09/09 07:01:27 runner 8 connected
2025/09/09 07:01:27 runner 2 connected
2025/09/09 07:01:27 runner 3 connected
2025/09/09 07:01:27 runner 9 connected
2025/09/09 07:01:27 runner 0 connected
2025/09/09 07:01:27 runner 7 connected
2025/09/09 07:01:28 runner 5 connected
2025/09/09 07:01:33 initializing coverage information...
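The first entries come from the patch-focus step: the fuzzer hashes every text symbol in the base and patched kernel binaries and treats symbols whose hashes differ as modified functions. A minimal sketch of that idea (illustrative only, operating on a toy name-to-bytes map rather than real ELF sections, which is what the actual tool parses):

```python
import hashlib

def modified_functions(base, patched):
    """Return names of symbols whose machine-code bytes differ between builds."""
    base_h = {n: hashlib.sha256(code).hexdigest() for n, code in base.items()}
    patched_h = {n: hashlib.sha256(code).hexdigest() for n, code in patched.items()}
    # A symbol counts as modified when it exists in both builds with a different hash.
    return sorted(n for n in base_h if n in patched_h and base_h[n] != patched_h[n])

# Toy symbol tables: only region_del's bytes changed between base and patched.
base = {"hugetlb_fault": b"\x55\x48\x89\xe5", "region_del": b"\x90\x90"}
patched = {"hugetlb_fault": b"\x55\x48\x89\xe5", "region_del": b"\x90\xc3"}
print(modified_functions(base, patched))  # → ['region_del']
```

When the two hash sets are identical the patch touched no text symbols and fuzzing could stop early; here the log reports "binaries are different, continuing fuzzing".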
2025/09/09 07:01:33 executor cover filter: 0 PCs
2025/09/09 07:01:37 discovered 7699 source files, 338732 symbols
2025/09/09 07:01:37 coverage filter: __nr_hugepages_store_common: [__nr_hugepages_store_common]
2025/09/09 07:01:37 coverage filter: __unmap_hugepage_range: [__unmap_hugepage_range]
2025/09/09 07:01:37 coverage filter: __update_and_free_hugetlb_folio: [__update_and_free_hugetlb_folio]
2025/09/09 07:01:37 coverage filter: __vma_reservation_common: [__vma_reservation_common]
2025/09/09 07:01:37 coverage filter: adjust_pool_surplus: [adjust_pool_surplus]
2025/09/09 07:01:37 coverage filter: alloc_and_dissolve_hugetlb_folio: [alloc_and_dissolve_hugetlb_folio]
2025/09/09 07:01:37 coverage filter: alloc_hugetlb_folio: [alloc_hugetlb_folio alloc_hugetlb_folio_nodemask alloc_hugetlb_folio_reserve alloc_hugetlb_folio_vma]
2025/09/09 07:01:37 coverage filter: alloc_hugetlb_folio_nodemask: []
2025/09/09 07:01:37 coverage filter: clear_vma_resv_huge_pages: [clear_vma_resv_huge_pages]
2025/09/09 07:01:37 coverage filter: copy_hugetlb_page_range: [copy_hugetlb_page_range]
2025/09/09 07:01:37 coverage filter: demote_store: [demote_store]
2025/09/09 07:01:37 coverage filter: dissolve_free_hugetlb_folio: [dissolve_free_hugetlb_folio dissolve_free_hugetlb_folios]
2025/09/09 07:01:37 coverage filter: folio_isolate_hugetlb: [folio_isolate_hugetlb]
2025/09/09 07:01:37 coverage filter: folio_putback_hugetlb: [folio_putback_hugetlb]
2025/09/09 07:01:37 coverage filter: free_huge_folio: [free_huge_folio]
2025/09/09 07:01:37 coverage filter: hugetlb_acct_memory: [hugetlb_acct_memory]
2025/09/09 07:01:37 coverage filter: hugetlb_add_to_page_cache: [hugetlb_add_to_page_cache]
2025/09/09 07:01:37 coverage filter: hugetlb_fault: [hugetlb_fault hugetlb_fault_mutex_hash]
2025/09/09 07:01:37 coverage filter: hugetlb_mfill_atomic_pte: [hugetlb_mfill_atomic_pte]
2025/09/09 07:01:37 coverage filter: hugetlb_reserve_pages: [hugetlb_reserve_pages]
2025/09/09 07:01:37 coverage filter: hugetlb_unreserve_pages: [hugetlb_unreserve_pages]
2025/09/09 07:01:37 coverage filter: hugetlb_unshare_all_pmds: [hugetlb_unshare_all_pmds]
2025/09/09 07:01:37 coverage filter: hugetlb_vm_op_close: [hugetlb_vm_op_close]
2025/09/09 07:01:37 coverage filter: hugetlb_vm_op_open: [hugetlb_vm_op_open]
2025/09/09 07:01:37 coverage filter: hugetlb_wp: [hugetlb_wp]
2025/09/09 07:01:37 coverage filter: move_hugetlb_state: [move_hugetlb_state]
2025/09/09 07:01:37 coverage filter: prep_and_add_allocated_folios: [prep_and_add_allocated_folios]
2025/09/09 07:01:37 coverage filter: region_del: [devlink_nl_region_del_doit nvdimm_region_delete region_del]
2025/09/09 07:01:37 coverage filter: remove_pool_hugetlb_folio: [remove_pool_hugetlb_folio]
2025/09/09 07:01:37 coverage filter: restore_reserve_on_error: [restore_reserve_on_error]
2025/09/09 07:01:37 coverage filter: update_and_free_hugetlb_folio: [update_and_free_hugetlb_folio]
2025/09/09 07:01:37 coverage filter: update_and_free_pages_bulk: [update_and_free_pages_bulk]
2025/09/09 07:01:37 coverage filter: mm/hugetlb.c: [mm/hugetlb.c mm/hugetlb_cgroup.c mm/hugetlb_cma.c]
2025/09/09 07:01:37 coverage filter: tools/testing/selftests/mm/run_vmtests.sh: []
2025/09/09 07:01:37 coverage filter: tools/testing/selftests/mm/uffd-stress.c: []
2025/09/09 07:01:37 area "symbols": 2257 PCs in the cover filter
2025/09/09 07:01:37 area "files": 4453 PCs in the cover filter
2025/09/09 07:01:37 area "": 0 PCs in the cover filter
2025/09/09 07:01:37 executor cover filter: 0 PCs
2025/09/09 07:01:38 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
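The coverage-filter lines above show each focus name being expanded against the 338732 discovered symbols, and a short name can match several symbols (e.g. "region_del" pulls in devlink_nl_region_del_doit and nvdimm_region_delete as well). The expansion behaves like a containment match; a sketch under that assumption (not syzkaller's exact matching rule, and `expand_cover_filter` is a hypothetical helper name):

```python
def expand_cover_filter(focus, discovered):
    """Map each focus name to every discovered symbol whose name contains it."""
    return {name: sorted(sym for sym in discovered if name in sym) for name in focus}

# A small slice of the discovered-symbol universe from the log.
discovered = ["region_del", "devlink_nl_region_del_doit", "nvdimm_region_delete",
              "hugetlb_fault", "hugetlb_fault_mutex_hash"]
print(expand_cover_filter(["region_del", "hugetlb_fault"], discovered))
```

An empty expansion (like alloc_hugetlb_folio_nodemask or the selftest .sh/.c files) simply means no instrumented symbol remained to attribute to that focus entry, so it contributes no PCs of its own to the filter.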
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue 
[openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue] ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue] 
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET : fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8054
2025/09/09 07:01:38 base: machine check complete
2025/09/09 07:01:41 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : 
fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : 
fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$PERF_EVENT_IOC_DISABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ENABLE : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_ID : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_MODIFY_ATTRIBUTES : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PAUSE_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_PERIOD : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_QUERY_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_REFRESH : fd_perf [perf_event_open 
perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_RESET : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_BPF : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_FILTER : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$PERF_EVENT_IOC_SET_OUTPUT : fd_perf [perf_event_open perf_event_open$cgroup] ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0] ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw] ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw] ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device] ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device] ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...] 
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_uexit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3838/8054
2025/09/09 07:01:41 new: machine check complete
2025/09/09 07:01:42 new: adding 79415 seeds
2025/09/09 07:04:17 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 07:05:12 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 07:05:13 runner 3 connected
2025/09/09 07:05:25 base crash: lost connection to test machine
2025/09/09 07:05:30 STAT { "buffer too small": 0, "candidate triage jobs": 58, "candidates": 74096, "comps overflows": 0, "corpus": 5226, "corpus [files]": 90, "corpus [symbols]": 59, "cover overflows": 3341, "coverage": 175338, "distributor delayed": 4475, "distributor undelayed": 4473, "distributor violated": 0, "exec candidate": 5319, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 11045, "exec total [new]": 23606, "exec triage": 16561, "executor restarts [base]": 59, "executor restarts [new]": 126, "fault jobs": 0, "fuzzer jobs": 58, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 177002, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 5319, "no exec duration": 49042000000, "no exec requests": 348, "pending": 0, "prog exec time": 262, "reproducing": 0, "rpc recv": 1315948048, "rpc sent": 121304392, "signal": 172811, "smash jobs": 0, "triage jobs": 0, "vm output": 2897168, "vm restarts [base]": 4, "vm restarts [new]": 11 }
2025/09/09 07:05:46 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/09/09 07:05:46 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 07:05:57 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/09/09 07:05:57 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 07:05:58 base crash: lost connection to test machine
2025/09/09 07:06:01 runner 9 connected
2025/09/09 07:06:09 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/09/09 07:06:09 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 07:06:14 runner 1 connected
2025/09/09 07:06:22 base crash "possible deadlock in ocfs2_try_remove_refcount_tree" is already known
2025/09/09 07:06:22 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 07:06:37 runner 0 connected
2025/09/09 07:06:45 runner 8 connected
2025/09/09 07:06:47 runner 2 connected
2025/09/09 07:06:58 runner 7 connected
2025/09/09 07:07:19 runner 5 connected
2025/09/09 07:07:22 base crash: KASAN: slab-use-after-free Read in xfrm_state_find
2025/09/09 07:07:25 base crash "possible deadlock in ocfs2_init_acl" is already known
2025/09/09 07:07:25 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 07:08:12 base crash "possible deadlock in dqget" is already known
2025/09/09 07:08:12 patched crashed: possible deadlock in dqget [need repro = false]
2025/09/09 07:08:19 runner 0 connected
2025/09/09 07:08:22 runner 6 connected
2025/09/09 07:08:50 base crash: KASAN: slab-use-after-free Read in __xfrm_state_lookup
2025/09/09 07:09:01 runner 2 connected
2025/09/09 07:09:07 base crash "possible deadlock in ocfs2_init_acl" is already known
2025/09/09 07:09:07 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 07:09:48 runner 2 connected
2025/09/09 07:09:54 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/09 07:09:56 runner 3 connected
2025/09/09 07:10:30 STAT { "buffer too small": 0, "candidate triage jobs": 69, "candidates": 69132, "comps overflows": 0, "corpus": 10130, "corpus [files]": 141, "corpus [symbols]": 85, "cover overflows": 6555, "coverage": 208805, "distributor delayed": 10535, "distributor undelayed": 10535, "distributor violated": 4, "exec candidate": 10283, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 20905, "exec total [new]": 46044, "exec triage": 31873, "executor restarts [base]": 90, "executor restarts [new]": 206, "fault jobs": 0, "fuzzer jobs": 69, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 210451, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 10283, "no exec duration": 49479000000, "no exec requests": 357, "pending": 0, "prog exec time": 370, "reproducing": 0, "rpc recv": 2415511800, "rpc sent": 239289096, "signal": 205160, "smash jobs": 0, "triage jobs": 0, "vm output": 5427933, "vm restarts [base]": 8, "vm restarts [new]": 19 }
2025/09/09 07:10:31 base crash "WARNING in xfrm_state_fini" is already known
2025/09/09 07:10:31 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 07:10:36 patched crashed: INFO: task hung in corrupted [need repro = true]
2025/09/09 07:10:36 scheduled a reproduction of 'INFO: task hung in corrupted'
2025/09/09 07:10:44 runner 1 connected
2025/09/09 07:11:02 base crash "possible deadlock in ocfs2_init_acl" is already known
2025/09/09 07:11:02 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 07:11:27 runner 2 connected
2025/09/09 07:11:33 runner 7 connected
2025/09/09 07:11:51 runner 9 connected
2025/09/09 07:11:58 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi
2025/09/09 07:12:03 base crash "WARNING in xfrm_state_fini" is already known
2025/09/09 07:12:03 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 07:12:49 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 07:12:54 runner 3 connected
2025/09/09 07:12:55 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/09/09 07:12:58 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2025/09/09 07:12:59 runner 8 connected
2025/09/09 07:13:00 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 07:13:28 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/09/09 07:13:38 runner 2 connected
2025/09/09 07:13:44 runner 4 connected
2025/09/09 07:13:46 patched crashed: KASAN: slab-use-after-free Write in __xfrm_state_delete [need repro = true]
2025/09/09 07:13:46 scheduled a reproduction of 'KASAN: slab-use-after-free Write in __xfrm_state_delete'
2025/09/09 07:13:49 runner 0 connected
2025/09/09 07:13:49 runner 0 connected
2025/09/09 07:14:18 runner 6 connected
2025/09/09 07:14:35 runner 7 connected
2025/09/09 07:14:55 base crash: KASAN: slab-use-after-free Write in __xfrm_state_delete
2025/09/09 07:15:02 base crash "kernel BUG in jfs_evict_inode" is already known
2025/09/09 07:15:02 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/09/09 07:15:30 STAT { "buffer too small": 0, "candidate triage jobs": 50, "candidates": 63516, "comps overflows": 0, "corpus": 15713, "corpus [files]": 194, "corpus [symbols]": 115, "cover overflows": 9928, "coverage": 233775, "distributor delayed": 15904, "distributor undelayed": 15904, "distributor violated": 5, "exec candidate": 15899, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 7, "exec seeds": 0, "exec smash": 0, "exec total [base]": 30736, "exec total [new]": 71569, "exec triage": 49059, "executor restarts [base]": 126, "executor restarts [new]": 284, "fault jobs": 0, "fuzzer jobs": 50, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 235571, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 15899, "no exec duration": 49941000000, "no exec requests": 360, "pending": 2, "prog exec time": 257, "reproducing": 0, "rpc recv": 3596312308, "rpc sent": 369573816, "signal": 229756, "smash jobs": 0, "triage jobs": 0, "vm output": 8698052, "vm restarts [base]": 11, "vm restarts [new]": 28 }
2025/09/09 07:15:52 runner 0 connected
2025/09/09 07:15:58 runner 6 connected
2025/09/09 07:16:18 base crash: general protection fault in pcl818_ai_cancel
2025/09/09 07:16:29 base crash: lost connection to test machine
2025/09/09 07:16:29 base crash: general protection fault in pcl818_ai_cancel
2025/09/09 07:16:47 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/09/09 07:16:53 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/09/09 07:17:14 runner 2 connected
2025/09/09 07:17:16 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/09/09 07:17:25 runner 0 connected
2025/09/09 07:17:26 runner 1 connected
2025/09/09 07:17:35 base crash "kernel BUG in txUnlock" is already known
2025/09/09 07:17:35 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/09 07:17:35 base crash "kernel BUG in txUnlock" is already known
2025/09/09 07:17:35 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/09 07:17:36 base crash "kernel BUG in txUnlock" is already known
2025/09/09 07:17:36 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/09 07:17:36 base crash "kernel BUG in txUnlock" is already known
2025/09/09 07:17:36 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/09 07:17:37 runner 0 connected
2025/09/09 07:17:41 runner 5 connected
2025/09/09 07:17:47 base crash: kernel BUG in txUnlock
2025/09/09 07:17:57 base crash: kernel BUG in txUnlock
2025/09/09 07:18:05 runner 7 connected
2025/09/09 07:18:21 base crash: kernel BUG in txUnlock
2025/09/09 07:18:23 runner 6 connected
2025/09/09 07:18:24 runner 9 connected
2025/09/09 07:18:25 runner 4 connected
2025/09/09 07:18:31 runner 8 connected
2025/09/09 07:18:36 runner 3 connected
2025/09/09 07:18:45 runner 2 connected
2025/09/09 07:18:57 base crash "kernel BUG in jfs_evict_inode" is already known
2025/09/09 07:18:57 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/09/09 07:19:01 base crash "kernel BUG in jfs_evict_inode" is already known
2025/09/09 07:19:01 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2025/09/09 07:19:09 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/09/09 07:19:10 runner 0 connected
2025/09/09 07:19:45 runner 2 connected
2025/09/09 07:19:50 runner 9 connected
2025/09/09 07:20:06 runner 1 connected
2025/09/09 07:20:30 STAT { "buffer too small": 0, "candidate triage jobs": 54, "candidates": 58113, "comps overflows": 0, "corpus": 21059, "corpus [files]": 225, "corpus [symbols]": 137, "cover overflows": 13291, "coverage": 253048, "distributor delayed": 21409, "distributor undelayed": 21409, "distributor violated": 83, "exec candidate": 21302, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 38660, "exec total [new]": 97878, "exec triage": 65635, "executor restarts [base]": 157, "executor restarts [new]": 356, "fault jobs": 0, "fuzzer jobs": 54, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 254766, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 21302, "no exec duration": 55245000000, "no exec requests": 370, "pending": 2, "prog exec time": 306, "reproducing": 0, "rpc recv": 4848516984, "rpc sent": 515411376, "signal": 248824, "smash jobs": 0, "triage jobs": 0, "vm output": 11939567, "vm restarts [base]": 18, "vm restarts [new]": 39 }
2025/09/09 07:20:47 base crash "possible deadlock in run_unpack_ex" is already known
2025/09/09 07:20:47 patched crashed: possible deadlock in run_unpack_ex [need repro = false]
2025/09/09 07:20:57 base crash "possible deadlock in run_unpack_ex" is already known
2025/09/09 07:20:57 patched crashed: possible deadlock in run_unpack_ex [need repro = false]
2025/09/09 07:21:43 runner 4 connected
2025/09/09 07:21:54 runner 3 connected
2025/09/09 07:23:36 base crash "unregister_netdevice: waiting for DEV to become free" is already known
2025/09/09 07:23:36 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false]
2025/09/09 07:23:58 base crash "possible deadlock in ocfs2_xattr_set" is already known
2025/09/09 07:23:58 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false]
2025/09/09 07:24:27 runner 9 connected
2025/09/09 07:24:37 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/09/09 07:24:48 runner 5 connected
2025/09/09 07:25:30 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 52089, "comps overflows": 0, "corpus": 27011, "corpus [files]": 262, "corpus [symbols]": 157, "cover overflows": 17077, "coverage": 269870, "distributor delayed": 26429, "distributor undelayed": 26429, "distributor violated": 83, "exec candidate": 27326, "exec collide": 0,
"exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 53028, "exec total [new]": 129531, "exec triage": 84231, "executor restarts [base]": 176, "executor restarts [new]": 412, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 271767, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 27326, "no exec duration": 55325000000, "no exec requests": 372, "pending": 2, "prog exec time": 211, "reproducing": 0, "rpc recv": 5860663020, "rpc sent": 694300256, "signal": 265351, "smash jobs": 0, "triage jobs": 0, "vm output": 15253593, "vm restarts [base]": 18, "vm restarts [new]": 43 } 2025/09/09 07:25:34 runner 2 connected 2025/09/09 07:26:06 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:27:02 runner 2 connected 2025/09/09 07:27:46 base crash: unregister_netdevice: waiting for DEV to become free 2025/09/09 07:27:54 patched crashed: KASAN: slab-use-after-free Write in __xfrm_state_delete [need repro = false] 2025/09/09 07:28:08 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:28:35 runner 0 connected 2025/09/09 07:28:43 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/09/09 07:28:45 runner 6 connected 2025/09/09 07:28:57 runner 0 connected 2025/09/09 07:29:33 runner 4 connected 2025/09/09 07:30:09 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/09/09 07:30:21 patched crashed: possible deadlock in attr_data_get_block [need repro = true] 2025/09/09 07:30:21 scheduled a reproduction of 'possible deadlock in attr_data_get_block' 2025/09/09 07:30:30 STAT { "buffer too 
small": 0, "candidate triage jobs": 49, "candidates": 46349, "comps overflows": 0, "corpus": 32615, "corpus [files]": 292, "corpus [symbols]": 173, "cover overflows": 21201, "coverage": 283808, "distributor delayed": 31840, "distributor undelayed": 31840, "distributor violated": 84, "exec candidate": 33066, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 66841, "exec total [new]": 163422, "exec triage": 102302, "executor restarts [base]": 200, "executor restarts [new]": 445, "fault jobs": 0, "fuzzer jobs": 49, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 0, "max signal": 286186, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 33066, "no exec duration": 55578000000, "no exec requests": 378, "pending": 3, "prog exec time": 197, "reproducing": 0, "rpc recv": 6839337276, "rpc sent": 879936728, "signal": 278749, "smash jobs": 0, "triage jobs": 0, "vm output": 17809982, "vm restarts [base]": 19, "vm restarts [new]": 48 } 2025/09/09 07:30:52 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:31:05 runner 9 connected 2025/09/09 07:31:05 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:31:10 runner 8 connected 2025/09/09 07:31:43 base crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/09/09 07:31:43 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/09/09 07:31:49 runner 7 connected 2025/09/09 07:31:55 runner 5 connected 2025/09/09 07:32:26 base crash "WARNING in xfrm_state_fini" is already known 2025/09/09 07:32:26 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 07:32:31 runner 6 connected 2025/09/09 
07:32:49 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/09/09 07:32:54 base crash "WARNING in xfrm_state_fini" is already known 2025/09/09 07:32:54 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 07:33:15 runner 4 connected 2025/09/09 07:33:35 base crash "possible deadlock in ocfs2_init_acl" is already known 2025/09/09 07:33:35 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 07:33:46 runner 0 connected 2025/09/09 07:33:46 base crash "possible deadlock in ocfs2_init_acl" is already known 2025/09/09 07:33:46 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 07:33:50 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/09 07:33:51 runner 0 connected 2025/09/09 07:33:51 base crash: KASAN: slab-use-after-free Read in xfrm_alloc_spi 2025/09/09 07:34:11 base crash: WARNING in xfrm_state_fini 2025/09/09 07:34:25 runner 8 connected 2025/09/09 07:34:38 patched crashed: KASAN: slab-use-after-free Read in xfrm_state_find [need repro = false] 2025/09/09 07:34:40 runner 1 connected 2025/09/09 07:34:41 runner 2 connected 2025/09/09 07:34:43 runner 2 connected 2025/09/09 07:34:59 runner 3 connected 2025/09/09 07:35:30 STAT { "buffer too small": 0, "candidate triage jobs": 41, "candidates": 42033, "comps overflows": 0, "corpus": 36878, "corpus [files]": 316, "corpus [symbols]": 177, "cover overflows": 23690, "coverage": 294130, "distributor delayed": 36015, "distributor undelayed": 36014, "distributor violated": 85, "exec candidate": 37382, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 12, "exec seeds": 0, "exec smash": 0, "exec total [base]": 78002, "exec total [new]": 187294, "exec triage": 115423, "executor restarts [base]": 224, "executor restarts [new]": 528, "fault jobs": 0, "fuzzer jobs": 41, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max 
signal": 296514, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 37382, "no exec duration": 55646000000, "no exec requests": 379, "pending": 3, "prog exec time": 347, "reproducing": 0, "rpc recv": 7916999980, "rpc sent": 1030127816, "signal": 288912, "smash jobs": 0, "triage jobs": 0, "vm output": 21773333, "vm restarts [base]": 23, "vm restarts [new]": 57 } 2025/09/09 07:35:34 runner 1 connected 2025/09/09 07:37:06 base crash: kernel BUG in jfs_evict_inode 2025/09/09 07:38:03 runner 0 connected 2025/09/09 07:38:57 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 07:39:04 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/09/09 07:39:16 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false] 2025/09/09 07:39:29 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 07:39:35 base crash: WARNING in xfrm_state_fini 2025/09/09 07:39:46 runner 9 connected 2025/09/09 07:40:02 runner 6 connected 2025/09/09 07:40:12 runner 7 connected 2025/09/09 07:40:17 runner 1 connected 2025/09/09 07:40:30 STAT { "buffer too small": 0, "candidate triage jobs": 30, "candidates": 38514, "comps overflows": 0, "corpus": 40355, "corpus [files]": 324, "corpus [symbols]": 181, "cover overflows": 26116, "coverage": 301250, "distributor delayed": 39356, "distributor undelayed": 39356, "distributor violated": 171, "exec candidate": 40901, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 13, "exec seeds": 0, "exec smash": 0, "exec total [base]": 89819, "exec total [new]": 209308, "exec triage": 126172, "executor restarts [base]": 253, "executor restarts [new]": 582, "fault jobs": 0, "fuzzer jobs": 30, "fuzzing VMs 
[base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 303596, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 40901, "no exec duration": 55859000000, "no exec requests": 380, "pending": 3, "prog exec time": 287, "reproducing": 0, "rpc recv": 8753496784, "rpc sent": 1184662256, "signal": 296226, "smash jobs": 0, "triage jobs": 0, "vm output": 25402527, "vm restarts [base]": 24, "vm restarts [new]": 62 } 2025/09/09 07:40:31 runner 2 connected 2025/09/09 07:40:39 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 07:40:47 base crash: possible deadlock in ocfs2_reserve_suballoc_bits 2025/09/09 07:41:33 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 07:41:35 runner 8 connected 2025/09/09 07:41:44 runner 1 connected 2025/09/09 07:42:29 runner 3 connected 2025/09/09 07:42:52 base crash "possible deadlock in run_unpack_ex" is already known 2025/09/09 07:42:52 patched crashed: possible deadlock in run_unpack_ex [need repro = false] 2025/09/09 07:43:42 runner 1 connected 2025/09/09 07:43:58 base crash "possible deadlock in ocfs2_xattr_set" is already known 2025/09/09 07:43:58 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/09/09 07:44:10 base crash "possible deadlock in ocfs2_xattr_set" is already known 2025/09/09 07:44:10 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/09/09 07:44:16 patched crashed: possible deadlock in ocfs2_fiemap [need repro = true] 2025/09/09 07:44:16 scheduled a reproduction of 'possible deadlock in ocfs2_fiemap' 2025/09/09 07:44:28 base crash "possible deadlock in ocfs2_init_acl" is already known 2025/09/09 07:44:28 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 07:44:39 base crash "possible 
deadlock in ocfs2_init_acl" is already known 2025/09/09 07:44:39 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 07:44:55 runner 9 connected 2025/09/09 07:44:59 runner 6 connected 2025/09/09 07:45:05 runner 5 connected 2025/09/09 07:45:15 base crash: possible deadlock in ocfs2_fiemap 2025/09/09 07:45:16 runner 1 connected 2025/09/09 07:45:27 runner 3 connected 2025/09/09 07:45:30 STAT { "buffer too small": 0, "candidate triage jobs": 25, "candidates": 36575, "comps overflows": 0, "corpus": 42220, "corpus [files]": 327, "corpus [symbols]": 183, "cover overflows": 29638, "coverage": 305090, "distributor delayed": 41194, "distributor undelayed": 41194, "distributor violated": 180, "exec candidate": 42840, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100893, "exec total [new]": 233344, "exec triage": 132300, "executor restarts [base]": 278, "executor restarts [new]": 661, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 307675, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 42840, "no exec duration": 55912000000, "no exec requests": 383, "pending": 4, "prog exec time": 241, "reproducing": 0, "rpc recv": 9529860556, "rpc sent": 1357889104, "signal": 300039, "smash jobs": 0, "triage jobs": 0, "vm output": 29713938, "vm restarts [base]": 26, "vm restarts [new]": 70 } 2025/09/09 07:46:11 runner 3 connected 2025/09/09 07:47:16 base crash "possible deadlock in ocfs2_xattr_set" is already known 2025/09/09 07:47:16 patched crashed: possible deadlock in ocfs2_xattr_set [need repro = false] 2025/09/09 07:47:36 patched crashed: INFO: task hung in 
sync_bdevs [need repro = true] 2025/09/09 07:47:36 scheduled a reproduction of 'INFO: task hung in sync_bdevs' 2025/09/09 07:48:13 runner 9 connected 2025/09/09 07:48:14 base crash: INFO: task hung in sync_bdevs 2025/09/09 07:48:16 base crash: possible deadlock in ocfs2_xattr_set 2025/09/09 07:48:25 runner 0 connected 2025/09/09 07:48:50 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 07:49:03 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 07:49:04 runner 0 connected 2025/09/09 07:49:12 runner 2 connected 2025/09/09 07:49:16 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 07:49:38 runner 5 connected 2025/09/09 07:49:54 runner 1 connected 2025/09/09 07:50:05 runner 8 connected 2025/09/09 07:50:30 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 35215, "comps overflows": 0, "corpus": 43473, "corpus [files]": 328, "corpus [symbols]": 183, "cover overflows": 34817, "coverage": 307659, "distributor delayed": 42387, "distributor undelayed": 42387, "distributor violated": 180, "exec candidate": 44200, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 112388, "exec total [new]": 265524, "exec triage": 136798, "executor restarts [base]": 301, "executor restarts [new]": 706, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 310435, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44180, "no exec duration": 56025000000, "no exec requests": 388, "pending": 5, "prog exec time": 248, "reproducing": 0, "rpc recv": 
10211924416, "rpc sent": 1528548752, "signal": 302599, "smash jobs": 0, "triage jobs": 0, "vm output": 33089146, "vm restarts [base]": 29, "vm restarts [new]": 75 } 2025/09/09 07:50:47 base crash "possible deadlock in run_unpack_ex" is already known 2025/09/09 07:50:47 patched crashed: possible deadlock in run_unpack_ex [need repro = false] 2025/09/09 07:50:49 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:51:18 base crash: lost connection to test machine 2025/09/09 07:51:28 base crash: possible deadlock in attr_data_get_block 2025/09/09 07:51:35 runner 7 connected 2025/09/09 07:51:45 runner 4 connected 2025/09/09 07:52:15 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45057: connect: connection refused 2025/09/09 07:52:15 VM-7 failed reading regs: qemu hmp command 'info registers': dial tcp 127.0.0.1:45057: connect: connection refused 2025/09/09 07:52:16 runner 0 connected 2025/09/09 07:52:24 runner 3 connected 2025/09/09 07:52:25 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:53:02 patched crashed: possible deadlock in ntfs_look_for_free_space [need repro = true] 2025/09/09 07:53:02 scheduled a reproduction of 'possible deadlock in ntfs_look_for_free_space' 2025/09/09 07:53:22 runner 7 connected 2025/09/09 07:53:46 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/09 07:53:51 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/09 07:53:51 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:53:58 runner 3 connected 2025/09/09 07:54:07 base crash: kernel BUG in txUnlock 2025/09/09 07:54:12 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:54:32 base crash "WARNING in xfrm6_tunnel_net_exit" is already known 2025/09/09 07:54:32 patched crashed: WARNING in xfrm6_tunnel_net_exit [need repro = false] 2025/09/09 07:54:32 base crash "possible deadlock in 
ocfs2_init_acl" is already known 2025/09/09 07:54:32 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 07:54:35 runner 0 connected 2025/09/09 07:54:41 runner 5 connected 2025/09/09 07:54:42 runner 2 connected 2025/09/09 07:54:52 base crash: kernel BUG in txUnlock 2025/09/09 07:55:03 runner 3 connected 2025/09/09 07:55:03 runner 1 connected 2025/09/09 07:55:27 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 07:55:29 runner 8 connected 2025/09/09 07:55:29 runner 6 connected 2025/09/09 07:55:30 STAT { "buffer too small": 0, "candidate triage jobs": 3, "candidates": 34301, "comps overflows": 0, "corpus": 44282, "corpus [files]": 329, "corpus [symbols]": 183, "cover overflows": 38748, "coverage": 309249, "distributor delayed": 43265, "distributor undelayed": 43264, "distributor violated": 180, "exec candidate": 45114, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 123697, "exec total [new]": 290512, "exec triage": 139623, "executor restarts [base]": 319, "executor restarts [new]": 775, "fault jobs": 0, "fuzzer jobs": 3, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 312207, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45047, "no exec duration": 56025000000, "no exec requests": 388, "pending": 6, "prog exec time": 283, "reproducing": 0, "rpc recv": 10883878964, "rpc sent": 1686480312, "signal": 304198, "smash jobs": 0, "triage jobs": 0, "vm output": 36164252, "vm restarts [base]": 32, "vm restarts [new]": 85 } 2025/09/09 07:55:49 runner 2 connected 2025/09/09 07:56:24 runner 9 connected 2025/09/09 07:57:07 patched crashed: WARNING in xfrm_state_fini 
[need repro = false] 2025/09/09 07:58:03 runner 1 connected 2025/09/09 07:58:52 base crash: kernel BUG in jfs_evict_inode 2025/09/09 07:59:49 runner 0 connected 2025/09/09 08:00:30 STAT { "buffer too small": 0, "candidate triage jobs": 11, "candidates": 11275, "comps overflows": 0, "corpus": 45073, "corpus [files]": 330, "corpus [symbols]": 183, "cover overflows": 43449, "coverage": 311406, "distributor delayed": 43932, "distributor undelayed": 43932, "distributor violated": 180, "exec candidate": 68140, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 24, "exec seeds": 0, "exec smash": 0, "exec total [base]": 135722, "exec total [new]": 319996, "exec triage": 142256, "executor restarts [base]": 343, "executor restarts [new]": 844, "fault jobs": 0, "fuzzer jobs": 11, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 10, "hints jobs": 0, "max signal": 314443, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45889, "no exec duration": 56084000000, "no exec requests": 391, "pending": 6, "prog exec time": 236, "reproducing": 0, "rpc recv": 11423398428, "rpc sent": 1834991928, "signal": 306356, "smash jobs": 0, "triage jobs": 0, "vm output": 39691669, "vm restarts [base]": 34, "vm restarts [new]": 87 } 2025/09/09 08:01:06 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 08:01:30 triaged 93.9% of the corpus 2025/09/09 08:01:30 starting bug reproductions 2025/09/09 08:01:30 starting bug reproductions (max 10 VMs, 7 repros) 2025/09/09 08:01:30 reproduction of "KASAN: slab-use-after-free Write in __xfrm_state_delete" aborted: it's no longer needed 2025/09/09 08:01:30 reproduction of "possible deadlock in attr_data_get_block" aborted: it's no longer needed 2025/09/09 
08:01:30 reproduction of "possible deadlock in ocfs2_fiemap" aborted: it's no longer needed 2025/09/09 08:01:30 reproduction of "INFO: task hung in sync_bdevs" aborted: it's no longer needed 2025/09/09 08:01:30 start reproducing 'INFO: task hung in corrupted' 2025/09/09 08:01:30 start reproducing 'possible deadlock in ntfs_look_for_free_space' 2025/09/09 08:02:20 runner 3 connected 2025/09/09 08:02:20 runner 1 connected 2025/09/09 08:02:20 runner 4 connected 2025/09/09 08:02:20 runner 2 connected 2025/09/09 08:02:22 runner 0 connected 2025/09/09 08:02:32 base crash: WARNING in xfrm_state_fini 2025/09/09 08:03:21 runner 1 connected 2025/09/09 08:04:11 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 08:04:11 base crash: lost connection to test machine 2025/09/09 08:04:17 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:04:27 base crash: lost connection to test machine 2025/09/09 08:04:33 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:04:39 base crash: WARNING in xfrm_state_fini 2025/09/09 08:04:59 runner 0 connected 2025/09/09 08:05:00 runner 1 connected 2025/09/09 08:05:07 runner 8 connected 2025/09/09 08:05:15 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 08:05:17 runner 3 connected 2025/09/09 08:05:24 runner 1 connected 2025/09/09 08:05:30 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 7, "corpus": 45177, "corpus [files]": 330, "corpus [symbols]": 183, "cover overflows": 46164, "coverage": 311769, "distributor delayed": 44149, "distributor undelayed": 44147, "distributor violated": 185, "exec candidate": 79415, "exec collide": 556, "exec fuzz": 1035, "exec gen": 66, "exec hints": 214, "exec inject": 0, "exec minimize": 357, "exec retries": 24, "exec seeds": 90, "exec smash": 576, "exec total [base]": 145743, "exec total [new]": 334707, "exec triage": 142790, "executor restarts [base]": 363, 
"executor restarts [new]": 892, "fault jobs": 0, "fuzzer jobs": 38, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 9, "max signal": 314932, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 277, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46046, "no exec duration": 63335000000, "no exec requests": 401, "pending": 0, "prog exec time": 545, "reproducing": 2, "rpc recv": 12000467160, "rpc sent": 1959008768, "signal": 306713, "smash jobs": 15, "triage jobs": 14, "vm output": 42862690, "vm restarts [base]": 37, "vm restarts [new]": 95 } 2025/09/09 08:05:35 runner 2 connected 2025/09/09 08:05:37 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 08:05:44 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:06:12 runner 2 connected 2025/09/09 08:06:23 base crash: lost connection to test machine 2025/09/09 08:06:27 runner 4 connected 2025/09/09 08:06:34 runner 0 connected 2025/09/09 08:06:41 patched crashed: possible deadlock in ocfs2_truncate_file [need repro = true] 2025/09/09 08:06:41 scheduled a reproduction of 'possible deadlock in ocfs2_truncate_file' 2025/09/09 08:06:41 start reproducing 'possible deadlock in ocfs2_truncate_file' 2025/09/09 08:06:41 failed to recv *flatrpc.InfoRequestRawT: unexpected EOF 2025/09/09 08:07:19 runner 2 connected 2025/09/09 08:07:34 base crash: lost connection to test machine 2025/09/09 08:07:37 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:07:37 runner 1 connected 2025/09/09 08:07:54 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/09 08:08:11 patched crashed: lost 
connection to test machine [need repro = false] 2025/09/09 08:08:18 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:08:29 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:08:30 runner 0 connected 2025/09/09 08:08:50 runner 2 connected 2025/09/09 08:08:56 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false] 2025/09/09 08:09:00 runner 1 connected 2025/09/09 08:09:03 base crash "possible deadlock in ocfs2_init_acl" is already known 2025/09/09 08:09:03 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 08:09:07 base crash: lost connection to test machine 2025/09/09 08:09:14 runner 2 connected 2025/09/09 08:09:28 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:09:35 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:09:45 runner 9 connected 2025/09/09 08:09:51 runner 4 connected 2025/09/09 08:09:55 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:09:57 runner 3 connected 2025/09/09 08:10:25 runner 8 connected 2025/09/09 08:10:30 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 34, "corpus": 45225, "corpus [files]": 333, "corpus [symbols]": 186, "cover overflows": 47283, "coverage": 311877, 
"distributor delayed": 44277, "distributor undelayed": 44277, "distributor violated": 186, "exec candidate": 79415, "exec collide": 1132, "exec fuzz": 2202, "exec gen": 132, "exec hints": 852, "exec inject": 0, "exec minimize": 1267, "exec retries": 27, "exec seeds": 232, "exec smash": 1611, "exec total [base]": 151184, "exec total [new]": 339437, "exec triage": 142982, "executor restarts [base]": 400, "executor restarts [new]": 952, "fault jobs": 0, "fuzzer jobs": 46, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 17, "max signal": 315079, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 836, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46121, "no exec duration": 69633000000, "no exec requests": 409, "pending": 0, "prog exec time": 836, "reproducing": 3, "rpc recv": 12689825344, "rpc sent": 2087662456, "signal": 306812, "smash jobs": 18, "triage jobs": 11, "vm output": 46642728, "vm restarts [base]": 42, "vm restarts [new]": 104 } 2025/09/09 08:10:33 base crash: WARNING in xfrm_state_fini 2025/09/09 08:10:40 base crash: lost connection to test machine 2025/09/09 08:10:47 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:10:59 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:11:22 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:11:22 runner 1 connected 2025/09/09 08:11:28 runner 0 connected 2025/09/09 
08:11:40 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:11:48 runner 3 connected 2025/09/09 08:12:00 base crash "possible deadlock in mark_as_free_ex" is already known 2025/09/09 08:12:00 patched crashed: possible deadlock in mark_as_free_ex [need repro = false] 2025/09/09 08:12:05 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:12:23 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:12:30 runner 2 connected 2025/09/09 08:12:38 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:12:48 runner 8 connected 2025/09/09 08:13:09 base crash: lost connection to test machine 2025/09/09 08:13:13 runner 3 connected 2025/09/09 08:13:14 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:13:21 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:13:30 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:13:33 base crash: lost connection to test machine 2025/09/09 08:13:54 repro finished 'possible deadlock in ntfs_look_for_free_space', repro=false crepro=false desc='' hub=false from_dashboard=false 2025/09/09 08:13:54 failed repro for "possible deadlock in ntfs_look_for_free_space", err=%!s() 2025/09/09 08:13:54 "possible deadlock in 
ntfs_look_for_free_space": saved crash log into 1757405634.crash.log 2025/09/09 08:13:54 "possible deadlock in ntfs_look_for_free_space": saved repro log into 1757405634.repro.log 2025/09/09 08:13:57 runner 2 connected 2025/09/09 08:14:01 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:14:04 runner 1 connected 2025/09/09 08:14:10 runner 8 connected 2025/09/09 08:14:22 runner 3 connected 2025/09/09 08:14:36 base crash: lost connection to test machine 2025/09/09 08:14:46 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/09/09 08:14:50 runner 0 connected 2025/09/09 08:14:51 base crash: lost connection to test machine 2025/09/09 08:14:52 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:15:21 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:15:25 runner 0 connected 2025/09/09 08:15:26 patched crashed: possible deadlock in attr_data_get_block [need repro = false] 2025/09/09 08:15:30 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 54, "corpus": 45255, "corpus [files]": 334, "corpus [symbols]": 187, "cover overflows": 49026, "coverage": 311920, "distributor delayed": 44405, "distributor undelayed": 44405, "distributor violated": 186, "exec candidate": 79415, "exec collide": 2324, 
"exec fuzz": 4483, "exec gen": 272, "exec hints": 2080, "exec inject": 0, "exec minimize": 2036, "exec retries": 27, "exec seeds": 324, "exec smash": 2328, "exec total [base]": 154916, "exec total [new]": 346053, "exec triage": 143174, "executor restarts [base]": 461, "executor restarts [new]": 1074, "fault jobs": 0, "fuzzer jobs": 31, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 10, "max signal": 315283, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1443, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46193, "no exec duration": 72924000000, "no exec requests": 416, "pending": 0, "prog exec time": 578, "reproducing": 2, "rpc recv": 13284295348, "rpc sent": 2223510752, "signal": 306854, "smash jobs": 7, "triage jobs": 14, "vm output": 50820126, "vm restarts [base]": 47, "vm restarts [new]": 111 } 2025/09/09 08:15:35 runner 4 connected 2025/09/09 08:15:48 runner 1 connected 2025/09/09 08:16:01 base crash: lost connection to test machine 2025/09/09 08:16:06 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:16:07 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:16:12 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:16:14 runner 1 connected 2025/09/09 08:16:38 base crash: lost connection to test machine 2025/09/09 08:16:42 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:16:49 
runner 0 connected 2025/09/09 08:16:56 runner 3 connected 2025/09/09 08:17:03 runner 9 connected 2025/09/09 08:17:27 runner 2 connected 2025/09/09 08:17:33 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:17:59 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:18:05 base crash: lost connection to test machine 2025/09/09 08:18:06 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:18:09 base crash: lost connection to test machine 2025/09/09 08:18:19 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:18:48 runner 1 connected 2025/09/09 08:18:54 base crash: lost connection to test machine 2025/09/09 08:18:55 runner 0 connected 2025/09/09 08:18:58 runner 3 connected 2025/09/09 08:19:08 runner 2 connected 2025/09/09 08:19:43 runner 2 connected 2025/09/09 08:19:50 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:19:53 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:20:14 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:20:27 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec 
scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:20:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 168, "corpus": 45316, "corpus [files]": 334, "corpus [symbols]": 187, "cover overflows": 52063, "coverage": 312294, "distributor delayed": 44569, "distributor undelayed": 44569, "distributor violated": 186, "exec candidate": 79415, "exec collide": 3178, "exec fuzz": 6023, "exec gen": 369, "exec hints": 3340, "exec inject": 0, "exec minimize": 3571, "exec retries": 28, "exec seeds": 499, "exec smash": 3351, "exec total [base]": 158034, "exec total [new]": 352892, "exec triage": 143525, "executor restarts [base]": 509, "executor restarts [new]": 1147, "fault jobs": 0, "fuzzer jobs": 67, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 5, "hints jobs": 31, "max signal": 316154, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2436, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46317, "no exec duration": 73170000000, "no exec requests": 419, "pending": 0, "prog exec time": 534, "reproducing": 2, "rpc recv": 13977812648, "rpc sent": 2459539560, "signal": 307218, "smash jobs": 28, "triage jobs": 8, "vm output": 59047991, "vm restarts [base]": 53, "vm restarts [new]": 117 } 2025/09/09 08:20:49 runner 9 connected 2025/09/09 08:20:57 base crash: lost connection to test machine 2025/09/09 08:20:59 base crash: WARNING in xfrm_state_fini 2025/09/09 08:21:10 runner 8 connected 2025/09/09 08:21:16 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:21:17 base crash: lost connection to test machine 2025/09/09 08:21:18 patched crashed: lost 
connection to test machine [need repro = false] 2025/09/09 08:21:30 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:21:48 runner 0 connected 2025/09/09 08:21:52 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:21:53 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:21:55 runner 3 connected 2025/09/09 08:21:55 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:22:08 runner 2 connected 2025/09/09 08:22:09 runner 2 connected 2025/09/09 08:22:18 runner 1 connected 2025/09/09 08:22:34 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 08:22:40 runner 9 connected 2025/09/09 08:22:41 runner 4 connected 2025/09/09 08:23:27 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:23:30 runner 3 connected 2025/09/09 08:23:37 base crash: lost connection to test machine 2025/09/09 08:23:39 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:23:58 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:24:14 reproducing crash 'possible deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:24:26 runner 0 connected 2025/09/09 08:24:35 runner 9 connected 2025/09/09 08:24:47 runner 0 connected 2025/09/09 08:25:14 reproducing crash 'possible 
deadlock in ocfs2_truncate_file': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ocfs2/refcounttree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:25:14 repro finished 'possible deadlock in ocfs2_truncate_file', repro=true crepro=false desc='possible deadlock in ocfs2_try_remove_refcount_tree' hub=false from_dashboard=false 2025/09/09 08:25:14 found repro for "possible deadlock in ocfs2_try_remove_refcount_tree" (orig title: "possible deadlock in ocfs2_truncate_file", reliability: 1), took 18.52 minutes 2025/09/09 08:25:14 "possible deadlock in ocfs2_try_remove_refcount_tree": saved crash log into 1757406314.crash.log 2025/09/09 08:25:14 "possible deadlock in ocfs2_try_remove_refcount_tree": saved repro log into 1757406314.repro.log 2025/09/09 08:25:15 runner 5 connected 2025/09/09 08:25:22 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:25:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 287, "corpus": 45358, "corpus [files]": 335, "corpus [symbols]": 188, "cover overflows": 54881, "coverage": 312442, "distributor delayed": 44696, "distributor undelayed": 44695, "distributor violated": 186, "exec candidate": 79415, "exec collide": 3889, "exec fuzz": 7367, "exec gen": 433, "exec hints": 4342, "exec inject": 0, "exec minimize": 4595, "exec retries": 28, "exec seeds": 622, "exec smash": 4341, "exec total [base]": 161539, "exec total [new]": 358399, "exec triage": 143764, "executor restarts [base]": 549, "executor restarts [new]": 1216, "fault jobs": 0, "fuzzer jobs": 77, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 7, "hints jobs": 32, "max signal": 316642, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3009, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules 
[new]": 1, "new inputs": 46405, "no exec duration": 75170000000, "no exec requests": 421, "pending": 0, "prog exec time": 838, "reproducing": 1, "rpc recv": 14754321988, "rpc sent": 2760123928, "signal": 307336, "smash jobs": 28, "triage jobs": 17, "vm output": 66251049, "vm restarts [base]": 57, "vm restarts [new]": 127 } 2025/09/09 08:25:50 base crash: lost connection to test machine 2025/09/09 08:25:59 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:26:18 runner 9 connected 2025/09/09 08:26:38 runner 2 connected 2025/09/09 08:26:38 attempt #0 to run "possible deadlock in ocfs2_try_remove_refcount_tree" on base: crashed with possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/09 08:26:38 crashes both: possible deadlock in ocfs2_try_remove_refcount_tree / possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/09 08:26:49 runner 4 connected 2025/09/09 08:27:22 base crash: lost connection to test machine 2025/09/09 08:27:23 patched crashed: WARNING in bch2_trans_put [need repro = true] 2025/09/09 08:27:23 scheduled a reproduction of 'WARNING in bch2_trans_put' 2025/09/09 08:27:23 start reproducing 'WARNING in bch2_trans_put' 2025/09/09 08:27:27 runner 0 connected 2025/09/09 08:27:45 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:27:46 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:27:55 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false] 2025/09/09 08:28:11 runner 3 connected 2025/09/09 08:28:13 runner 1 connected 2025/09/09 08:28:35 runner 4 connected 2025/09/09 08:28:36 runner 3 connected 2025/09/09 08:28:43 runner 2 connected 2025/09/09 08:29:32 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:29:32 patched crashed: possible deadlock in attr_data_get_block [need repro = false] 2025/09/09 08:29:57 patched crashed: lost connection to test machine [need repro = false] 
2025/09/09 08:30:20 runner 2 connected
2025/09/09 08:30:21 runner 5 connected
2025/09/09 08:30:25 base crash: lost connection to test machine
2025/09/09 08:30:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 393, "corpus": 45408, "corpus [files]": 336, "corpus [symbols]": 189, "cover overflows": 57605, "coverage": 312561, "distributor delayed": 44811, "distributor undelayed": 44811, "distributor violated": 186, "exec candidate": 79415, "exec collide": 4654, "exec fuzz": 8896, "exec gen": 520, "exec hints": 5535, "exec inject": 0, "exec minimize": 5778, "exec retries": 29, "exec seeds": 760, "exec smash": 5398, "exec total [base]": 165026, "exec total [new]": 364579, "exec triage": 143996, "executor restarts [base]": 580, "executor restarts [new]": 1293, "fault jobs": 0, "fuzzer jobs": 64, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 29, "max signal": 316825, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3674, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46488, "no exec duration": 87229000000, "no exec requests": 434, "pending": 0, "prog exec time": 575, "reproducing": 2, "rpc recv": 15406599920, "rpc sent": 3046265512, "signal": 307454, "smash jobs": 25, "triage jobs": 10, "vm output": 74829799, "vm restarts [base]": 60, "vm restarts [new]": 135 }
2025/09/09 08:30:46 runner 3 connected
2025/09/09 08:30:59 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:31:12 base crash: lost connection to test machine
2025/09/09 08:31:15 runner 1 connected
2025/09/09 08:31:28 base crash: lost connection to test machine
2025/09/09 08:31:49 runner 2 connected
2025/09/09 08:32:01 runner 3 connected
2025/09/09 08:32:03 patched crashed: possible deadlock in kernfs_iop_getattr [need repro = true]
2025/09/09 08:32:03 scheduled a reproduction of 'possible deadlock in kernfs_iop_getattr'
2025/09/09 08:32:03 start reproducing 'possible deadlock in kernfs_iop_getattr'
2025/09/09 08:32:15 base crash: lost connection to test machine
2025/09/09 08:32:17 runner 0 connected
2025/09/09 08:32:36 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:32:52 runner 3 connected
2025/09/09 08:32:53 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 08:33:02 base crash: lost connection to test machine
2025/09/09 08:33:03 runner 1 connected
2025/09/09 08:33:20 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:33:30 base crash: lost connection to test machine
2025/09/09 08:33:32 base crash: lost connection to test machine
2025/09/09 08:33:41 runner 8 connected
2025/09/09 08:33:50 runner 0 connected
2025/09/09 08:33:52 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:33:55 base crash: possible deadlock in kernfs_iop_getattr
2025/09/09 08:34:19 runner 2 connected
2025/09/09 08:34:20 runner 3 connected
2025/09/09 08:34:46 runner 1 connected
2025/09/09 08:34:49 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:35:01 base crash: lost connection to test machine
2025/09/09 08:35:13 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:35:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 443, "corpus": 45455, "corpus [files]": 340, "corpus [symbols]": 193, "cover overflows": 59781, "coverage": 312641, "distributor delayed": 44953, "distributor undelayed": 44953, "distributor violated": 186, "exec candidate": 79415, "exec collide": 5522, "exec fuzz": 10429, "exec gen": 609, "exec hints": 6749, "exec inject": 0, "exec minimize": 7010, "exec retries": 29, "exec seeds": 862, "exec smash": 6577, "exec total [base]": 166671, "exec total [new]": 371064, "exec triage": 144255, "executor restarts [base]": 627, "executor restarts [new]": 1344, "fault jobs": 0, "fuzzer jobs": 55, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 25, "max signal": 317141, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4398, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46582, "no exec duration": 90012000000, "no exec requests": 437, "pending": 0, "prog exec time": 527, "reproducing": 3, "rpc recv": 15936368028, "rpc sent": 3221906128, "signal": 307531, "smash jobs": 22, "triage jobs": 8, "vm output": 82571602, "vm restarts [base]": 68, "vm restarts [new]": 139 }
2025/09/09 08:35:32 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:35:34 base crash: lost connection to test machine
2025/09/09 08:35:48 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:35:50 runner 2 connected
2025/09/09 08:35:56 patched crashed: unregister_netdevice: waiting for DEV to become free [need repro = false]
2025/09/09 08:36:00 runner 2 connected
2025/09/09 08:36:18 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:36:21 runner 9 connected
2025/09/09 08:36:23 runner 0 connected
2025/09/09 08:36:44 runner 4 connected
2025/09/09 08:36:59 base crash: WARNING in xfrm_state_fini
2025/09/09 08:37:05 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:37:06 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:37:09 base crash: lost connection to test machine
2025/09/09 08:37:12 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:37:21 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:37:30 base crash: lost connection to test machine
2025/09/09 08:37:34 base crash: lost connection to test machine
2025/09/09 08:37:36 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:37:36 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 08:37:38 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:37:47 runner 1 connected
2025/09/09 08:37:54 runner 3 connected
2025/09/09 08:37:57 runner 0 connected
2025/09/09 08:38:01 runner 8 connected
2025/09/09 08:38:11 runner 4 connected
2025/09/09 08:38:18 runner 2 connected
2025/09/09 08:38:19 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:38:23 runner 3 connected
2025/09/09 08:38:23 runner 5 connected
2025/09/09 08:38:28 runner 2 connected
2025/09/09 08:38:52 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:38:57 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:39:11 base crash: lost connection to test machine
2025/09/09 08:39:11 base crash: lost connection to test machine
2025/09/09 08:39:31 repro finished 'WARNING in bch2_trans_put', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/09/09 08:39:31 failed repro for "WARNING in bch2_trans_put", err=%!s()
2025/09/09 08:39:31 "WARNING in bch2_trans_put": saved crash log into 1757407171.crash.log
2025/09/09 08:39:31 "WARNING in bch2_trans_put": saved repro log into 1757407171.repro.log
2025/09/09 08:39:36 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:39:40 runner 0 connected
2025/09/09 08:39:46 runner 9 connected
2025/09/09 08:40:00 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:40:01 runner 3 connected
2025/09/09 08:40:01 runner 2 connected
2025/09/09 08:40:22 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:40:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 455, "corpus": 45480, "corpus [files]": 341, "corpus [symbols]": 194, "cover overflows": 61436, "coverage": 312745, "distributor delayed": 45012, "distributor undelayed": 45012, "distributor violated": 186, "exec candidate": 79415, "exec collide": 6579, "exec fuzz": 12406, "exec gen": 717, "exec hints": 8672, "exec inject": 0, "exec minimize": 7684, "exec retries": 29, "exec seeds": 911, "exec smash": 7276, "exec total [base]": 169478, "exec total [new]": 377666, "exec triage": 144371, "executor restarts [base]": 663, "executor restarts [new]": 1411, "fault jobs": 0, "fuzzer jobs": 25, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 12, "max signal": 317290, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4855, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46628, "no exec duration": 93484000000, "no exec requests": 444, "pending": 0, "prog exec time": 418, "reproducing": 2, "rpc recv": 16677325276, "rpc sent": 3400040752, "signal": 307632, "smash jobs": 8, "triage jobs": 5, "vm output": 86655719, "vm restarts [base]": 76, "vm restarts [new]": 149 }
2025/09/09 08:40:35 base crash: lost connection to test machine
2025/09/09 08:40:37 base crash: lost connection to test machine
2025/09/09 08:40:45 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:40:50 runner 8 connected
2025/09/09 08:40:58 base crash "possible deadlock in ocfs2_init_acl" is already known
2025/09/09 08:40:58 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 08:41:11 runner 3 connected
2025/09/09 08:41:13 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:41:24 runner 2 connected
2025/09/09 08:41:27 runner 0 connected
2025/09/09 08:41:32 base crash "kernel BUG in may_open" is already known
2025/09/09 08:41:32 patched crashed: kernel BUG in may_open [need repro = false]
2025/09/09 08:41:37 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:41:48 runner 0 connected
2025/09/09 08:41:50 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:42:04 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:42:21 runner 3 connected
2025/09/09 08:42:25 base crash: lost connection to test machine
2025/09/09 08:42:26 runner 2 connected
2025/09/09 08:42:29 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:42:32 base crash: lost connection to test machine
2025/09/09 08:42:40 runner 4 connected
2025/09/09 08:42:44 patched crashed: KASAN: slab-out-of-bounds Read in dtSplitPage [need repro = true]
2025/09/09 08:42:44 scheduled a reproduction of 'KASAN: slab-out-of-bounds Read in dtSplitPage'
2025/09/09 08:42:44 start reproducing 'KASAN: slab-out-of-bounds Read in dtSplitPage'
2025/09/09 08:42:50 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:42:51 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:43:13 runner 3 connected
2025/09/09 08:43:22 runner 0 connected
2025/09/09 08:43:25 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:43:34 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:43:39 runner 3 connected
2025/09/09 08:43:41 runner 8 connected
2025/09/09 08:43:47 base crash: KASAN: slab-out-of-bounds Read in dtSplitPage
2025/09/09 08:43:51 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:43:56 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:43:58 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:44:15 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:44:24 runner 4 connected
2025/09/09 08:44:36 runner 1 connected
2025/09/09 08:44:41 base crash: lost connection to test machine
2025/09/09 08:44:44 base crash "INFO: task hung in __closure_sync" is already known
2025/09/09 08:44:44 patched crashed: INFO: task hung in __closure_sync [need repro = false]
2025/09/09 08:44:45 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:44:45 runner 2 connected
2025/09/09 08:45:05 runner 8 connected
2025/09/09 08:45:08 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:45:13 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:45:16 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:45:18 base crash: lost connection to test machine
2025/09/09 08:45:29 runner 0 connected
2025/09/09 08:45:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 473, "corpus": 45500, "corpus [files]": 341, "corpus [symbols]": 194, "cover overflows": 63103, "coverage": 312781, "distributor delayed": 45111, "distributor undelayed": 45110, "distributor violated": 186, "exec candidate": 79415, "exec collide": 7870, "exec fuzz": 14737, "exec gen": 848, "exec hints": 10634, "exec inject": 0, "exec minimize": 8133, "exec retries": 29, "exec seeds": 970, "exec smash": 7809, "exec total [base]": 174154, "exec total [new]": 384560, "exec triage": 144511, "executor restarts [base]": 704, "executor restarts [new]": 1486, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 7, "max signal": 317415, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5171, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46680, "no exec duration": 94810000000, "no exec requests": 450, "pending": 0, "prog exec time": 440, "reproducing": 3, "rpc recv": 17418716196, "rpc sent": 3560528544, "signal": 307667, "smash jobs": 2, "triage jobs": 11, "vm output": 89083653, "vm restarts [base]": 82, "vm restarts [new]": 160 }
2025/09/09 08:45:31 runner 9 connected
2025/09/09 08:45:35 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/09/09 08:45:35 failed repro for "INFO: task hung in corrupted", err=%!s()
2025/09/09 08:45:35 "INFO: task hung in corrupted": saved crash log into 1757407535.crash.log
2025/09/09 08:45:35 "INFO: task hung in corrupted": saved repro log into 1757407535.repro.log
2025/09/09 08:45:50 base crash: lost connection to test machine
2025/09/09 08:46:04 runner 3 connected
2025/09/09 08:46:06 runner 0 connected
2025/09/09 08:46:06 runner 1 connected
2025/09/09 08:46:36 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:46:40 runner 2 connected
2025/09/09 08:47:16 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:47:26 patched crashed: kernel BUG in txUnlock [need repro = false]
2025/09/09 08:47:42 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:47:52 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:48:05 runner 8 connected
2025/09/09 08:48:10 base crash: lost connection to test machine
2025/09/09 08:48:10 base crash "kernel BUG in may_open" is already known
2025/09/09 08:48:10 patched crashed: kernel BUG in may_open [need repro = false]
2025/09/09 08:48:17 runner 3 connected
2025/09/09 08:48:31 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:48:37 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:48:41 runner 0 connected
2025/09/09 08:48:45 base crash: lost connection to test machine
2025/09/09 08:48:59 runner 2 connected
2025/09/09 08:49:00 runner 2 connected
2025/09/09 08:49:06 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:49:10 base crash: lost connection to test machine
2025/09/09 08:49:12 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:49:23 base crash: lost connection to test machine
2025/09/09 08:49:26 runner 8 connected
2025/09/09 08:49:29 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 08:49:33 runner 3 connected
2025/09/09 08:49:48 base crash: lost connection to test machine
2025/09/09 08:49:55 runner 5 connected
2025/09/09 08:49:55 patched crashed: general protection fault in device_move [need repro = true]
2025/09/09 08:49:55 scheduled a reproduction of 'general protection fault in device_move'
2025/09/09 08:49:55 start reproducing 'general protection fault in device_move'
2025/09/09 08:49:59 runner 0 connected
2025/09/09 08:50:01 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:50:12 runner 1 connected
2025/09/09 08:50:21 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:50:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 492, "corpus": 45536, "corpus [files]": 343, "corpus [symbols]": 196, "cover overflows": 64940, "coverage": 312848, "distributor delayed": 45226, "distributor undelayed": 45220, "distributor violated": 186, "exec candidate": 79415, "exec collide": 9180, "exec fuzz": 17252, "exec gen": 978, "exec hints": 12258, "exec inject": 0, "exec minimize": 8860, "exec retries": 29, "exec seeds": 1061, "exec smash": 8591, "exec total [base]": 178208, "exec total [new]": 391932, "exec triage": 144698, "executor restarts [base]": 769, "executor restarts [new]": 1591, "fault jobs": 0, "fuzzer jobs": 22, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 7, "max signal": 317563, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5665, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46752, "no exec duration": 101371000000, "no exec requests": 459, "pending": 0, "prog exec time": 494, "reproducing": 3, "rpc recv": 18153939288, "rpc sent": 3722232864, "signal": 307728, "smash jobs": 5, "triage jobs": 10, "vm output": 92000792, "vm restarts [base]": 88, "vm restarts [new]": 169 }
2025/09/09 08:50:31 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:50:31 base crash: kernel BUG in may_open
2025/09/09 08:50:36 runner 2 connected
2025/09/09 08:50:44 runner 3 connected
2025/09/09 08:50:50 runner 8 connected
2025/09/09 08:50:53 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:51:10 runner 2 connected
2025/09/09 08:51:11 base crash: lost connection to test machine
2025/09/09 08:51:16 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 08:51:20 runner 9 connected
2025/09/09 08:51:20 runner 0 connected
2025/09/09 08:51:39 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 08:51:43 runner 4 connected
2025/09/09 08:52:01 runner 1 connected
2025/09/09 08:52:04 runner 5 connected
2025/09/09 08:52:06
reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:52:29 runner 8 connected 2025/09/09 08:52:31 base crash: lost connection to test machine 2025/09/09 08:52:37 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:52:40 base crash: lost connection to test machine 2025/09/09 08:52:41 base crash: WARNING in xfrm_state_fini 2025/09/09 08:52:45 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 08:53:19 runner 0 connected 2025/09/09 08:53:29 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:53:30 runner 2 connected 2025/09/09 08:53:31 runner 1 connected 2025/09/09 08:53:35 runner 2 connected 2025/09/09 08:53:35 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:53:48 base crash: lost connection to test machine 2025/09/09 08:54:00 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:54:14 
patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:54:23 base crash: lost connection to test machine 2025/09/09 08:54:23 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:54:33 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:54:37 runner 0 connected 2025/09/09 08:54:43 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:54:55 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:54:57 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 08:55:02 runner 8 connected 2025/09/09 08:55:12 runner 3 connected 2025/09/09 08:55:12 runner 2 connected 2025/09/09 08:55:16 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:55:23 runner 5 connected 2025/09/09 08:55:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 546, "corpus": 45572, "corpus [files]": 344, "corpus [symbols]": 197, "cover overflows": 66693, "coverage": 313193, "distributor delayed": 45323, "distributor undelayed": 45323, "distributor violated": 186, "exec candidate": 79415, "exec collide": 10080, "exec fuzz": 18829, "exec gen": 1074, "exec hints": 13425, "exec inject": 0, "exec minimize": 9525, "exec retries": 30, "exec seeds": 1165, "exec 
smash": 9356, "exec total [base]": 180840, "exec total [new]": 397372, "exec triage": 144852, "executor restarts [base]": 809, "executor restarts [new]": 1646, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 4, "hints jobs": 17, "max signal": 317926, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6081, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46806, "no exec duration": 104802000000, "no exec requests": 465, "pending": 0, "prog exec time": 868, "reproducing": 3, "rpc recv": 18891132900, "rpc sent": 3856525464, "signal": 307934, "smash jobs": 11, "triage jobs": 4, "vm output": 96993527, "vm restarts [base]": 96, "vm restarts [new]": 180 } 2025/09/09 08:55:47 runner 2 connected 2025/09/09 08:55:57 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:56:02 base crash: lost connection to test machine 2025/09/09 08:56:03 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:56:03 base crash: possible deadlock in ocfs2_init_acl 2025/09/09 08:56:12 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:56:46 runner 3 connected 2025/09/09 08:56:52 runner 0 connected 2025/09/09 08:56:52 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:57:00 runner 2 connected 2025/09/09 08:57:17 base crash: lost connection to test machine 2025/09/09 08:57:18 reproducing crash 'KASAN: 
slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:57:23 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:57:28 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 08:57:41 base crash: INFO: task hung in vfs_setxattr 2025/09/09 08:57:42 runner 8 connected 2025/09/09 08:57:42 base crash: possible deadlock in ocfs2_init_acl 2025/09/09 08:57:43 patched crashed: INFO: task hung in vfs_setxattr [need repro = false] 2025/09/09 08:57:50 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:58:05 runner 1 connected 2025/09/09 08:58:18 runner 3 connected 2025/09/09 08:58:29 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 08:58:31 runner 3 connected 2025/09/09 08:58:31 runner 0 connected 2025/09/09 08:58:32 runner 4 connected 2025/09/09 08:58:35 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:58:43 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: 
fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:58:45 base crash: lost connection to test machine 2025/09/09 08:58:51 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:58:55 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 08:59:06 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false] 2025/09/09 08:59:06 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:59:08 base crash: possible deadlock in ocfs2_init_acl 2025/09/09 08:59:15 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 08:59:19 runner 8 connected 2025/09/09 08:59:34 runner 2 connected 2025/09/09 08:59:39 runner 2 connected 2025/09/09 08:59:44 runner 5 connected 2025/09/09 08:59:50 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:59:54 runner 3 connected 2025/09/09 08:59:57 runner 3 connected 2025/09/09 08:59:58 reproducing crash 'possible deadlock in kernfs_iop_getattr': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/kernfs/inode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 08:59:58 repro finished 'possible deadlock in kernfs_iop_getattr', repro=true crepro=false desc='possible deadlock in kernfs_iop_getattr' hub=false from_dashboard=false 2025/09/09 08:59:58 found repro for "possible deadlock in kernfs_iop_getattr" (orig title: "-SAME-", reliability: 1), took 27.88 minutes 
2025/09/09 08:59:58 "possible deadlock in kernfs_iop_getattr": saved crash log into 1757408398.crash.log
2025/09/09 08:59:58 "possible deadlock in kernfs_iop_getattr": saved repro log into 1757408398.repro.log
2025/09/09 09:00:05 runner 4 connected
2025/09/09 09:00:21 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:00:30 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:00:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 590, "corpus": 45599, "corpus [files]": 344, "corpus [symbols]": 197, "cover overflows": 68831, "coverage": 313270, "distributor delayed": 45411, "distributor undelayed": 45410, "distributor violated": 186, "exec candidate": 79415, "exec collide": 11132, "exec fuzz": 20771, "exec gen": 1186, "exec hints": 15027, "exec inject": 0, "exec minimize": 10016, "exec retries": 30, "exec seeds": 1236, "exec smash": 10052, "exec total [base]": 183826, "exec total [new]": 403464, "exec triage": 144973, "executor restarts [base]": 851, "executor restarts [new]": 1701, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 7, "max signal": 318083, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 6363, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46851, "no exec duration": 107037000000, "no exec requests": 472, "pending": 0, "prog exec time": 539, "reproducing": 2, "rpc recv": 19629898220, "rpc sent": 4039297192, "signal": 307985, "smash jobs": 3, "triage jobs": 5, "vm output": 103758844, "vm restarts [base]": 103, "vm restarts [new]": 190 }
2025/09/09 09:00:31 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/09/09 09:00:46 runner 0 connected
2025/09/09 09:00:54 base crash: lost connection to test machine
2025/09/09 09:01:07 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:01:13 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:01:18 repro finished 'general protection fault in device_move', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/09/09 09:01:18 failed repro for "general protection fault in device_move", err=%!s()
2025/09/09 09:01:18 "general protection fault in device_move": saved crash log into 1757408478.crash.log
2025/09/09 09:01:18 "general protection fault in device_move": saved repro log into 1757408478.repro.log
2025/09/09 09:01:18 attempt #0 to run "possible deadlock in kernfs_iop_getattr" on base: crashed with possible deadlock in kernfs_iop_getattr
2025/09/09 09:01:18 crashes both: possible deadlock in kernfs_iop_getattr / possible deadlock in kernfs_iop_getattr
2025/09/09 09:01:18 runner 5 connected
2025/09/09 09:01:19 runner 3 connected
2025/09/09 09:01:40 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:01:43 runner 1 connected
2025/09/09 09:02:02 runner 2 connected
2025/09/09 09:02:08 runner 0 connected
2025/09/09 09:02:16 base crash: lost connection to test machine
2025/09/09 09:02:24 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:02:29 runner 1 connected
2025/09/09 09:02:31 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 09:02:53 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:03:05 runner 2 connected
2025/09/09 09:03:11 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 09:03:26 base crash: kernel BUG in txUnlock
2025/09/09 09:03:28 runner 4 connected
2025/09/09 09:03:31 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:03:36 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 09:03:43 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:04:00 runner 2 connected
2025/09/09 09:04:13 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:04:13 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:04:14 runner 1 connected
2025/09/09 09:04:22 runner 3 connected
2025/09/09 09:04:26 runner 8 connected
2025/09/09 09:04:43 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:04:44 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:04:51 base crash: lost connection to test machine
2025/09/09 09:04:52 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:04:54 base crash: lost connection to test machine
2025/09/09 09:05:02 runner 9 connected
2025/09/09 09:05:03 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:05:05 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:05:06 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:05:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 639, "corpus": 45637, "corpus [files]": 347, "corpus [symbols]": 200, "cover overflows": 71019, "coverage": 313636, "distributor delayed": 45512, "distributor undelayed": 45511, "distributor violated": 186, "exec candidate": 79415, "exec collide": 12365, "exec fuzz": 23169, "exec gen": 1315, "exec hints": 16240, "exec inject": 0, "exec minimize": 10915, "exec retries": 30, "exec seeds": 1349, "exec smash": 10830, "exec total [base]": 187292, "exec total [new]": 410424, "exec triage": 145172, "executor restarts [base]": 921, "executor restarts [new]": 1834, "fault jobs": 0, "fuzzer jobs": 17, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 8, "max signal": 318302, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7008, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46922, "no exec duration": 111903000000, "no exec requests": 483, "pending": 0, "prog exec time": 537, "reproducing": 1, "rpc recv": 20307401200, "rpc sent": 4199227016, "signal": 308133, "smash jobs": 4, "triage jobs": 5, "vm output": 107087606, "vm restarts [base]": 107, "vm restarts [new]": 200 }
2025/09/09 09:05:31 runner 5 connected
2025/09/09 09:05:32 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:05:33 runner 0 connected
2025/09/09 09:05:34 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:05:36 base crash: general protection fault in pcl818_ai_cancel
2025/09/09 09:05:39 runner 2 connected
2025/09/09 09:05:41 runner 3 connected
2025/09/09 09:05:43 runner 1 connected
2025/09/09 09:05:52 runner 4 connected
2025/09/09 09:05:55 runner 2 connected
2025/09/09 09:06:08 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:06:21 runner 8 connected
2025/09/09 09:06:24 runner 9 connected
2025/09/09 09:06:24 runner 0 connected
2025/09/09 09:06:25 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:06:48 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:06:56 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:06:58 runner 5 connected
2025/09/09 09:07:10 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:07:25 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:07:27 base crash: lost connection to test machine
2025/09/09 09:07:35 base crash: lost connection to test machine
2025/09/09 09:07:37 runner 2 connected
2025/09/09 09:07:42 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:07:45 runner 4 connected
2025/09/09 09:07:47 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:08:15 runner 1 connected
2025/09/09 09:08:18 runner 1 connected
2025/09/09 09:08:22 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:08:25 runner 2 connected
2025/09/09 09:08:27 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:08:38 runner 5 connected
2025/09/09 09:08:44 base crash: lost connection to test machine
2025/09/09 09:08:54 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:09:12 runner 2 connected
2025/09/09 09:09:16 runner 4 connected
2025/09/09 09:09:18 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:09:33 runner 0 connected
2025/09/09 09:09:39 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:09:41 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2025/09/09 09:09:42 runner 0 connected
2025/09/09 09:09:43 base crash: lost connection to test machine
2025/09/09 09:09:59 base crash: lost connection to test machine
2025/09/09 09:10:07 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/09/09 09:10:09 runner 1 connected
2025/09/09 09:10:28 runner 3 connected
2025/09/09 09:10:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 689, "corpus": 45669, "corpus [files]": 347, "corpus [symbols]": 200, "cover overflows": 72786, "coverage": 313734, "distributor delayed": 45610, "distributor undelayed": 45610, "distributor violated": 186, "exec candidate": 79415, "exec collide": 13354, "exec fuzz": 25008, "exec gen": 1410, "exec hints": 17468, "exec inject": 0, "exec minimize": 11896, "exec retries": 30, "exec seeds": 1453, "exec smash": 11537, "exec total [base]": 190771, "exec total [new]": 416572, "exec triage": 145366, "executor restarts [base]": 975, "executor restarts [new]": 1932, "fault jobs": 0, "fuzzer jobs": 28, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 13, "max signal": 318493, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 7583, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46996, "no exec duration": 112786000000, "no exec requests": 486, "pending": 0, "prog exec time": 574, "reproducing": 1, "rpc recv": 21260115240, "rpc sent": 4373429864, "signal": 308208, "smash jobs": 6, "triage jobs": 9, "vm output": 111502316, "vm restarts [base]": 113, "vm restarts [new]": 217 }
2025/09/09 09:10:31 runner 8 connected
2025/09/09 09:10:32 runner 2 connected
2025/09/09 09:10:54 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:10:55 runner 3 connected
2025/09/09 09:10:55 runner 9 connected
2025/09/09 09:11:40 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:11:42 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:11:46 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:12:17 reproducing crash 'KASAN: slab-out-of-bounds Read in dtSplitPage': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_dtree.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:12:17 repro finished 'KASAN: slab-out-of-bounds Read in dtSplitPage', repro=true crepro=false desc='KASAN: slab-out-of-bounds Read in dtInsertEntry' hub=false from_dashboard=false
2025/09/09 09:12:17 found repro for "KASAN: slab-out-of-bounds Read in dtInsertEntry" (orig title: "KASAN: slab-out-of-bounds Read in dtSplitPage", reliability: 1), took 29.50 minutes
2025/09/09 09:12:17 "KASAN: slab-out-of-bounds Read in dtInsertEntry": saved crash log into 1757409137.crash.log
2025/09/09 09:12:17 "KASAN: slab-out-of-bounds Read in dtInsertEntry": saved repro log into 1757409137.repro.log
2025/09/09 09:12:20 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:12:27 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false]
2025/09/09 09:12:29 runner 7 connected
2025/09/09 09:12:36 runner 2 connected
2025/09/09 09:12:37 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:12:40 runner 4 connected
2025/09/09 09:12:43 base crash: lost connection to test machine
2025/09/09 09:13:06 runner 6 connected
2025/09/09 09:13:07 base crash: lost connection to test machine
2025/09/09 09:13:11 runner 9 connected
2025/09/09 09:13:16 runner 5 connected
2025/09/09 09:13:27 runner 1 connected
2025/09/09 09:13:30 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:13:34 runner 2 connected
2025/09/09 09:13:36 attempt #0 to run "KASAN: slab-out-of-bounds Read in dtInsertEntry" on base: crashed with KASAN: slab-out-of-bounds Read in dtInsertEntry
2025/09/09 09:13:36 crashes both: KASAN: slab-out-of-bounds Read in dtInsertEntry / KASAN: slab-out-of-bounds Read in dtInsertEntry
2025/09/09 09:13:55 runner 3 connected
2025/09/09 09:14:01 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:14:05 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:14:09 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:14:18 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:14:20 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:14:20 runner 2 connected
2025/09/09 09:14:25 runner 0 connected
2025/09/09 09:14:33 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:14:50 runner 6 connected
2025/09/09 09:14:52 runner 3 connected
2025/09/09 09:14:59 runner 1 connected
2025/09/09 09:15:08 runner 8 connected
2025/09/09 09:15:08 runner 7 connected
2025/09/09 09:15:23 runner 4 connected
2025/09/09 09:15:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 723, "corpus": 45705, "corpus [files]": 348, "corpus [symbols]": 201, "cover overflows": 75501, "coverage": 313789, "distributor delayed": 45692, "distributor undelayed": 45692, "distributor violated": 186, "exec candidate": 79415, "exec collide": 15245, "exec fuzz": 28554, "exec gen": 1593, "exec hints": 19084, "exec inject": 0, "exec minimize": 12897, "exec retries": 30, "exec seeds": 1559, "exec smash": 12434, "exec total [base]": 195230, "exec total [new]": 426028, "exec triage": 145566, "executor restarts [base]": 1032, "executor restarts [new]": 2057, "fault jobs": 0, "fuzzer jobs": 16, "fuzzing VMs [base]": 4, "fuzzing VMs [new]": 8, "hints jobs": 6, "max signal": 318641, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8252, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47073, "no exec duration": 121928000000, "no exec requests": 499, "pending": 0, "prog exec time": 500, "reproducing": 0, "rpc recv": 22243312344, "rpc sent": 4580092184, "signal": 308258, "smash jobs": 3, "triage jobs": 7, "vm output": 115214755, "vm restarts [base]": 118, "vm restarts [new]": 233 }
2025/09/09 09:15:50 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:15:59 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:15:59 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:16:16 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:16:27 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:16:39 runner 5 connected
2025/09/09 09:16:48 runner 2 connected
2025/09/09 09:16:48 runner 3 connected
2025/09/09 09:16:53 base crash: possible deadlock in ocfs2_init_acl
2025/09/09 09:17:06 runner 8 connected
2025/09/09 09:17:09 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:17:15 runner 1 connected
2025/09/09 09:17:18 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:17:25 base crash: lost connection to test machine
2025/09/09 09:17:25 base crash: lost connection to test machine
2025/09/09 09:17:42 runner 1 connected
2025/09/09 09:17:55 patched crashed: KASAN: slab-use-after-free Write in bch2_get_next_dev [need repro = true]
2025/09/09 09:17:55 scheduled a reproduction of 'KASAN: slab-use-after-free Write in bch2_get_next_dev'
2025/09/09 09:17:55 start reproducing 'KASAN: slab-use-after-free Write in bch2_get_next_dev'
2025/09/09 09:17:58 runner 6 connected
2025/09/09 09:18:10 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:18:10 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2025/09/09 09:18:14 runner 3 connected
2025/09/09 09:18:14 runner 0 connected
2025/09/09 09:18:33 patched crashed: WARNING in xfrm_state_fini [need repro = false]
2025/09/09 09:18:44 runner 2 connected
2025/09/09 09:18:58 runner 4 connected
2025/09/09 09:19:00 runner 9 connected
2025/09/09 09:19:08 base crash: lost connection to test machine
2025/09/09 09:19:17 base crash: possible deadlock in ocfs2_init_acl
2025/09/09 09:19:18 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:19:22 runner 8 connected
2025/09/09 09:19:22 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:19:56 runner 3 connected
2025/09/09 09:19:59 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:20:05 runner 0 connected
2025/09/09 09:20:05 base crash: lost connection to test machine
2025/09/09 09:20:10 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:20:10 runner 3 connected
2025/09/09 09:20:19 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:20:30 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:20:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 765, "corpus": 45738, "corpus [files]": 349, "corpus [symbols]": 202, "cover overflows": 77643, "coverage": 313865, "distributor delayed": 45769, "distributor undelayed": 45769, "distributor violated": 186, "exec candidate": 79415, "exec collide": 16933, "exec fuzz": 31934, "exec gen": 1775, "exec hints": 19518, "exec inject": 0, "exec minimize": 13572, "exec retries": 31, "exec seeds": 1650, "exec smash": 13219, "exec total [base]": 199394, "exec total [new]": 433427, "exec triage": 145733, "executor restarts [base]": 1092, "executor restarts [new]": 2203, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 4, "max signal": 318910, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8685, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47142, "no exec duration": 125407000000, "no exec requests": 505, "pending": 0, "prog exec time": 730, "reproducing": 1, "rpc recv": 22998968752, "rpc sent": 4766339776, "signal": 308321, "smash jobs": 6, "triage jobs": 9, "vm output": 120150647, "vm restarts [base]": 123, "vm restarts [new]": 244 }
2025/09/09 09:20:40 base crash: lost connection to test machine
2025/09/09 09:20:43 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:20:48 runner 5 connected
2025/09/09 09:20:54 runner 2 connected
2025/09/09 09:21:08 runner 2 connected
2025/09/09 09:21:17 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:21:19 runner 6 connected
2025/09/09 09:21:30 runner 0 connected
2025/09/09 09:21:31 runner 9 connected
2025/09/09 09:22:04 base crash: lost connection to test machine
2025/09/09 09:22:06 runner 3 connected
2025/09/09 09:22:08 patched crashed: general protection fault in pcl818_ai_cancel [need repro = false]
2025/09/09 09:22:26 base crash "KASAN: slab-out-of-bounds Write in bch2_dirent_init_name" is already known
2025/09/09 09:22:26 patched crashed: KASAN: slab-out-of-bounds Write in bch2_dirent_init_name [need repro = false]
2025/09/09 09:22:26 base crash: lost connection to test machine
2025/09/09 09:22:30 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:22:54 runner 3 connected
2025/09/09 09:23:05 runner 5 connected
2025/09/09 09:23:05 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:23:15 runner 9 connected
2025/09/09 09:23:17 runner 0 connected
2025/09/09 09:23:19 runner 8 connected
2025/09/09 09:23:54 runner 2 connected 2025/09/09 09:24:08 base crash: WARNING in xfrm_state_fini 2025/09/09 09:24:15 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:24:20 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/09 09:24:22 patched crashed: KASAN: slab-out-of-bounds Read in dtSplitPage [need repro = false] 2025/09/09 09:24:42 base crash: KASAN: slab-out-of-bounds Write in bch2_dirent_init_name 2025/09/09 09:24:57 runner 2 connected 2025/09/09 09:25:01 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:25:02 runner 9 connected 2025/09/09 09:25:09 runner 5 connected 2025/09/09 09:25:11 runner 4 connected 2025/09/09 09:25:13 base crash: lost connection to test machine 2025/09/09 09:25:30 STAT { "buffer too small": 1, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 842, "corpus": 45779, "corpus [files]": 351, "corpus [symbols]": 202, "cover overflows": 79818, "coverage": 313946, "distributor delayed": 45846, "distributor undelayed": 45846, "distributor violated": 186, "exec candidate": 79415, "exec collide": 18065, "exec fuzz": 34269, "exec gen": 1893, "exec hints": 19897, "exec inject": 0, "exec minimize": 14514, "exec retries": 31, "exec seeds": 1773, "exec smash": 14086, "exec total [base]": 202361, "exec total [new]": 439488, "exec triage": 145889, "executor restarts [base]": 1143, "executor restarts [new]": 2281, "fault jobs": 0, "fuzzer jobs": 19, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 7, "hints jobs": 6, "max signal": 319077, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9308, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47202, "no exec duration": 125407000000, "no exec requests": 505, "pending": 0, "prog exec time": 821, "reproducing": 1, "rpc recv": 23780588736, "rpc sent": 4931241160, "signal": 
308396, "smash jobs": 7, "triage jobs": 6, "vm output": 127145713, "vm restarts [base]": 128, "vm restarts [new]": 256 } 2025/09/09 09:25:31 runner 1 connected 2025/09/09 09:25:32 base crash: lost connection to test machine 2025/09/09 09:25:51 runner 3 connected 2025/09/09 09:25:52 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:26:01 base crash: lost connection to test machine 2025/09/09 09:26:02 runner 3 connected 2025/09/09 09:26:03 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:26:21 runner 2 connected 2025/09/09 09:26:21 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:26:42 runner 4 connected 2025/09/09 09:26:44 patched crashed: INFO: rcu detected stall in corrupted [need repro = false] 2025/09/09 09:26:51 runner 7 connected 2025/09/09 09:26:52 runner 1 connected 2025/09/09 09:27:11 runner 2 connected 2025/09/09 09:27:33 runner 6 connected 2025/09/09 09:27:46 base crash: lost connection to test machine 2025/09/09 09:28:03 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:28:14 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 09:28:21 base crash: WARNING in xfrm_state_fini 2025/09/09 09:28:21 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:28:23 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:28:28 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:28:36 runner 1 connected 2025/09/09 09:29:03 runner 4 connected 2025/09/09 09:29:10 runner 2 connected 2025/09/09 09:29:11 runner 3 connected 2025/09/09 09:29:12 runner 6 connected 2025/09/09 09:29:14 patched crashed: lost 
connection to test machine [need repro = false] 2025/09/09 09:29:17 runner 7 connected 2025/09/09 09:29:25 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:29:39 base crash: lost connection to test machine 2025/09/09 09:29:42 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:29:49 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:30:03 runner 5 connected 2025/09/09 09:30:10 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:30:11 base crash: KASAN: slab-out-of-bounds Read in dtSplitPage 2025/09/09 09:30:14 runner 2 connected 2025/09/09 09:30:19 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:30:28 runner 0 connected 2025/09/09 09:30:30 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 900, "corpus": 45811, "corpus [files]": 351, "corpus [symbols]": 202, "cover overflows": 82016, "coverage": 314010, "distributor delayed": 45948, "distributor undelayed": 45948, "distributor violated": 186, "exec candidate": 79415, "exec collide": 19249, "exec fuzz": 36644, "exec gen": 2018, "exec hints": 20362, "exec inject": 0, "exec minimize": 15400, "exec retries": 32, "exec seeds": 1875, "exec smash": 14863, "exec total [base]": 205727, "exec total [new]": 445600, "exec triage": 146081, "executor restarts [base]": 1177, "executor restarts [new]": 2358, "fault jobs": 0, "fuzzer jobs": 32, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 
5, "hints jobs": 8, "max signal": 319177, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 9853, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47275, "no exec duration": 125414000000, "no exec requests": 506, "pending": 0, "prog exec time": 823, "reproducing": 1, "rpc recv": 24581801492, "rpc sent": 5130383616, "signal": 308451, "smash jobs": 4, "triage jobs": 20, "vm output": 133019251, "vm restarts [base]": 135, "vm restarts [new]": 267 } 2025/09/09 09:30:33 runner 4 connected 2025/09/09 09:30:34 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:30:59 runner 7 connected 2025/09/09 09:30:59 runner 2 connected 2025/09/09 09:31:02 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:31:08 base crash: lost connection to test machine 2025/09/09 09:31:22 runner 3 connected 2025/09/09 09:31:40 base crash: lost connection to test machine 2025/09/09 09:31:52 runner 5 connected 2025/09/09 09:32:06 runner 3 connected 2025/09/09 09:32:16 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/09 09:32:20 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:32:31 runner 1 connected 2025/09/09 09:33:12 runner 3 connected 2025/09/09 09:33:12 base crash: lost connection to test machine 2025/09/09 09:33:32 patched crashed: WARNING in hfsplus_bnode_create [need repro = true] 2025/09/09 09:33:32 scheduled a reproduction of 'WARNING in hfsplus_bnode_create' 2025/09/09 09:33:32 start reproducing 'WARNING in hfsplus_bnode_create' 2025/09/09 09:33:47 base crash: lost connection to test machine 2025/09/09 09:33:57 
reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:34:09 runner 1 connected 2025/09/09 09:34:19 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 09:34:26 base crash "INFO: task hung in __iterate_supers" is already known 2025/09/09 09:34:26 patched crashed: INFO: task hung in __iterate_supers [need repro = false] 2025/09/09 09:34:28 runner 7 connected 2025/09/09 09:34:39 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:34:44 runner 2 connected 2025/09/09 09:34:48 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:35:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 09:35:10 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:35:15 runner 8 connected 2025/09/09 09:35:15 runner 9 connected 2025/09/09 09:35:26 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:35:27 base crash: lost connection to test machine 2025/09/09 09:35:30 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 933, "corpus": 45838, "corpus [files]": 352, "corpus [symbols]": 203, "cover overflows": 83755, "coverage": 314052, "distributor delayed": 46017, "distributor undelayed": 46017, "distributor violated": 186, "exec candidate": 79415, "exec 
collide": 20645, "exec fuzz": 39364, "exec gen": 2180, "exec hints": 21160, "exec inject": 0, "exec minimize": 16008, "exec retries": 32, "exec seeds": 1952, "exec smash": 15466, "exec total [base]": 208666, "exec total [new]": 452089, "exec triage": 146204, "executor restarts [base]": 1218, "executor restarts [new]": 2431, "fault jobs": 0, "fuzzer jobs": 9, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 4, "max signal": 319266, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10231, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47316, "no exec duration": 131706000000, "no exec requests": 517, "pending": 0, "prog exec time": 407, "reproducing": 2, "rpc recv": 25227902496, "rpc sent": 5270711032, "signal": 308500, "smash jobs": 1, "triage jobs": 4, "vm output": 137201858, "vm restarts [base]": 140, "vm restarts [new]": 275 } 2025/09/09 09:35:37 runner 4 connected 2025/09/09 09:35:58 runner 5 connected 2025/09/09 09:36:07 runner 6 connected 2025/09/09 09:36:08 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:36:09 patched crashed: WARNING in xfrm_state_fini [need repro = false] 2025/09/09 09:36:15 runner 0 connected 2025/09/09 09:36:20 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:36:32 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:36:48 base crash: lost connection to test machine 2025/09/09 09:36:56 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:36:56 runner 7 connected 2025/09/09 09:36:57 runner 8 connected 2025/09/09 09:36:58 patched crashed: lost connection to test machine 
[need repro = false] 2025/09/09 09:37:09 runner 9 connected 2025/09/09 09:37:18 base crash: WARNING in hfsplus_bnode_create 2025/09/09 09:37:21 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:37:34 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:37:36 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:37:38 runner 0 connected 2025/09/09 09:37:45 runner 4 connected 2025/09/09 09:37:48 runner 6 connected 2025/09/09 09:37:59 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:38:07 runner 2 connected 2025/09/09 09:38:09 runner 3 connected 2025/09/09 09:38:22 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:38:25 runner 7 connected 2025/09/09 09:38:49 runner 8 connected 2025/09/09 09:38:49 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:38:51 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:39:09 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 09:39:24 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:39:30 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:39:33 base crash: lost connection to test machine 2025/09/09 09:39:35 patched crashed: lost connection to test machine 
[need repro = false] 2025/09/09 09:39:38 runner 6 connected 2025/09/09 09:39:41 runner 5 connected 2025/09/09 09:39:46 base crash: WARNING in dbAdjTree 2025/09/09 09:39:58 runner 9 connected 2025/09/09 09:40:12 runner 7 connected 2025/09/09 09:40:23 runner 1 connected 2025/09/09 09:40:24 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:40:24 runner 8 connected 2025/09/09 09:40:27 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:40:30 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:40:30 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 939, "corpus": 45851, "corpus [files]": 354, "corpus [symbols]": 205, "cover overflows": 85187, "coverage": 314065, "distributor delayed": 46066, "distributor undelayed": 46066, "distributor violated": 186, "exec candidate": 79415, "exec collide": 22152, "exec fuzz": 42250, "exec gen": 2342, "exec hints": 21778, "exec inject": 0, "exec minimize": 16366, "exec retries": 32, "exec seeds": 1991, "exec smash": 15703, "exec total [base]": 213053, "exec total [new]": 457979, "exec triage": 146284, "executor restarts [base]": 1255, "executor restarts [new]": 2510, "fault jobs": 0, "fuzzer jobs": 12, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 5, "max signal": 319319, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10490, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47344, "no exec duration": 133027000000, "no exec requests": 525, "pending": 0, "prog exec time": 343, "reproducing": 2, "rpc recv": 26080663112, "rpc sent": 5415209760, "signal": 
308513, "smash jobs": 5, "triage jobs": 2, "vm output": 140416155, "vm restarts [base]": 144, "vm restarts [new]": 291 } 2025/09/09 09:40:37 runner 0 connected 2025/09/09 09:40:46 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:41:01 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2025/09/09 09:41:12 runner 6 connected 2025/09/09 09:41:13 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:41:16 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:41:21 runner 3 connected 2025/09/09 09:41:24 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2025/09/09 09:41:35 runner 9 connected 2025/09/09 09:41:37 base crash: lost connection to test machine 2025/09/09 09:41:51 runner 2 connected 2025/09/09 09:41:53 patched crashed: kernel BUG in txUnlock [need repro = false] 2025/09/09 09:41:56 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:42:01 base crash: lost connection to test machine 2025/09/09 09:42:02 runner 4 connected 2025/09/09 09:42:13 runner 5 connected 2025/09/09 09:42:25 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:42:26 runner 3 connected 2025/09/09 09:42:29 base crash: lost connection to test machine 2025/09/09 09:42:40 base crash: lost connection to test machine 2025/09/09 09:42:41 runner 8 connected 2025/09/09 09:42:45 runner 6 connected 2025/09/09 09:42:49 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:42:51 runner 0 connected 2025/09/09 
09:43:19 runner 2 connected 2025/09/09 09:43:24 patched crashed: KASAN: slab-use-after-free Read in xfrm_alloc_spi [need repro = false] 2025/09/09 09:43:26 base crash: lost connection to test machine 2025/09/09 09:43:26 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:43:31 runner 1 connected 2025/09/09 09:43:38 runner 7 connected 2025/09/09 09:43:43 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:43:51 base crash: kernel BUG in txUnlock 2025/09/09 09:43:59 base crash: lost connection to test machine 2025/09/09 09:44:13 runner 5 connected 2025/09/09 09:44:15 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:44:16 runner 8 connected 2025/09/09 09:44:17 runner 3 connected 2025/09/09 09:44:31 runner 4 connected 2025/09/09 09:44:41 runner 1 connected 2025/09/09 09:44:49 runner 0 connected 2025/09/09 09:45:05 base crash: possible deadlock in ocfs2_init_acl 2025/09/09 09:45:07 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:45:13 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:45:14 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:45:30 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 951, "corpus": 45865, "corpus [files]": 354, "corpus [symbols]": 205, "cover overflows": 87043, "coverage": 314105, "distributor delayed": 46122, "distributor undelayed": 46122, "distributor violated": 186, "exec candidate": 79415, "exec collide": 23937, "exec fuzz": 
45555, "exec gen": 2528, "exec hints": 22338, "exec inject": 0, "exec minimize": 16778, "exec retries": 32, "exec seeds": 2033, "exec smash": 16048, "exec total [base]": 215954, "exec total [new]": 464722, "exec triage": 146389, "executor restarts [base]": 1307, "executor restarts [new]": 2581, "fault jobs": 0, "fuzzer jobs": 15, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 2, "max signal": 319410, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 10764, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47385, "no exec duration": 139147000000, "no exec requests": 533, "pending": 0, "prog exec time": 535, "reproducing": 2, "rpc recv": 26865702280, "rpc sent": 5564185080, "signal": 308553, "smash jobs": 5, "triage jobs": 8, "vm output": 144827260, "vm restarts [base]": 153, "vm restarts [new]": 302 } 2025/09/09 09:45:42 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:45:45 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:45:55 runner 1 connected 2025/09/09 09:45:55 runner 7 connected 2025/09/09 09:46:02 runner 6 connected 2025/09/09 09:46:20 base crash: lost connection to test machine 2025/09/09 09:46:34 runner 9 connected 2025/09/09 09:46:36 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:47:06 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:47:09 runner 0 connected 2025/09/09 09:47:16 base crash: lost 
connection to test machine 2025/09/09 09:47:33 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:47:40 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:47:56 runner 4 connected 2025/09/09 09:48:05 runner 1 connected 2025/09/09 09:48:14 base crash: lost connection to test machine 2025/09/09 09:48:21 base crash: lost connection to test machine 2025/09/09 09:48:23 runner 8 connected 2025/09/09 09:48:30 runner 9 connected 2025/09/09 09:48:33 base crash: lost connection to test machine 2025/09/09 09:48:56 base crash: WARNING in xfrm_state_fini 2025/09/09 09:49:03 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:49:04 runner 0 connected 2025/09/09 09:49:10 runner 2 connected 2025/09/09 09:49:21 runner 3 connected 2025/09/09 09:49:23 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:49:27 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:49:32 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:49:45 runner 1 connected 2025/09/09 09:49:52 runner 4 connected 2025/09/09 09:50:00 patched crashed: INFO: task hung in corrupted [need repro = true] 2025/09/09 09:50:00 scheduled a reproduction of 'INFO: task hung in corrupted' 2025/09/09 09:50:00 start reproducing 'INFO: task hung in corrupted' 2025/09/09 09:50:20 runner 9 connected 2025/09/09 09:50:23 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:50:30 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 
0, "comps overflows": 990, "corpus": 45893, "corpus [files]": 354, "corpus [symbols]": 205, "cover overflows": 89482, "coverage": 314224, "distributor delayed": 46208, "distributor undelayed": 46208, "distributor violated": 186, "exec candidate": 79415, "exec collide": 25371, "exec fuzz": 48380, "exec gen": 2700, "exec hints": 22803, "exec inject": 0, "exec minimize": 17507, "exec retries": 32, "exec seeds": 2114, "exec smash": 16729, "exec total [base]": 219444, "exec total [new]": 471266, "exec triage": 146546, "executor restarts [base]": 1346, "executor restarts [new]": 2646, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 4, "max signal": 319611, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11222, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47442, "no exec duration": 139493000000, "no exec requests": 538, "pending": 0, "prog exec time": 598, "reproducing": 3, "rpc recv": 27547819136, "rpc sent": 5769080496, "signal": 308669, "smash jobs": 2, "triage jobs": 4, "vm output": 149275459, "vm restarts [base]": 160, "vm restarts [new]": 310 } 2025/09/09 09:50:36 base crash: lost connection to test machine 2025/09/09 09:50:44 patched crashed: lost connection to test machine [need repro = false] 2025/09/09 09:50:49 runner 6 connected 2025/09/09 09:50:56 base crash: WARNING in xfrm6_tunnel_net_exit 2025/09/09 09:51:16 base crash: lost connection to test machine 2025/09/09 09:51:20 reproducing crash 'WARNING in hfsplus_bnode_create': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/hfsplus/bnode.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2025/09/09 09:51:20 repro finished 'WARNING in hfsplus_bnode_create', repro=true crepro=false desc='WARNING in hfsplus_bnode_create' hub=false 
from_dashboard=false
2025/09/09 09:51:20 found repro for "WARNING in hfsplus_bnode_create" (orig title: "-SAME-", reliability: 1), took 17.74 minutes
2025/09/09 09:51:20 "WARNING in hfsplus_bnode_create": saved crash log into 1757411480.crash.log
2025/09/09 09:51:20 "WARNING in hfsplus_bnode_create": saved repro log into 1757411480.repro.log
2025/09/09 09:51:23 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:51:25 runner 2 connected
2025/09/09 09:51:31 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:51:33 runner 5 connected
2025/09/09 09:51:36 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:51:45 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:51:47 runner 3 connected
2025/09/09 09:51:53 runner 0 connected
2025/09/09 09:52:05 runner 1 connected
2025/09/09 09:52:12 runner 6 connected
2025/09/09 09:52:20 runner 9 connected
2025/09/09 09:52:32 runner 8 connected
2025/09/09 09:52:41 runner 7 connected
2025/09/09 09:52:42 attempt #0 to run "WARNING in hfsplus_bnode_create" on base: crashed with WARNING in hfsplus_bnode_create
2025/09/09 09:52:42 crashes both: WARNING in hfsplus_bnode_create / WARNING in hfsplus_bnode_create
2025/09/09 09:53:02 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:53:27 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:53:31 runner 0 connected
2025/09/09 09:53:38 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 09:53:51 runner 4 connected
2025/09/09 09:54:17 runner 6 connected
2025/09/09 09:54:26 runner 8 connected
2025/09/09 09:54:58 base crash: lost connection to test machine
2025/09/09 09:55:11 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:55:14 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/bcachefs/alloc_background.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2025/09/09 09:55:30 STAT { "buffer too small": 3, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 1026, "corpus": 45913, "corpus [files]": 354, "corpus [symbols]": 205, "cover overflows": 91426, "coverage": 314292, "distributor delayed": 46264, "distributor undelayed": 46264, "distributor violated": 186, "exec candidate": 79415, "exec collide": 26782, "exec fuzz": 51104, "exec gen": 2846, "exec hints": 23578, "exec inject": 0, "exec minimize": 17796, "exec retries": 33, "exec seeds": 2172, "exec smash": 17116, "exec total [base]": 223321, "exec total [new]": 477160, "exec triage": 146640, "executor restarts [base]": 1382, "executor restarts [new]": 2717, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 9, "max signal": 319724, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 11483, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47481, "no exec duration": 142857000000, "no exec requests": 544, "pending": 0, "prog exec time": 862, "reproducing": 2, "rpc recv": 28216942644, "rpc sent": 6041790120, "signal": 308728, "smash jobs": 5, "triage jobs": 6, "vm output": 156011133, "vm restarts [base]": 164, "vm restarts [new]": 320 }
2025/09/09 09:55:38 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:55:47 runner 0 connected
2025/09/09 09:56:00 runner 9 connected
2025/09/09 09:56:11 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:56:27 base crash: lost connection to test machine
2025/09/09 09:56:27 runner 8 connected
2025/09/09 09:57:00 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2025/09/09 09:57:00 runner 4 connected
2025/09/09 09:57:03 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:57:16 runner 3 connected
2025/09/09 09:57:23 reproducing crash 'KASAN: slab-use-after-free Write in bch2_get_next_dev': reproducer is too unreliable: 0.10
2025/09/09 09:57:23 repro finished 'KASAN: slab-use-after-free Write in bch2_get_next_dev', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/09/09 09:57:23 failed repro for "KASAN: slab-use-after-free Write in bch2_get_next_dev", err=%!s()
2025/09/09 09:57:23 "KASAN: slab-use-after-free Write in bch2_get_next_dev": saved crash log into 1757411843.crash.log
2025/09/09 09:57:23 "KASAN: slab-use-after-free Write in bch2_get_next_dev": saved repro log into 1757411843.repro.log
2025/09/09 09:57:31 base crash: lost connection to test machine
2025/09/09 09:57:48 runner 5 connected
2025/09/09 09:57:49 patched crashed: INFO: trying to register non-static key in ocfs2_dlm_shutdown [need repro = true]
2025/09/09 09:57:49 scheduled a reproduction of 'INFO: trying to register non-static key in ocfs2_dlm_shutdown'
2025/09/09 09:57:49 start reproducing 'INFO: trying to register non-static key in ocfs2_dlm_shutdown'
2025/09/09 09:57:53 runner 9 connected
2025/09/09 09:58:20 runner 0 connected
2025/09/09 09:58:26 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:58:26 base crash: lost connection to test machine
2025/09/09 09:58:31 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:58:46 runner 1 connected
2025/09/09 09:58:55 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:59:14 runner 7 connected
2025/09/09 09:59:14 runner 2 connected
2025/09/09 09:59:20 runner 4 connected
2025/09/09 09:59:24 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 09:59:44 runner 5 connected
2025/09/09 09:59:55 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 10:00:14 runner 1 connected
2025/09/09 10:00:15 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 10:00:20 patched crashed: lost connection to test machine [need repro = false]
2025/09/09 10:00:26 status reporting terminated
2025/09/09 10:00:26 bug reporting terminated
2025/09/09 10:00:26 repro finished 'INFO: trying to register non-static key in ocfs2_dlm_shutdown', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/09/09 10:00:27 syz-diff (base): kernel context loop terminated
2025/09/09 10:00:34 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false
2025/09/09 10:01:02 syz-diff (new): kernel context loop terminated
2025/09/09 10:01:02 diff fuzzing terminated
2025/09/09 10:01:02 fuzzing is finished
2025/09/09 10:01:02 status at the end:
Title  On-Base  On-Patched
INFO: rcu detected stall in corrupted  1 crashes
INFO: task hung in __closure_sync  1 crashes
INFO: task hung in __iterate_supers  1 crashes
INFO: task hung in corrupted  2 crashes
INFO: task hung in sync_bdevs  1 crashes  1 crashes
INFO: task hung in vfs_setxattr  1 crashes  1 crashes
INFO: trying to register non-static key in ocfs2_dlm_shutdown  1 crashes
KASAN: slab-out-of-bounds Read in dtInsertEntry  1 crashes [reproduced]
KASAN: slab-out-of-bounds Read in dtSplitPage  2 crashes  2 crashes
KASAN: slab-out-of-bounds Write in bch2_dirent_init_name  1 crashes  1 crashes
KASAN: slab-use-after-free Read in __xfrm_state_lookup  1 crashes
KASAN: slab-use-after-free Read in xfrm_alloc_spi  2 crashes  8 crashes
KASAN: slab-use-after-free Read in xfrm_state_find  1 crashes  1 crashes
KASAN: slab-use-after-free Write in __xfrm_state_delete  1 crashes  2 crashes
KASAN: slab-use-after-free Write in bch2_get_next_dev  1 crashes
WARNING in bch2_trans_put  1 crashes
WARNING in dbAdjTree  1 crashes
WARNING in hfsplus_bnode_create  2 crashes  1 crashes [reproduced]
WARNING in xfrm6_tunnel_net_exit  1 crashes  2 crashes
WARNING in xfrm_state_fini  11 crashes  21 crashes
general protection fault in device_move  1 crashes
general protection fault in pcl818_ai_cancel  3 crashes  6 crashes
kernel BUG in jfs_evict_inode  2 crashes  3 crashes
kernel BUG in may_open  1 crashes  2 crashes
kernel BUG in txUnlock  7 crashes  10 crashes
lost connection to test machine  105 crashes  185 crashes
possible deadlock in attr_data_get_block  1 crashes  3 crashes
possible deadlock in dqget  1 crashes
possible deadlock in kernfs_iop_getattr  2 crashes  1 crashes [reproduced]
possible deadlock in mark_as_free_ex  1 crashes
possible deadlock in ntfs_look_for_free_space  1 crashes
possible deadlock in ocfs2_fiemap  1 crashes  1 crashes
possible deadlock in ocfs2_init_acl  6 crashes  19 crashes
possible deadlock in ocfs2_reserve_suballoc_bits  2 crashes  4 crashes
possible deadlock in ocfs2_truncate_file  1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree  6 crashes  15 crashes [reproduced]
possible deadlock in ocfs2_xattr_set  1 crashes  4 crashes
possible deadlock in run_unpack_ex  4 crashes
unregister_netdevice: waiting for DEV to become free  1 crashes  4 crashes