2026/01/29 10:19:53 extracted 326156 text symbol hashes for base and 326156 for patched
2026/01/29 10:19:53 binaries are different, continuing fuzzing
2026/01/29 10:19:53 adding modified_functions to focus areas: ["__kmalloc_cache_node_noprof" "__kmalloc_cache_noprof" "__kmalloc_node_noprof" "__kmalloc_node_track_caller_noprof" "__kmalloc_noprof" "__kmem_cache_alloc_bulk" "__kmem_cache_do_shrink" "__kmem_cache_shutdown" "__kvmalloc_node_noprof" "__pcs_replace_empty_main" "__pcs_replace_full_main" "__refill_objects_node" "__slab_free" "do_kmem_cache_create" "flush_cpu_sheaves" "free_deferred_objects" "kfree" "kfree_nolock" "kmalloc_nolock_noprof" "kmem_cache_alloc_bulk_noprof" "kmem_cache_alloc_from_sheaf_noprof" "kmem_cache_alloc_lru_noprof" "kmem_cache_alloc_node_noprof" "kmem_cache_alloc_noprof" "kmem_cache_free" "kmem_cache_free_bulk" "ksize" "kvfree_rcu_cb" "memcg_alloc_abort_single" "refill_objects" "sheaf_flush_main" "slab_out_of_memory" "validate_slab_cache"]
2026/01/29 10:19:53 adding directly modified files to focus areas: ["mm/slub.c"]
2026/01/29 10:19:53 downloading corpus #1: "https://storage.googleapis.com/syzkaller/corpus/ci-upstream-kasan-gce-root-corpus.db"
2026/01/29 10:20:52 runner 3 connected
2026/01/29 10:20:52 runner 2 connected
2026/01/29 10:20:52 runner 4 connected
2026/01/29 10:20:53 runner 0 connected
2026/01/29 10:20:53 runner 8 connected
2026/01/29 10:20:53 runner 7 connected
2026/01/29 10:20:53 runner 1 connected
2026/01/29 10:20:53 runner 5 connected
2026/01/29 10:20:53 runner 0 connected
2026/01/29 10:20:53 runner 2 connected
2026/01/29 10:20:54 runner 6 connected
2026/01/29 10:20:54 runner 1 connected
2026/01/29 10:20:59 initializing coverage information...
2026/01/29 10:20:59 executor cover filter: 0 PCs
2026/01/29 10:21:03 discovered 7661 source files, 337553 symbols
2026/01/29 10:21:03 coverage filter: ^__kmalloc_cache_node_noprof$: []
2026/01/29 10:21:03 coverage filter: ^__kmalloc_cache_noprof$: []
2026/01/29 10:21:03 coverage filter: ^__kmalloc_node_noprof$: []
2026/01/29 10:21:03 coverage filter: ^__kmalloc_node_track_caller_noprof$: []
2026/01/29 10:21:03 coverage filter: ^__kmalloc_noprof$: []
2026/01/29 10:21:03 coverage filter: ^__kmem_cache_alloc_bulk$: []
2026/01/29 10:21:03 coverage filter: ^__kmem_cache_do_shrink$: []
2026/01/29 10:21:03 coverage filter: ^__kmem_cache_shutdown$: []
2026/01/29 10:21:03 coverage filter: ^__kvmalloc_node_noprof$: []
2026/01/29 10:21:03 coverage filter: ^__pcs_replace_empty_main$: []
2026/01/29 10:21:03 coverage filter: ^__pcs_replace_full_main$: []
2026/01/29 10:21:03 coverage filter: ^__refill_objects_node$: []
2026/01/29 10:21:03 coverage filter: ^__slab_free$: []
2026/01/29 10:21:03 coverage filter: ^do_kmem_cache_create$: []
2026/01/29 10:21:03 coverage filter: ^flush_cpu_sheaves$: []
2026/01/29 10:21:03 coverage filter: ^free_deferred_objects$: []
2026/01/29 10:21:03 coverage filter: ^kfree$: []
2026/01/29 10:21:03 coverage filter: ^kfree_nolock$: []
2026/01/29 10:21:03 coverage filter: ^kmalloc_nolock_noprof$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_alloc_bulk_noprof$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_alloc_from_sheaf_noprof$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_alloc_lru_noprof$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_alloc_node_noprof$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_alloc_noprof$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_free$: []
2026/01/29 10:21:03 coverage filter: ^kmem_cache_free_bulk$: []
2026/01/29 10:21:03 coverage filter: ^ksize$: []
2026/01/29 10:21:03 coverage filter: ^kvfree_rcu_cb$: []
2026/01/29 10:21:03 coverage filter: ^memcg_alloc_abort_single$: []
2026/01/29 10:21:03 coverage filter: ^refill_objects$: []
2026/01/29 10:21:03 coverage filter: ^sheaf_flush_main$: []
2026/01/29 10:21:03 coverage filter: ^slab_out_of_memory$: []
2026/01/29 10:21:03 coverage filter: ^validate_slab_cache$: []
2026/01/29 10:21:03 coverage filter: mm/slub.c: []
2026/01/29 10:21:03 area "symbols": 0 PCs in the cover filter
2026/01/29 10:21:03 area "files": 0 PCs in the cover filter
2026/01/29 10:21:03 area "": 0 PCs in the cover filter
2026/01/29 10:21:03 executor cover filter: 0 PCs
2026/01/29 10:21:04 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to open /dev/xen/evtchn: no such file or directory
openat$zygote : failed to open /dev/socket/zygote: no such file or directory
pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device
read$smackfs_access : smack is not enabled
read$smackfs_cipsonum : smack is not enabled
read$smackfs_logging : smack is not enabled
read$smackfs_ptrace : smack is not enabled
set_thread_area : syscall set_thread_area is not present
setxattr$security_selinux : selinux is not enabled
setxattr$security_smack_transmute : smack is not enabled
setxattr$smack_xattr_label : smack is not enabled
socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol
socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported
socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported
socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device
syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory
syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported
syz_kvm_setup_cpu$ppc64 : unsupported arch
syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs
syz_mount_image$ntfs : /proc/filesystems does not contain ntfs
syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs
syz_mount_image$sysv : /proc/filesystems does not contain sysv
syz_mount_image$v7 : /proc/filesystems does not contain v7
syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory
syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory
syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory
syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory
syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory
syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device
uselib : syscall uselib is not present
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
bpf$BPF_TASK_FD_QUERY : fd_perf_base [bpf$BPF_RAW_TRACEPOINT_OPEN bpf$BPF_RAW_TRACEPOINT_OPEN_UNNAMED perf_event_open perf_event_open$cgroup]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2 : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE : fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET : fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt
[ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP] ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost 
openat$mali] ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP] ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET: fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : 
fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali] ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali] ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm] ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm] ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm] ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm 
[openat$kvm] ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm] ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm] ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm] ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm] ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQFD : fd_kvmvm 
[ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE] ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_MP_STATE : fd_kvmcpu 
[ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM] 
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TDX_CAPABILITIES : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_TDX_FINALIZE_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_TDX_GET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TDX_INIT_MEM_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_TDX_INIT_VCPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TDX_INIT_VM : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM] ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm] ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86] 
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). . process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3836/8071
2026/01/29 10:21:04 base: machine check complete
2026/01/29 10:21:06 machine check: disabled the following syscalls:
fsetxattr$security_selinux : selinux is not enabled
fsetxattr$security_smack_transmute : smack is not enabled
fsetxattr$smack_xattr_label : smack is not enabled
get_thread_area : syscall get_thread_area is not present
lookup_dcookie : syscall lookup_dcookie is not present
lsetxattr$security_selinux : selinux is not enabled
lsetxattr$security_smack_transmute : smack is not enabled
lsetxattr$smack_xattr_label : smack is not enabled
mount$esdfs : /proc/filesystems does not contain esdfs
mount$incfs : /proc/filesystems does not contain incremental-fs
openat$acpi_thermal_rel : failed to open /dev/acpi_thermal_rel: no such file or directory
openat$ashmem : failed to open /dev/ashmem: no such file or directory
openat$bifrost : failed to open /dev/bifrost: no such file or directory
openat$binder : failed to open /dev/binder: no such file or directory
openat$camx : failed to open /dev/v4l/by-path/platform-soc@0:qcom_cam-req-mgr-video-index0: no such file or directory
openat$capi20 : failed to open /dev/capi20: no such file or directory
openat$cdrom1 : failed to open /dev/cdrom1: no such file or directory
openat$damon_attrs : failed to open /sys/kernel/debug/damon/attrs: no such file or directory
openat$damon_init_regions : failed to open /sys/kernel/debug/damon/init_regions: no such file or directory
openat$damon_kdamond_pid : failed to open /sys/kernel/debug/damon/kdamond_pid: no such file or directory
openat$damon_mk_contexts : failed to open /sys/kernel/debug/damon/mk_contexts: no such file or directory
openat$damon_monitor_on : failed to open /sys/kernel/debug/damon/monitor_on: no such file or directory
openat$damon_rm_contexts : failed to open /sys/kernel/debug/damon/rm_contexts: no such file or directory
openat$damon_schemes : failed to open /sys/kernel/debug/damon/schemes: no such file or directory
openat$damon_target_ids : failed to open /sys/kernel/debug/damon/target_ids: no such file or directory
openat$hwbinder : failed to open /dev/hwbinder: no such file or directory
openat$i915 : failed to open /dev/i915: no such file or directory
openat$img_rogue : failed to open /dev/img-rogue: no such file or directory
openat$irnet : failed to open /dev/irnet: no such file or directory
openat$keychord : failed to open /dev/keychord: no such file or directory
openat$kvm : failed to open /dev/kvm: no such file or directory
openat$lightnvm : failed to open /dev/lightnvm/control: no such file or directory
openat$mali : failed to open /dev/mali0: no such file or directory
openat$md : failed to open /dev/md0: no such file or directory
openat$msm : failed to open /dev/msm: no such file or directory
openat$ndctl0 : failed to open /dev/ndctl0: no such file or directory
openat$nmem0 : failed to open /dev/nmem0: no such file or directory
openat$pktcdvd : failed to open /dev/pktcdvd/control: no such file or directory
openat$pmem0 : failed to open /dev/pmem0: no such file or directory
openat$proc_capi20 : failed to open /proc/capi/capi20: no such file or directory
openat$proc_capi20ncci : failed to open /proc/capi/capi20ncci: no such file or directory
openat$proc_reclaim : failed to open /proc/self/reclaim: no such file or directory
openat$ptp1 : failed to open /dev/ptp1: no such file or directory
openat$rnullb : failed to open /dev/rnullb0: no such file or directory
openat$selinux_access : failed to open /selinux/access: no such file or directory
openat$selinux_attr : selinux is not enabled
openat$selinux_avc_cache_stats : failed to open /selinux/avc/cache_stats: no such file or directory
openat$selinux_avc_cache_threshold : failed to open /selinux/avc/cache_threshold: no such file or directory
openat$selinux_avc_hash_stats : failed to open /selinux/avc/hash_stats: no such file or directory
openat$selinux_checkreqprot : failed to open /selinux/checkreqprot: no such file or directory
openat$selinux_commit_pending_bools : failed to open /selinux/commit_pending_bools: no such file or directory
openat$selinux_context : failed to open /selinux/context: no such file or directory
openat$selinux_create : failed to open /selinux/create: no such file or directory
openat$selinux_enforce : failed to open /selinux/enforce: no such file or directory
openat$selinux_load : failed to open /selinux/load: no such file or directory
openat$selinux_member : failed to open /selinux/member: no such file or directory
openat$selinux_mls : failed to open /selinux/mls: no such file or directory
openat$selinux_policy : failed to open /selinux/policy: no such file or directory
openat$selinux_relabel : failed to open /selinux/relabel: no such file or directory
openat$selinux_status : failed to open /selinux/status: no such file or directory
openat$selinux_user : failed to open /selinux/user: no such file or directory
openat$selinux_validatetrans : failed to open /selinux/validatetrans: no such file or directory
openat$sev : failed to open /dev/sev: no such file or directory
openat$sgx_provision : failed to open /dev/sgx_provision: no such file or directory
openat$smack_task_current : smack is not enabled
openat$smack_thread_current : smack is not enabled
openat$smackfs_access : failed to open /sys/fs/smackfs/access: no such file or directory
openat$smackfs_ambient : failed to open /sys/fs/smackfs/ambient: no such file or directory
openat$smackfs_change_rule : failed to open /sys/fs/smackfs/change-rule: no such file or directory
openat$smackfs_cipso : failed to open /sys/fs/smackfs/cipso: no such file or directory
openat$smackfs_cipsonum : failed to open /sys/fs/smackfs/direct: no such file or directory
openat$smackfs_ipv6host : failed to open /sys/fs/smackfs/ipv6host: no such file or directory
openat$smackfs_load : failed to open /sys/fs/smackfs/load: no such file or directory
openat$smackfs_logging : failed to open /sys/fs/smackfs/logging: no such file or directory
openat$smackfs_netlabel : failed to open /sys/fs/smackfs/netlabel: no such file or directory
openat$smackfs_onlycap : failed to open /sys/fs/smackfs/onlycap: no such file or directory
openat$smackfs_ptrace : failed to open /sys/fs/smackfs/ptrace: no such file or directory
openat$smackfs_relabel_self : failed to open /sys/fs/smackfs/relabel-self: no such file or directory
openat$smackfs_revoke_subject : failed to open /sys/fs/smackfs/revoke-subject: no such file or directory
openat$smackfs_syslog : failed to open /sys/fs/smackfs/syslog: no such file or directory
openat$smackfs_unconfined : failed to open /sys/fs/smackfs/unconfined: no such file or directory
openat$tlk_device : failed to open /dev/tlk_device: no such file or directory
openat$trusty : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_avb : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_gatekeeper : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwkey : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_hwrng : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_km_secure : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$trusty_storage : failed to open /dev/trusty-ipc-dev0: no such file or directory
openat$tty : failed to open /dev/tty: no such device or address
openat$uverbs0 : failed to open /dev/infiniband/uverbs0: no such file or directory
openat$vfio : failed to open /dev/vfio/vfio: no such file or directory
openat$vndbinder : failed to open /dev/vndbinder: no such file or directory
openat$vtpm : failed to open /dev/vtpmx: no such file or directory
openat$xenevtchn : failed to
open /dev/xen/evtchn: no such file or directory openat$zygote : failed to open /dev/socket/zygote: no such file or directory pkey_alloc : pkey_alloc(0x0, 0x0) failed: no space left on device read$smackfs_access : smack is not enabled read$smackfs_cipsonum : smack is not enabled read$smackfs_logging : smack is not enabled read$smackfs_ptrace : smack is not enabled set_thread_area : syscall set_thread_area is not present setxattr$security_selinux : selinux is not enabled setxattr$security_smack_transmute : smack is not enabled setxattr$smack_xattr_label : smack is not enabled socket$hf : socket$hf(0x13, 0x2, 0x0) failed: address family not supported by protocol socket$inet6_dccp : socket$inet6_dccp(0xa, 0x6, 0x0) failed: socket type not supported socket$inet_dccp : socket$inet_dccp(0x2, 0x6, 0x0) failed: socket type not supported socket$vsock_dgram : socket$vsock_dgram(0x28, 0x2, 0x0) failed: no such device syz_btf_id_by_name$bpf_lsm : failed to open /sys/kernel/btf/vmlinux: no such file or directory syz_init_net_socket$bt_cmtp : syz_init_net_socket$bt_cmtp(0x1f, 0x3, 0x5) failed: protocol not supported syz_kvm_setup_cpu$ppc64 : unsupported arch syz_mount_image$bcachefs : /proc/filesystems does not contain bcachefs syz_mount_image$ntfs : /proc/filesystems does not contain ntfs syz_mount_image$reiserfs : /proc/filesystems does not contain reiserfs syz_mount_image$sysv : /proc/filesystems does not contain sysv syz_mount_image$v7 : /proc/filesystems does not contain v7 syz_open_dev$dricontrol : failed to open /dev/dri/controlD#: no such file or directory syz_open_dev$drirender : failed to open /dev/dri/renderD#: no such file or directory syz_open_dev$floppy : failed to open /dev/fd#: no such file or directory syz_open_dev$ircomm : failed to open /dev/ircomm#: no such file or directory syz_open_dev$sndhw : failed to open /dev/snd/hwC#D#: no such file or directory syz_pkey_set : pkey_alloc(0x0, 0x0) failed: no space left on device uselib : syscall uselib is not present 
write$selinux_access : selinux is not enabled
write$selinux_attr : selinux is not enabled
write$selinux_context : selinux is not enabled
write$selinux_create : selinux is not enabled
write$selinux_load : selinux is not enabled
write$selinux_user : selinux is not enabled
write$selinux_validatetrans : selinux is not enabled
write$smack_current : smack is not enabled
write$smackfs_access : smack is not enabled
write$smackfs_change_rule : smack is not enabled
write$smackfs_cipso : smack is not enabled
write$smackfs_cipsonum : smack is not enabled
write$smackfs_ipv6host : smack is not enabled
write$smackfs_label : smack is not enabled
write$smackfs_labels_list : smack is not enabled
write$smackfs_load : smack is not enabled
write$smackfs_logging : smack is not enabled
write$smackfs_netlabel : smack is not enabled
write$smackfs_ptrace : smack is not enabled
transitively disabled the following syscalls (missing resource [creating syscalls]):
bind$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
bpf$BPF_TASK_FD_QUERY : fd_perf_base [bpf$BPF_RAW_TRACEPOINT_OPEN bpf$BPF_RAW_TRACEPOINT_OPEN_UNNAMED perf_event_open perf_event_open$cgroup]
close$ibv_device : fd_rdma [openat$uverbs0]
connect$hf : sock_hf [socket$hf]
connect$vsock_dgram : sock_vsock_dgram [socket$vsock_dgram]
getsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
getsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
getsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
ioctl$ACPI_THERMAL_GET_ART : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_ART_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_COUNT : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ACPI_THERMAL_GET_TRT_LEN : fd_acpi_thermal_rel [openat$acpi_thermal_rel]
ioctl$ASHMEM_GET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PIN_STATUS : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_GET_SIZE : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_PURGE_ALL_CACHES : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_NAME : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_PROT_MASK : fd_ashmem [openat$ashmem]
ioctl$ASHMEM_SET_SIZE : fd_ashmem [openat$ashmem]
ioctl$CAPI_CLR_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_ERRCODE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_MANUFACTURER : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_PROFILE : fd_capi20 [openat$capi20]
ioctl$CAPI_GET_SERIAL : fd_capi20 [openat$capi20]
ioctl$CAPI_INSTALLED : fd_capi20 [openat$capi20]
ioctl$CAPI_MANUFACTURER_CMD : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_GETUNIT : fd_capi20 [openat$capi20]
ioctl$CAPI_NCCI_OPENCOUNT : fd_capi20 [openat$capi20]
ioctl$CAPI_REGISTER : fd_capi20 [openat$capi20]
ioctl$CAPI_SET_FLAGS : fd_capi20 [openat$capi20]
ioctl$CREATE_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DESTROY_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$DRM_IOCTL_I915_GEM_BUSY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2 : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_EXECBUFFER2_WR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_APERTURE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_GET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MADVISE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_GTT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_MMAP_OFFSET : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PREAD : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_PWRITE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_CACHING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_DOMAIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SET_TILING : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_SW_FINISH : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_THROTTLE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_UNPIN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_USERPTR : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_CREATE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_VM_DESTROY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GEM_WAIT : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GETPARAM : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_PIPE_FROM_CRTC_ID : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_GET_RESET_STATS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_ATTRS : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_OVERLAY_PUT_IMAGE : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_ADD_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_OPEN : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_PERF_REMOVE_CONFIG : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_QUERY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_REG_READ : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_I915_SET_SPRITE_COLORKEY : fd_i915 [openat$i915]
ioctl$DRM_IOCTL_MSM_GEM_CPU_FINI : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_CPU_PREP : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_INFO : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_MADVISE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GEM_SUBMIT : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_GET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SET_PARAM : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_NEW : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_SUBMITQUEUE_QUERY : fd_msm [openat$msm]
ioctl$DRM_IOCTL_MSM_WAIT_FENCE : fd_msm [openat$msm]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPEXEC: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CACHE_CACHEOPQUEUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTACQUIREREMOTECTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_CMM_DEVMEMINTUNEXPORTCTX: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYSPARSECHANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DEVICEMEMHISTORY_DEVICEMEMHISTORYUNMAPVRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMEXPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_DMABUF_PHYSMEMIMPORTSPARSEDMABUF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBCONTROL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_HTBUFFER_HTBLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_CHANGESPARSEMEM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMFLUSHDEVSLCRANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMGETFAULTADDRESS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTCTXDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPCREATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTHEAPDESTROY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTPINVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTREGISTERPFNOTIFYKM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPAGES: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNMAPPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPIN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNPININVALIDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINTUNRESERVERANGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMINVALIDATEFBSCTABLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_DEVMEMISVDEVADDRVALID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_GETMAXDEVMEMSIZE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCONFIGNAME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPCOUNT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_HEAPCFGHEAPDETAILS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDLOCKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PHYSMEMNEWRAMBACKEDPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMREXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRGETUID: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRLOCALIMPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNEXPORTPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNMAKELOCALIMPORTHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PMRUNREFUNLOCKPMR: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_MM_PVRSRVUPDATEOOMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLACQUIREDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCLOSESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLCOMMITSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLDISCOVERSTREAMS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLOPENSTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRELEASEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLRESERVESTREAM: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_PVRTL_TLWRITEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXCLEARBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXDISABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXENABLEBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXOVERALLOCATEBPREGISTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXBREAKPOINT_RGXSETBREAKPOINT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXCREATECOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXDESTROYCOMPUTECONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXFLUSHCOMPUTEDATA: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXGETLASTCOMPUTECONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXKICKCDM2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXNOTIFYCOMPUTEWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXCMP_RGXSETCOMPUTECONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXCURRENTTIME: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGDUMPFREELISTPAGELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGPHRCONFIGURE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETFWLOG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETHCSDEADLINE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSIDPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXFWDBG_RGXFWDEBUGSETOSNEWONLINESTATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGCUSTOMCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCONFIGENABLEHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERF: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXCTRLHWPERFCOUNTERS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXHWPERF_RGXGETHWPERFBVNCFEATUREFLAGS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXCREATEKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXDESTROYKICKSYNCCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXKICKSYNC2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXKICKSYNC_RGXSETKICKSYNCCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXADDREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXCLEARREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXDISABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXENABLEREGCONFIG: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXREGCONFIG_RGXSETREGCONFIGTYPE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXSIGNALS_RGXNOTIFYSIGNALUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATERENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXCREATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYFREELIST: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYHWRTDATASET: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYRENDERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXDESTROYZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXGETLASTRENDERCONTEXTRESETREASON: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXKICKTA3D2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXRENDERCONTEXTSTALLED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXSETRENDERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTA3D_RGXUNPOPULATEZSBUFFER: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMGETSHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMNOTIFYWRITEOFFSETUPDATE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMRELEASESHAREDMEMORY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ2_RGXTDMSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXCREATETRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXDESTROYTRANSFERCONTEXT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPRIORITY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSETTRANSFERCONTEXTPROPERTY: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_RGXTQ_RGXSUBMITTRANSFER2: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ACQUIREINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_ALIGNMENTCHECK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_CONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DISCONNECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_DUMPDEBUGINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTCLOSE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTOPEN: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAIT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_EVENTOBJECTWAITTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_FINDPROCESSMEMSTATS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVCLOCKSPEED: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETDEVICESTATUS: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_GETMULTICOREINFO: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_HWOPTIMEOUT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEGLOBALEVENTOBJECT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SRVCORE_RELEASEINFOPAGE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDADD: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNCTRACKING_SYNCRECORDREMOVEBYHANDLE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_ALLOCSYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_FREESYNCPRIMITIVEBLOCK: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCALLOCEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCCHECKPOINTSIGNALLEDPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCFREEEVENT: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPCBP: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPPOL: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMPDUMPVALUE: fd_rogue [openat$img_rogue]
ioctl$DRM_IOCTL_PVR_SRVKM_CMD_PVRSRV_BRIDGE_SYNC_SYNCPRIMSET: fd_rogue [openat$img_rogue]
ioctl$FLOPPY_FDCLRPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDDEFPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDEJECT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFLUSH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTBEG : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTEND : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDFMTTRK : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETDRVTYP : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETFDCSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDGETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGOFF : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDMSGON : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDPOLLDRVSTAT : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRAWCMD : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDRESET : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETDRVPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETEMSGTRESH : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETMAXERRS : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDSETPRM : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDTWADDLE : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORCLR : fd_floppy [syz_open_dev$floppy]
ioctl$FLOPPY_FDWERRORGET : fd_floppy [syz_open_dev$floppy]
ioctl$KBASE_HWCNT_READER_CLEAR : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DISABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_DUMP : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_ENABLE_EVENT : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_API_VERSION_WITH_FEATURES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_SIZE : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_GET_HWVER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_PUT_BUFFER_WITH_CYCLES: fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_HWCNT_READER_SET_INTERVAL : fd_hwcnt [ioctl$KBASE_IOCTL_HWCNT_READER_SETUP]
ioctl$KBASE_IOCTL_BUFFER_LIVENESS_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CONTEXT_PRIORITY_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_CPU_QUEUE_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_EVENT_SIGNAL : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_GET_GLB_IFACE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_BIND : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_CREATE_1_6 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_GROUP_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_KICK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_REGISTER_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_QUEUE_TERMINATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_INIT_1_13 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_CS_TILER_HEAP_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_DISJOINT_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_FENCE_VALIDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CONTEXT_ID : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_CPU_GPU_TIMEINFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_DDK_VERSION : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_GET_GPUPROPS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_CLEAR : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_DUMP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_ENABLE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_READER_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_HWCNT_SET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_JOB_SUBMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_DELETE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KCPU_QUEUE_ENQUEUE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_CMD : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_ENUM_INFO : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_GET_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_PUT_SAMPLE : fd_kinstr [ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP]
ioctl$KBASE_IOCTL_KINSTR_PRFCNT_SETUP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALIAS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_ALLOC_EX : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_COMMIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_EXEC_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_CPU_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FIND_GPU_START_AND_OFFSET : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FLAGS_CHANGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_FREE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_IMPORT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_10_2 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_JIT_INIT_11_5 : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_PROFILE_ADD : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_QUERY : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_MEM_SYNC : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_POST_TERM : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_READ_USER_PAGE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_FLAGS : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SET_LIMITED_CORE_COUNT : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_SOFT_EVENT_UPDATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_MAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STICKY_RESOURCE_UNMAP : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_STREAM_CREATE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_ACQUIRE : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_TLSTREAM_FLUSH : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK : fd_bifrost [openat$bifrost openat$mali]
ioctl$KBASE_IOCTL_VERSION_CHECK_RESERVED : fd_bifrost [openat$bifrost openat$mali]
ioctl$KVM_ASSIGN_SET_MSIX_ENTRY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_ASSIGN_SET_MSIX_NR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DIRTY_LOG_RING_ACQ_REL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_DISABLE_QUIRKS2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_ENFORCE_PV_FEATURE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_EXCEPTION_PAYLOAD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_HYPERCALL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_EXIT_ON_EMULATION_FAILURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HALT_POLL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_DIRECT_TLBFLUSH : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENFORCE_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_ENLIGHTENED_VMCS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SEND_IPI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_SYNIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_SYNIC2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_HYPERV_TLBFLUSH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_HYPERV_VP_INDEX : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MAX_VCPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MEMORY_FAULT_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_MSR_PLATFORM_INFO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PMU_CAPABILITY : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_PTP_KVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SGX_ATTRIBUTE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SPLIT_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_STEAL_TIME : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_SYNC_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_CAP_VM_COPY_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_DISABLE_NX_HUGE_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_VM_TYPES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X2APIC_API : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_APIC_BUS_CYCLES_NS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_BUS_LOCK_EXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_DISABLE_EXITS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_GUEST_MODE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_NOTIFY_VMEXIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_X86_USER_SPACE_MSR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CAP_XEN_HVM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CHECK_EXTENSION : fd_kvm [openat$kvm]
ioctl$KVM_CHECK_EXTENSION_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CLEAR_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_DEVICE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_GUEST_MEMFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VCPU : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_CREATE_VM : fd_kvm [openat$kvm]
ioctl$KVM_DIRTY_TLB : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_API_VERSION : fd_kvm [openat$kvm]
ioctl$KVM_GET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_GET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_DIRTY_LOG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_EMULATED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_MSRS_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_FEATURE_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_MSR_INDEX_LIST : fd_kvm [openat$kvm]
ioctl$KVM_GET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_REG_LIST : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_STATS_FD_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_SUPPORTED_CPUID : fd_kvm [openat$kvm]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_SUPPORTED_HV_CPUID_sys : fd_kvm [openat$kvm]
ioctl$KVM_GET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_GET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_VCPU_MMAP_SIZE : fd_kvm [openat$kvm]
ioctl$KVM_GET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_GET_XSAVE2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_HAS_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_HAS_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_HYPERV_EVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_INTERRUPT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_IOEVENTFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQFD : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_IRQ_LINE_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_KVMCLOCK_CTRL : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_MEMORY_ENCRYPT_REG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_MEMORY_ENCRYPT_UNREG_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_NMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_PPC_ALLOCATE_HTAB : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_PRE_FAULT_MEMORY : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_REGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_REINJECT_CONTROL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RESET_DIRTY_RINGS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_RUN : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_S390_VCPU_FAULT : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_BOOT_CPU_ID : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CLOCK : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_CPUID2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEBUGREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR : fd_kvmdev [ioctl$KVM_CREATE_DEVICE]
ioctl$KVM_SET_DEVICE_ATTR_vcpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_DEVICE_ATTR_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_FPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_GSI_ROUTING : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_GUEST_DEBUG_x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_IDENTITY_MAP_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_IRQCHIP : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_LAPIC : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MEMORY_ATTRIBUTES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_MP_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_MSRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NESTED_STATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_NR_MMU_PAGES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_ONE_REG : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_PIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_PIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_REGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SIGNAL_MASK : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_SREGS2 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_cpu : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_TSC_KHZ_vm : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_TSS_ADDR : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_USER_MEMORY_REGION2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SET_VAPIC_ADDR : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_VCPU_EVENTS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XCRS : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SET_XSAVE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_SEV_CERT_EXPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_DECRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_DBG_ENCRYPT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_ES_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GET_ATTESTATION_REPORT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_GUEST_STATUS : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_INIT2 : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_MEASURE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_SECRET : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_LAUNCH_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_RECEIVE_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_CANCEL : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_DATA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SEND_UPDATE_VMSA : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_FINISH : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_START : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SEV_SNP_LAUNCH_UPDATE : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SIGNAL_MSI : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_SMI : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TDX_CAPABILITIES : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_TDX_FINALIZE_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_TDX_GET_CPUID : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TDX_INIT_MEM_REGION : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_TDX_INIT_VCPU : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TDX_INIT_VM : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_TPR_ACCESS_REPORTING : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_TRANSLATE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_UNREGISTER_COALESCED_MMIO : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_X86_GET_MCE_CAP_SUPPORTED : fd_kvm [openat$kvm]
ioctl$KVM_X86_SETUP_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MCE : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
ioctl$KVM_X86_SET_MSR_FILTER : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$KVM_XEN_HVM_CONFIG : fd_kvmvm [ioctl$KVM_CREATE_VM]
ioctl$READ_COUNTERS : fd_rdma [openat$uverbs0]
ioctl$SNDRV_FIREWIRE_IOCTL_GET_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_LOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_TASCAM_STATE : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_FIREWIRE_IOCTL_UNLOCK : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_LOAD : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_DSP_STATUS : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_INFO : fd_snd_hw [syz_open_dev$sndhw]
ioctl$SNDRV_HWDEP_IOCTL_PVERSION : fd_snd_hw [syz_open_dev$sndhw]
ioctl$TE_IOCTL_CLOSE_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_LAUNCH_OPERATION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_OPEN_CLIENT_SESSION : fd_tlk [openat$tlk_device]
ioctl$TE_IOCTL_SS_CMD : fd_tlk [openat$tlk_device]
ioctl$TIPC_IOC_CONNECT : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
ioctl$TIPC_IOC_CONNECT_avb : fd_trusty_avb [openat$trusty_avb]
ioctl$TIPC_IOC_CONNECT_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
ioctl$TIPC_IOC_CONNECT_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
ioctl$TIPC_IOC_CONNECT_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
ioctl$TIPC_IOC_CONNECT_keymaster_secure : fd_trusty_km_secure [openat$trusty_km_secure]
ioctl$TIPC_IOC_CONNECT_km : fd_trusty_km [openat$trusty_km]
ioctl$TIPC_IOC_CONNECT_storage : fd_trusty_storage [openat$trusty_storage]
ioctl$VFIO_CHECK_EXTENSION : fd_vfio [openat$vfio]
ioctl$VFIO_GET_API_VERSION : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_GET_INFO : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_MAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_IOMMU_UNMAP_DMA : fd_vfio [openat$vfio]
ioctl$VFIO_SET_IOMMU : fd_vfio [openat$vfio]
ioctl$VTPM_PROXY_IOC_NEW_DEV : fd_vtpm [openat$vtpm]
ioctl$sock_bt_cmtp_CMTPCONNADD : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPCONNDEL : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNINFO : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
ioctl$sock_bt_cmtp_CMTPGETCONNLIST : sock_bt_cmtp [syz_init_net_socket$bt_cmtp]
mmap$DRM_I915 : fd_i915 [openat$i915]
mmap$DRM_MSM : fd_msm [openat$msm]
mmap$KVM_VCPU : vcpu_mmap_size [ioctl$KVM_GET_VCPU_MMAP_SIZE]
mmap$bifrost : fd_bifrost [openat$bifrost openat$mali]
mmap$perf : fd_perf [perf_event_open perf_event_open$cgroup]
pkey_free : pkey [pkey_alloc]
pkey_mprotect : pkey [pkey_alloc]
read$sndhw : fd_snd_hw [syz_open_dev$sndhw]
read$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
recvmsg$hf : sock_hf [socket$hf]
sendmsg$hf : sock_hf [socket$hf]
setsockopt$inet6_dccp_buf : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet6_dccp_int : sock_dccp6 [socket$inet6_dccp]
setsockopt$inet_dccp_buf : sock_dccp [socket$inet_dccp]
setsockopt$inet_dccp_int : sock_dccp [socket$inet_dccp]
syz_kvm_add_vcpu$x86 : kvm_syz_vm$x86 [syz_kvm_setup_syzos_vm$x86]
syz_kvm_assert_syzos_kvm_exit$x86 : kvm_run_ptr [mmap$KVM_VCPU]
syz_kvm_assert_syzos_uexit$x86 : fd_kvmcpu [ioctl$KVM_CREATE_VCPU syz_kvm_add_vcpu$x86]
syz_kvm_setup_cpu$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_kvm_setup_syzos_vm$x86 : fd_kvmvm [ioctl$KVM_CREATE_VM]
syz_memcpy_off$KVM_EXIT_HYPERCALL : kvm_run_ptr [mmap$KVM_VCPU]
syz_memcpy_off$KVM_EXIT_MMIO : kvm_run_ptr [mmap$KVM_VCPU]
write$ALLOC_MW : fd_rdma [openat$uverbs0]
write$ALLOC_PD : fd_rdma [openat$uverbs0]
write$ATTACH_MCAST : fd_rdma [openat$uverbs0]
write$CLOSE_XRCD : fd_rdma [openat$uverbs0]
write$CREATE_AH : fd_rdma [openat$uverbs0]
write$CREATE_COMP_CHANNEL : fd_rdma [openat$uverbs0]
write$CREATE_CQ : fd_rdma [openat$uverbs0]
write$CREATE_CQ_EX : fd_rdma [openat$uverbs0]
write$CREATE_FLOW : fd_rdma [openat$uverbs0]
write$CREATE_QP : fd_rdma [openat$uverbs0]
write$CREATE_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$CREATE_SRQ : fd_rdma [openat$uverbs0]
write$CREATE_WQ : fd_rdma [openat$uverbs0]
write$DEALLOC_MW : fd_rdma [openat$uverbs0]
write$DEALLOC_PD : fd_rdma [openat$uverbs0]
write$DEREG_MR : fd_rdma [openat$uverbs0]
write$DESTROY_AH : fd_rdma [openat$uverbs0]
write$DESTROY_CQ : fd_rdma [openat$uverbs0]
write$DESTROY_FLOW : fd_rdma [openat$uverbs0]
write$DESTROY_QP : fd_rdma [openat$uverbs0]
write$DESTROY_RWQ_IND_TBL : fd_rdma [openat$uverbs0]
write$DESTROY_SRQ : fd_rdma [openat$uverbs0]
write$DESTROY_WQ : fd_rdma [openat$uverbs0]
write$DETACH_MCAST : fd_rdma [openat$uverbs0]
write$MLX5_ALLOC_PD : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_CQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_DV_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_QP : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_SRQ : fd_rdma [openat$uverbs0]
write$MLX5_CREATE_WQ : fd_rdma [openat$uverbs0]
write$MLX5_GET_CONTEXT : fd_rdma [openat$uverbs0]
write$MLX5_MODIFY_WQ : fd_rdma [openat$uverbs0]
write$MODIFY_QP : fd_rdma [openat$uverbs0]
write$MODIFY_SRQ : fd_rdma [openat$uverbs0]
write$OPEN_XRCD : fd_rdma [openat$uverbs0]
write$POLL_CQ : fd_rdma [openat$uverbs0]
write$POST_RECV : fd_rdma [openat$uverbs0]
write$POST_SEND : fd_rdma [openat$uverbs0]
write$POST_SRQ_RECV : fd_rdma [openat$uverbs0]
write$QUERY_DEVICE_EX : fd_rdma [openat$uverbs0]
write$QUERY_PORT : fd_rdma [openat$uverbs0]
write$QUERY_QP : fd_rdma [openat$uverbs0]
write$QUERY_SRQ : fd_rdma [openat$uverbs0]
write$REG_MR : fd_rdma [openat$uverbs0]
write$REQ_NOTIFY_CQ : fd_rdma [openat$uverbs0]
write$REREG_MR : fd_rdma [openat$uverbs0]
write$RESIZE_CQ : fd_rdma [openat$uverbs0]
write$capi20 : fd_capi20 [openat$capi20]
write$capi20_data : fd_capi20 [openat$capi20]
write$damon_attrs : fd_damon_attrs [openat$damon_attrs]
write$damon_contexts : fd_damon_contexts [openat$damon_mk_contexts openat$damon_rm_contexts]
write$damon_init_regions : fd_damon_init_regions [openat$damon_init_regions]
write$damon_monitor_on : fd_damon_monitor_on [openat$damon_monitor_on]
write$damon_schemes : fd_damon_schemes [openat$damon_schemes]
write$damon_target_ids : fd_damon_target_ids [openat$damon_target_ids]
write$proc_reclaim : fd_proc_reclaim [openat$proc_reclaim]
write$sndhw : fd_snd_hw [syz_open_dev$sndhw]
write$sndhw_fireworks : fd_snd_hw [syz_open_dev$sndhw]
write$trusty : fd_trusty [openat$trusty openat$trusty_avb openat$trusty_gatekeeper ...]
write$trusty_avb : fd_trusty_avb [openat$trusty_avb]
write$trusty_gatekeeper : fd_trusty_gatekeeper [openat$trusty_gatekeeper]
write$trusty_hwkey : fd_trusty_hwkey [openat$trusty_hwkey]
write$trusty_hwrng : fd_trusty_hwrng [openat$trusty_hwrng]
write$trusty_km : fd_trusty_km [openat$trusty_km]
write$trusty_km_secure : fd_trusty_km_secure [openat$trusty_km_secure]
write$trusty_storage : fd_trusty_storage [openat$trusty_storage]
BinFmtMisc : enabled
Comparisons : enabled
Coverage : enabled
DelayKcovMmap : enabled
DevlinkPCI : PCI device 0000:00:10.0 is not available
ExtraCoverage : enabled
Fault : enabled
KCSAN : write(/sys/kernel/debug/kcsan, on) failed
KcovResetIoctl : kernel does not support ioctl(KCOV_RESET_TRACE)
LRWPANEmulation : enabled
Leak : failed to write(kmemleak, "scan=off")
NetDevices : enabled
NetInjection : enabled
NicVF : PCI device 0000:00:11.0 is not available
SandboxAndroid : setfilecon: setxattr failed. (errno 1: Operation not permitted). process exited with status 67.
SandboxNamespace : enabled
SandboxNone : enabled
SandboxSetuid : enabled
Swap : enabled
USBEmulation : enabled
VhciInjection : enabled
WifiEmulation : enabled
syscalls : 3836/8071
2026/01/29 10:21:06 new: machine check complete
2026/01/29 10:21:07 new: adding 83271 seeds
2026/01/29 10:21:29 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = true]
2026/01/29 10:21:29 scheduled a reproduction of 'kernel BUG in hpage_collapse_scan_file'
2026/01/29 10:21:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = true]
2026/01/29 10:21:39 scheduled a reproduction of 'kernel BUG in hpage_collapse_scan_file'
2026/01/29 10:21:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = true]
2026/01/29 10:21:50 scheduled a reproduction of 'kernel BUG in hpage_collapse_scan_file'
2026/01/29 10:22:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = true]
2026/01/29 10:22:01 scheduled a reproduction of 'kernel BUG in hpage_collapse_scan_file'
2026/01/29 10:22:26 runner 3 connected
2026/01/29 10:22:36 runner 6 connected
2026/01/29 10:22:41 runner 8 connected
2026/01/29 10:22:42 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:22:50 runner 0 connected
2026/01/29 10:23:06 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:23:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:23:38 runner 1 connected
2026/01/29 10:23:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:23:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:23:56 runner 7 connected
2026/01/29 10:24:14 runner 0 connected
2026/01/29 10:24:30 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:24:40 runner 3 connected
2026/01/29 10:24:41 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:24:50 runner 1 connected
2026/01/29 10:24:56 STAT { "buffer too small": 0, "candidate triage jobs": 47, "candidates": 79406, "comps overflows": 0, "corpus": 3803, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 2146, "coverage": 151630, "distributor delayed": 5182, "distributor undelayed": 5167, "distributor violated": 16, "exec candidate": 3865, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 7828, "exec total [new]": 17074, "exec triage": 12026, "executor restarts [base]": 59, "executor restarts [new]": 113, "fault jobs": 0, "fuzzer jobs": 47, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 153607, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 3865, "no exec duration": 42792000000, "no exec requests": 367, "pending": 4, "prog exec time": 250, "reproducing": 0, "rpc recv": 1312460060, "rpc sent": 90790248, "signal": 149577, "smash jobs": 0, "triage jobs": 0, "vm output": 2116072, "vm restarts [base]": 4, "vm restarts [new]": 17 }
2026/01/29 10:24:57 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:24:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:25:07 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:25:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:25:18 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:25:24 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:25:28 runner 7 connected
2026/01/29 10:25:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:25:32 runner 5 connected
2026/01/29 10:25:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:25:49 runner 0 connected
2026/01/29 10:25:55 runner 2 connected
2026/01/29 10:25:57 runner 8 connected
2026/01/29 10:25:59 runner 4 connected
2026/01/29 10:26:00 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:26:06 runner 6 connected
2026/01/29 10:26:11 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:26:14 runner 2 connected
2026/01/29 10:26:18 runner 3 connected
2026/01/29 10:26:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:26:29 runner 1 connected
2026/01/29 10:26:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:26:40 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:26:51 runner 5 connected
2026/01/29 10:26:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:26:55 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:27:00 runner 7 connected
2026/01/29 10:27:07 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:27:21 runner 2 connected
2026/01/29 10:27:32 runner 8 connected
2026/01/29 10:27:37 runner 4 connected
2026/01/29 10:27:45 runner 1 connected
2026/01/29 10:27:50 runner 0 connected
2026/01/29 10:27:57 runner 0 connected
2026/01/29 10:28:02 patched crashed: KASAN: slab-use-after-free Read in jfs_lazycommit [need repro = true]
2026/01/29 10:28:02 scheduled a reproduction of 'KASAN: slab-use-after-free Read in jfs_lazycommit'
2026/01/29 10:28:58 runner 4 connected
2026/01/29 10:29:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:29:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:29:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:29:25 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:29:38 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:29:56 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 75435, "comps overflows": 0, "corpus": 7766, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 4233, "coverage": 189370, "distributor delayed": 11252, "distributor undelayed": 11252, "distributor violated": 81, "exec candidate": 7836, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 17402, "exec total [new]": 33939, "exec triage": 24190, "executor restarts [base]": 68, "executor restarts [new]": 190, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 190656, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 7836, "no exec duration": 42894000000, "no exec requests": 371, "pending": 5, "prog exec time": 357, "reproducing": 0, "rpc recv": 2591991060, "rpc sent": 202661856, "signal": 186834, "smash jobs": 0, "triage jobs": 0, "vm output": 4831495, "vm restarts [base]": 7, "vm restarts [new]": 33 }
2026/01/29 10:30:11 runner 6 connected
2026/01/29 10:30:11 runner 8 connected
2026/01/29 10:30:15 runner 5 connected
2026/01/29 10:30:20 runner 1 connected
2026/01/29 10:30:37 runner 2 connected
2026/01/29 10:31:02 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2026/01/29 10:31:02 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2026/01/29 10:31:13 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = true]
2026/01/29 10:31:13 scheduled a reproduction of 'possible deadlock in ocfs2_reserve_suballoc_bits'
2026/01/29 10:31:13 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2026/01/29 10:31:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = true]
2026/01/29 10:31:34 scheduled a reproduction of 'possible deadlock in ocfs2_init_acl'
2026/01/29 10:31:44 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:31:47 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:31:55 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:31:58 runner 2 connected
2026/01/29 10:31:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:32:02 runner 4 connected
2026/01/29 10:32:06 patched crashed: kernel BUG in jfs_evict_inode [need repro = true]
2026/01/29 10:32:06 scheduled a reproduction of 'kernel BUG in jfs_evict_inode'
2026/01/29 10:32:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:32:10 runner 0 connected
2026/01/29 10:32:18 patched crashed: kernel BUG in jfs_evict_inode [need repro = true]
2026/01/29 10:32:18 scheduled a reproduction of 'kernel BUG in jfs_evict_inode'
2026/01/29 10:32:24 runner 1 connected
2026/01/29 10:32:33 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:32:34 runner 3 connected
2026/01/29 10:32:37 runner 8 connected
2026/01/29 10:32:38 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:32:44 runner 7 connected
2026/01/29 10:32:44 base crash: kernel BUG in jfs_evict_inode
2026/01/29 10:32:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:32:48 runner 0 connected
2026/01/29 10:32:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:32:57 runner 6 connected
2026/01/29 10:33:00 runner 5 connected
2026/01/29 10:33:02 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2026/01/29 10:33:03 base crash: kernel BUG in jfs_evict_inode
2026/01/29 10:33:08 runner 2 connected
2026/01/29 10:33:22 runner 4 connected
2026/01/29 10:33:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:33:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:33:29 runner 0 connected
2026/01/29 10:33:33 runner 2 connected
2026/01/29 10:33:34 runner 1 connected
2026/01/29 10:33:38 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:33:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:33:45 runner 8 connected
2026/01/29 10:33:52 runner 3 connected
2026/01/29 10:33:53 runner 1 connected
2026/01/29 10:34:14 runner 7 connected
2026/01/29 10:34:19 runner 0 connected
2026/01/29 10:34:28 runner 6 connected
2026/01/29 10:34:35 runner 2 connected
2026/01/29 10:34:56 STAT { "buffer too small": 0, "candidate triage jobs": 53, "candidates": 71602, "comps overflows": 0, "corpus": 11547, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 6058, "coverage": 212854, "distributor delayed": 16812, "distributor undelayed": 16812, "distributor violated": 212, "exec candidate": 11669, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 0, "exec seeds": 0, "exec smash": 0, "exec total [base]": 24043, "exec total [new]": 50592, "exec triage": 35811, "executor restarts [base]": 96, "executor restarts [new]": 286, "fault jobs": 0, "fuzzer jobs": 53, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 214474, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 11669, "no exec duration": 43268000000, "no exec requests": 376, "pending": 10, "prog exec time": 164, "reproducing": 0, "rpc recv": 4028329252, "rpc sent": 322109408, "signal": 210099, "smash jobs": 0, "triage jobs": 0, "vm output": 7697243, "vm restarts [base]": 12, "vm restarts [new]": 55 }
2026/01/29 10:35:15 base crash: possible deadlock in ext4_destroy_inline_data
2026/01/29 10:35:20 base crash: kernel BUG in jfs_evict_inode
2026/01/29 10:35:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:35:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:36:12 runner 0 connected
2026/01/29 10:36:16 runner 2 connected
2026/01/29 10:36:40 runner 5 connected
2026/01/29 10:36:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:36:50 runner 6 connected
2026/01/29 10:36:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:02 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:08 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:32 runner 3 connected
2026/01/29 10:37:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:37:43 runner 8 connected
2026/01/29 10:37:58 runner 4 connected
2026/01/29 10:37:59 runner 1 connected
2026/01/29 10:38:03 runner 0 connected
2026/01/29 10:38:10 runner 6 connected
2026/01/29 10:38:13 runner 5 connected
2026/01/29 10:38:21 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:38:25 runner 7 connected
2026/01/29 10:38:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:38:38 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:38:56 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:39:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:39:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:39:18 runner 3 connected
2026/01/29 10:39:27 base crash: WARNING in hci_conn_timeout
2026/01/29 10:39:28 runner 8 connected
2026/01/29 10:39:29 runner 0 connected
2026/01/29 10:39:36 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:39:48 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:39:51 runner 5 connected
2026/01/29 10:39:54 runner 2 connected
2026/01/29 10:39:56 STAT { "buffer too small": 0, "candidate triage jobs": 59, "candidates": 67532, "comps overflows": 0, "corpus": 15565, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 8400, "coverage": 230615, "distributor delayed": 22575, "distributor undelayed": 22542, "distributor violated": 213, "exec candidate": 15739, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 2, "exec seeds": 0, "exec smash": 0, "exec total [base]": 31671, "exec total [new]": 69628, "exec triage": 48362, "executor restarts [base]": 118, "executor restarts [new]": 350, "fault jobs": 0, "fuzzer jobs": 59, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 232297, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 15739, "no exec duration": 43320000000, "no exec requests": 378, "pending": 10, "prog exec time":
312, "reproducing": 0, "rpc recv": 5083395960, "rpc sent": 442903264, "signal": 227534, "smash jobs": 0, "triage jobs": 0, "vm output": 10158115, "vm restarts [base]": 16, "vm restarts [new]": 68 } 2026/01/29 10:40:03 runner 1 connected 2026/01/29 10:40:24 runner 1 connected 2026/01/29 10:40:26 runner 6 connected 2026/01/29 10:40:45 runner 0 connected 2026/01/29 10:42:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:42:09 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 10:42:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:42:41 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2026/01/29 10:42:41 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2026/01/29 10:42:57 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2026/01/29 10:42:57 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2026/01/29 10:43:03 runner 6 connected 2026/01/29 10:43:07 runner 0 connected 2026/01/29 10:43:08 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = true] 2026/01/29 10:43:08 scheduled a reproduction of 'possible deadlock in ocfs2_try_remove_refcount_tree' 2026/01/29 10:43:13 runner 1 connected 2026/01/29 10:43:16 base crash: possible deadlock in ocfs2_try_remove_refcount_tree 2026/01/29 10:43:39 runner 4 connected 2026/01/29 10:43:54 runner 3 connected 2026/01/29 10:44:04 runner 0 connected 2026/01/29 10:44:13 runner 1 connected 2026/01/29 10:44:30 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:44:41 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:44:56 STAT { "buffer too small": 0, "candidate triage jobs": 44, "candidates": 63652, "comps overflows": 0, "corpus": 19407, "corpus [files]": 0, "corpus [symbols]": 0, "cover 
overflows": 10219, "coverage": 244211, "distributor delayed": 27673, "distributor undelayed": 27673, "distributor violated": 258, "exec candidate": 19619, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 38182, "exec total [new]": 87359, "exec triage": 60155, "executor restarts [base]": 152, "executor restarts [new]": 431, "fault jobs": 0, "fuzzer jobs": 44, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 7, "hints jobs": 0, "max signal": 245958, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 19619, "no exec duration": 44416000000, "no exec requests": 385, "pending": 13, "prog exec time": 307, "reproducing": 0, "rpc recv": 6078229852, "rpc sent": 556641776, "signal": 241068, "smash jobs": 0, "triage jobs": 0, "vm output": 13317621, "vm restarts [base]": 19, "vm restarts [new]": 76 } 2026/01/29 10:45:29 runner 8 connected 2026/01/29 10:45:39 runner 5 connected 2026/01/29 10:46:32 patched crashed: INFO: task hung in switchdev_deferred_process_work [need repro = true] 2026/01/29 10:46:32 scheduled a reproduction of 'INFO: task hung in switchdev_deferred_process_work' 2026/01/29 10:46:33 base crash: possible deadlock in ocfs2_init_acl 2026/01/29 10:46:34 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2026/01/29 10:47:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:47:28 runner 2 connected 2026/01/29 10:47:30 runner 0 connected 2026/01/29 10:47:32 runner 7 connected 2026/01/29 10:47:33 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:48:20 runner 8 connected 2026/01/29 10:48:29 runner 3 connected 2026/01/29 10:48:38 patched crashed: kernel 
BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:48:49 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:49:02 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2026/01/29 10:49:06 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:49:13 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2026/01/29 10:49:15 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2026/01/29 10:49:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 10:49:28 runner 0 connected 2026/01/29 10:49:38 runner 8 connected 2026/01/29 10:49:56 STAT { "buffer too small": 0, "candidate triage jobs": 165, "candidates": 59582, "comps overflows": 0, "corpus": 23304, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 12199, "coverage": 256923, "distributor delayed": 32963, "distributor undelayed": 32842, "distributor violated": 426, "exec candidate": 23689, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 4, "exec seeds": 0, "exec smash": 0, "exec total [base]": 45382, "exec total [new]": 106608, "exec triage": 72321, "executor restarts [base]": 174, "executor restarts [new]": 505, "fault jobs": 0, "fuzzer jobs": 165, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 259093, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 23689, "no exec duration": 44424000000, "no exec requests": 386, "pending": 14, "prog exec time": 259, "reproducing": 0, "rpc recv": 6947088736, "rpc sent": 663972880, "signal": 253676, "smash jobs": 0, "triage jobs": 0, "vm output": 16190130, "vm restarts [base]": 20, "vm restarts [new]": 84 } 
2026/01/29 10:49:59 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2026/01/29 10:50:00 runner 3 connected
2026/01/29 10:50:01 runner 7 connected
2026/01/29 10:50:03 runner 5 connected
2026/01/29 10:50:05 runner 4 connected
2026/01/29 10:50:08 base crash: INFO: trying to register non-static key in ocfs2_dlm_shutdown
2026/01/29 10:50:08 runner 1 connected
2026/01/29 10:50:10 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2026/01/29 10:50:13 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2026/01/29 10:50:15 base crash: kernel BUG in jfs_evict_inode
2026/01/29 10:50:25 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2026/01/29 10:50:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:50:40 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:50:50 runner 2 connected
2026/01/29 10:51:04 runner 8 connected
2026/01/29 10:51:04 runner 0 connected
2026/01/29 10:51:05 runner 1 connected
2026/01/29 10:51:07 runner 0 connected
2026/01/29 10:51:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:51:18 runner 6 connected
2026/01/29 10:51:22 runner 4 connected
2026/01/29 10:51:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:51:28 runner 5 connected
2026/01/29 10:52:11 runner 3 connected
2026/01/29 10:52:13 runner 7 connected
2026/01/29 10:52:32 base crash: kernel BUG in jfs_evict_inode
2026/01/29 10:52:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:52:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:52:56 patched crashed: kernel BUG in txUnlock [need repro = true]
2026/01/29 10:52:56 scheduled a reproduction of 'kernel BUG in txUnlock'
2026/01/29 10:52:58 patched crashed: kernel BUG in txUnlock [need repro = true]
2026/01/29 10:52:58 scheduled a reproduction of 'kernel BUG in txUnlock'
2026/01/29 10:52:59 patched crashed: kernel BUG in txUnlock [need repro = true]
2026/01/29 10:52:59 scheduled a reproduction of 'kernel BUG in txUnlock'
2026/01/29 10:52:59 patched crashed: kernel BUG in txUnlock [need repro = true]
2026/01/29 10:52:59 scheduled a reproduction of 'kernel BUG in txUnlock'
2026/01/29 10:53:05 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:53:11 patched crashed: kernel BUG in txUnlock [need repro = true]
2026/01/29 10:53:11 scheduled a reproduction of 'kernel BUG in txUnlock'
2026/01/29 10:53:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:53:21 runner 0 connected
2026/01/29 10:53:44 runner 4 connected
2026/01/29 10:53:45 base crash: INFO: task hung in __iterate_supers
2026/01/29 10:53:46 runner 6 connected
2026/01/29 10:53:47 runner 5 connected
2026/01/29 10:53:49 runner 0 connected
2026/01/29 10:53:51 runner 7 connected
2026/01/29 10:53:54 runner 3 connected
2026/01/29 10:53:56 runner 1 connected
2026/01/29 10:54:01 runner 8 connected
2026/01/29 10:54:02 base crash: kernel BUG in txUnlock
2026/01/29 10:54:13 runner 2 connected
2026/01/29 10:54:27 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:54:35 runner 2 connected
2026/01/29 10:54:56 STAT { "buffer too small": 0, "candidate triage jobs": 51, "candidates": 56377, "comps overflows": 0, "corpus": 26572, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 13680, "coverage": 265652, "distributor delayed": 37042, "distributor undelayed": 37042, "distributor violated": 542, "exec candidate": 26894, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 6, "exec seeds": 0, "exec smash": 0, "exec total [base]": 50504, "exec total [new]": 122224, "exec triage": 82148, "executor restarts [base]": 201, "executor restarts [new]": 624, "fault jobs": 0, "fuzzer jobs": 51, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 9, "hints jobs": 0, "max signal": 267585, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 26894, "no exec duration": 44815000000, "no exec requests": 388, "pending": 19, "prog exec time": 239, "reproducing": 0, "rpc recv": 8227762192, "rpc sent": 779050536, "signal": 262371, "smash jobs": 0, "triage jobs": 0, "vm output": 19232158, "vm restarts [base]": 24, "vm restarts [new]": 106 }
2026/01/29 10:55:00 runner 0 connected
2026/01/29 10:55:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:24 runner 1 connected
2026/01/29 10:55:26 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:37 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:48 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:55:50 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:55:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:56:05 runner 5 connected
2026/01/29 10:56:12 runner 1 connected
2026/01/29 10:56:16 runner 0 connected
2026/01/29 10:56:22 runner 4 connected
2026/01/29 10:56:26 runner 2 connected
2026/01/29 10:56:30 runner 3 connected
2026/01/29 10:56:31 runner 6 connected
2026/01/29 10:56:38 runner 8 connected
2026/01/29 10:56:40 runner 0 connected
2026/01/29 10:56:44 runner 7 connected
2026/01/29 10:57:01 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2026/01/29 10:57:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:57:27 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:57:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2026/01/29 10:57:47 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2026/01/29 10:57:58 runner 5 connected
2026/01/29 10:58:05 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2026/01/29 10:58:12 runner 1 connected
2026/01/29 10:58:21 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:58:21 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:58:25 runner 7 connected
2026/01/29 10:58:26 runner 6 connected
2026/01/29 10:58:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:58:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:58:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:58:45 runner 0 connected
2026/01/29 10:58:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:58:55 runner 2 connected
2026/01/29 10:59:11 runner 1 connected
2026/01/29 10:59:18 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:59:18 runner 5 connected
2026/01/29 10:59:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:59:28 runner 4 connected
2026/01/29 10:59:29 runner 3 connected
2026/01/29 10:59:36 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:59:39 runner 1 connected
2026/01/29 10:59:39 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 10:59:44 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:59:51 runner 8 connected
2026/01/29 10:59:55 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 10:59:56 STAT { "buffer too small": 0, "candidate triage jobs": 36, "candidates": 53495, "comps overflows": 0, "corpus": 29445, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 15055, "coverage": 272900, "distributor delayed": 41781, "distributor undelayed": 41766, "distributor violated": 574, "exec candidate": 29776, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 57645, "exec total [new]": 136438, "exec triage": 90924, "executor restarts [base]": 229, "executor restarts [new]": 695, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 274781, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 29776, "no exec duration": 44815000000, "no exec requests": 388, "pending": 19, "prog exec time": 268, "reproducing": 0, "rpc recv": 9470622620, "rpc sent": 897421056, "signal": 269571, "smash jobs": 0, "triage jobs": 0, "vm output": 21936959, "vm restarts [base]": 29, "vm restarts [new]": 125 }
2026/01/29 11:00:14 runner 6 connected
2026/01/29 11:00:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:00:25 runner 7 connected
2026/01/29 11:00:26 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:00:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:00:33 runner 0 connected
2026/01/29 11:00:35 runner 2 connected
2026/01/29 11:00:37 runner 2 connected
2026/01/29 11:00:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:00:45 runner 0 connected
2026/01/29 11:00:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:00:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:00 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:06 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:12 runner 1 connected
2026/01/29 11:01:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:17 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:01:17 runner 4 connected
2026/01/29 11:01:27 runner 8 connected
2026/01/29 11:01:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:36 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:39 runner 5 connected
2026/01/29 11:01:40 runner 3 connected
2026/01/29 11:01:47 runner 7 connected
2026/01/29 11:01:48 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:01:49 runner 6 connected
2026/01/29 11:01:49 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:01:54 runner 2 connected
2026/01/29 11:02:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:02:03 runner 0 connected
2026/01/29 11:02:05 runner 1 connected
2026/01/29 11:02:11 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:02:22 runner 1 connected
2026/01/29 11:02:25 runner 4 connected
2026/01/29 11:02:40 runner 0 connected
2026/01/29 11:02:45 runner 8 connected
2026/01/29 11:02:53 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:02:58 runner 5 connected
2026/01/29 11:03:01 runner 7 connected
2026/01/29 11:03:02 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:03:07 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:03:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:03:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:03:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:03:44 runner 1 connected
2026/01/29 11:03:51 runner 2 connected
2026/01/29 11:03:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:04:05 runner 4 connected
2026/01/29 11:04:06 runner 6 connected
2026/01/29 11:04:06 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:04:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:04:21 runner 8 connected
2026/01/29 11:04:34 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:04:38 runner 3 connected
2026/01/29 11:04:41 runner 5 connected
2026/01/29 11:04:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:04:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:04:56 STAT { "buffer too small": 0, "candidate triage jobs": 33, "candidates": 50780, "comps overflows": 0, "corpus": 32139, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 16495, "coverage": 279295, "distributor delayed": 46546, "distributor undelayed": 46543, "distributor violated": 632, "exec candidate": 32491, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 8, "exec seeds": 0, "exec smash": 0, "exec total [base]": 63950, "exec total [new]": 151199, "exec triage": 99187, "executor restarts [base]": 257, "executor restarts [new]": 776, "fault jobs": 0, "fuzzer jobs": 33, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 281106, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 32491, "no exec duration": 45286000000, "no exec requests": 394, "pending": 19, "prog exec time": 196, "reproducing": 0, "rpc recv": 10846741008, "rpc sent": 1024635856, "signal": 275924, "smash jobs": 0, "triage jobs": 0, "vm output": 24044922, "vm restarts [base]": 35, "vm restarts [new]": 148 }
2026/01/29 11:04:59 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:05:02 runner 2 connected
2026/01/29 11:05:14 runner 1 connected
2026/01/29 11:05:25 runner 0 connected
2026/01/29 11:05:41 runner 6 connected
2026/01/29 11:05:44 runner 0 connected
2026/01/29 11:05:49 runner 2 connected
2026/01/29 11:06:08 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:06:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:06:17 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:06:19 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:06:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:06:44 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:06:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:06:59 runner 8 connected
2026/01/29 11:07:05 runner 7 connected
2026/01/29 11:07:09 runner 3 connected
2026/01/29 11:07:12 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:07:14 runner 2 connected
2026/01/29 11:07:17 runner 0 connected
2026/01/29 11:07:33 runner 4 connected
2026/01/29 11:07:43 runner 6 connected
2026/01/29 11:07:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:08:03 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:08:10 runner 0 connected
2026/01/29 11:08:14 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:08:21 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:08:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:08:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:08:50 runner 2 connected
2026/01/29 11:09:01 runner 5 connected
2026/01/29 11:09:02 runner 8 connected
2026/01/29 11:09:14 runner 4 connected
2026/01/29 11:09:17 runner 3 connected
2026/01/29 11:09:21 runner 7 connected
2026/01/29 11:09:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:09:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:09:46 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:09:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:09:54 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:09:56 STAT { "buffer too small": 0, "candidate triage jobs": 34, "candidates": 47107, "comps overflows": 0, "corpus": 35764, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 18504, "coverage": 286982, "distributor delayed": 51486, "distributor undelayed": 51479, "distributor violated": 891, "exec candidate": 36164, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 10, "exec seeds": 0, "exec smash": 0, "exec total [base]": 72013, "exec total [new]": 171009, "exec triage": 110325, "executor restarts [base]": 282, "executor restarts [new]": 865, "fault jobs": 0, "fuzzer jobs": 34, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 288818, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 36164, "no exec duration": 49900000000, "no exec requests": 404, "pending": 19, "prog exec time": 233, "reproducing": 0, "rpc recv": 12026544348, "rpc sent": 1164379712, "signal": 283519, "smash jobs": 0, "triage jobs": 0, "vm output": 27265565, "vm restarts [base]": 39, "vm restarts [new]": 164 }
2026/01/29 11:10:02 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:10:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:10:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:10:21 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:10:25 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:10:25 runner 0 connected
2026/01/29 11:10:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:10:36 runner 5 connected
2026/01/29 11:10:37 runner 2 connected
2026/01/29 11:10:41 runner 7 connected
2026/01/29 11:10:43 runner 1 connected
2026/01/29 11:10:52 runner 4 connected
2026/01/29 11:10:54 runner 6 connected
2026/01/29 11:11:11 runner 0 connected
2026/01/29 11:11:12 runner 1 connected
2026/01/29 11:11:15 runner 3 connected
2026/01/29 11:11:21 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:11:25 runner 8 connected
2026/01/29 11:11:33 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:11:39 patched crashed: possible deadlock in ocfs2_del_inode_from_orphan [need repro = true]
2026/01/29 11:11:39 scheduled a reproduction of 'possible deadlock in ocfs2_del_inode_from_orphan'
2026/01/29 11:11:57 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:11:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:12:08 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:12:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:12:18 runner 2 connected
2026/01/29 11:12:23 runner 5 connected
2026/01/29 11:12:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:12:25 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:12:29 runner 0 connected
2026/01/29 11:12:34 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:12:37 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:12:49 runner 6 connected
2026/01/29 11:12:54 runner 8 connected
2026/01/29 11:12:58 runner 1 connected
2026/01/29 11:12:59 runner 3 connected
2026/01/29 11:13:13 runner 4 connected
2026/01/29 11:13:22 runner 1 connected
2026/01/29 11:13:24 runner 7 connected
2026/01/29 11:13:27 runner 2 connected
2026/01/29 11:13:51 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:14:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:14:14 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:14:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:14:48 runner 8 connected
2026/01/29 11:14:56 STAT { "buffer too small": 0, "candidate triage jobs": 36, "candidates": 44139, "comps overflows": 0, "corpus": 38699, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 20185, "coverage": 292628, "distributor delayed": 56081, "distributor undelayed": 56079, "distributor violated": 961, "exec candidate": 39132, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 79549, "exec total [new]": 187391, "exec triage": 119291, "executor restarts [base]": 306, "executor restarts [new]": 948, "fault jobs": 0, "fuzzer jobs": 36, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 294565, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 39132, "no exec duration": 49982000000, "no exec requests": 407, "pending": 20, "prog exec time": 251, "reproducing": 0, "rpc recv": 13256491148, "rpc sent": 1275401752, "signal": 289107, "smash jobs": 0, "triage jobs": 0, "vm output": 29927622, "vm restarts [base]": 44, "vm restarts [new]": 182 }
2026/01/29 11:14:58 runner 1 connected
2026/01/29 11:15:06 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:15:11 runner 7 connected
2026/01/29 11:15:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:15:22 runner 2 connected
2026/01/29 11:15:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:03 runner 6 connected
2026/01/29 11:16:07 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:13 runner 3 connected
2026/01/29 11:16:18 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:19 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:25 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:16:34 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:40 patched crashed: INFO: task hung in evict [need repro = true]
2026/01/29 11:16:40 scheduled a reproduction of 'INFO: task hung in evict'
2026/01/29 11:16:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:53 runner 7 connected
2026/01/29 11:16:54 base crash: INFO: task hung in evict
2026/01/29 11:16:55 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:16:56 runner 0 connected
2026/01/29 11:16:59 runner 2 connected
2026/01/29 11:17:07 runner 8 connected
2026/01/29 11:17:09 runner 4 connected
2026/01/29 11:17:14 runner 1 connected
2026/01/29 11:17:23 runner 6 connected
2026/01/29 11:17:31 runner 5 connected
2026/01/29 11:17:35 runner 1 connected
2026/01/29 11:17:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:17:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:17:43 runner 0 connected
2026/01/29 11:17:45 runner 3 connected
2026/01/29 11:17:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:02 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:11 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:18:12 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:18:33 runner 0 connected
2026/01/29 11:18:35 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:18:38 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:18:40 runner 8 connected
2026/01/29 11:18:42 runner 7 connected
2026/01/29 11:18:52 runner 4 connected
2026/01/29 11:18:52 runner 1 connected
2026/01/29 11:18:54 runner 6 connected
2026/01/29 11:18:59 runner 0 connected
2026/01/29
11:18:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:18:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:01 runner 2 connected 2026/01/29 11:19:02 runner 3 connected 2026/01/29 11:19:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:12 runner 5 connected 2026/01/29 11:19:12 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:24 runner 1 connected 2026/01/29 11:19:28 runner 2 connected 2026/01/29 11:19:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:19:49 runner 0 connected 2026/01/29 11:19:49 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:19:50 runner 8 connected 2026/01/29 11:19:56 STAT { "buffer too small": 0, "candidate triage jobs": 186, "candidates": 42217, "comps overflows": 0, "corpus": 40449, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 21181, "coverage": 295900, "distributor delayed": 59213, "distributor undelayed": 59033, "distributor violated": 961, "exec candidate": 41054, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 15, "exec seeds": 0, "exec smash": 0, "exec total [base]": 86629, "exec total [new]": 197974, "exec triage": 124699, "executor restarts [base]": 329, "executor restarts [new]": 1023, "fault jobs": 0, "fuzzer jobs": 186, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 298070, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, 
"minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 41054, "no exec duration": 49982000000, "no exec requests": 407, "pending": 21, "prog exec time": 148, "reproducing": 0, "rpc recv": 14577310080, "rpc sent": 1397923064, "signal": 292331, "smash jobs": 0, "triage jobs": 0, "vm output": 31766403, "vm restarts [base]": 49, "vm restarts [new]": 207 } 2026/01/29 11:19:58 runner 7 connected 2026/01/29 11:20:00 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:20:01 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:20:02 runner 4 connected 2026/01/29 11:20:10 runner 6 connected 2026/01/29 11:20:11 runner 1 connected 2026/01/29 11:20:12 runner 3 connected 2026/01/29 11:20:22 runner 2 connected 2026/01/29 11:20:36 patched crashed: WARNING in iomap_zero_range [need repro = true] 2026/01/29 11:20:36 scheduled a reproduction of 'WARNING in iomap_zero_range' 2026/01/29 11:20:39 runner 1 connected 2026/01/29 11:20:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:20:47 patched crashed: WARNING in iomap_zero_range [need repro = true] 2026/01/29 11:20:47 scheduled a reproduction of 'WARNING in iomap_zero_range' 2026/01/29 11:20:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:20:56 runner 0 connected 2026/01/29 11:20:59 runner 2 connected 2026/01/29 11:20:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:21:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:21:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:21:28 base crash: WARNING in iomap_zero_range 2026/01/29 11:21:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:21:33 runner 8 connected 2026/01/29 11:21:33 runner 5 connected 2026/01/29 11:21:37 runner 1 
connected 2026/01/29 11:21:42 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:21:43 runner 6 connected 2026/01/29 11:21:48 runner 7 connected 2026/01/29 11:22:00 runner 3 connected 2026/01/29 11:22:17 runner 0 connected 2026/01/29 11:22:17 runner 2 connected 2026/01/29 11:22:20 runner 0 connected 2026/01/29 11:22:31 runner 2 connected 2026/01/29 11:22:56 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:23:08 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:23:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:23:09 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:23:18 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:23:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:23:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:23:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:23:53 runner 1 connected 2026/01/29 11:23:58 runner 7 connected 2026/01/29 11:23:59 runner 0 connected 2026/01/29 11:24:05 runner 3 connected 2026/01/29 11:24:08 runner 4 connected 2026/01/29 11:24:10 runner 8 connected 2026/01/29 11:24:21 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:24:32 runner 1 connected 2026/01/29 11:24:40 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:24:47 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:24:50 runner 5 connected 2026/01/29 11:24:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:24:56 STAT { "buffer too small": 0, "candidate triage jobs": 40, "candidates": 39097, "comps overflows": 0, "corpus": 43675, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 22858, "coverage": 302062, "distributor 
delayed": 63955, "distributor undelayed": 63936, "distributor violated": 1053, "exec candidate": 44174, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 91359, "exec total [new]": 217064, "exec triage": 134582, "executor restarts [base]": 359, "executor restarts [new]": 1119, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 304394, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 44174, "no exec duration": 49982000000, "no exec requests": 407, "pending": 23, "prog exec time": 277, "reproducing": 0, "rpc recv": 15853198540, "rpc sent": 1529242352, "signal": 298486, "smash jobs": 0, "triage jobs": 0, "vm output": 35154298, "vm restarts [base]": 56, "vm restarts [new]": 227 } 2026/01/29 11:24:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:24:59 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:25:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:25:14 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:25:18 runner 1 connected 2026/01/29 11:25:36 runner 6 connected 2026/01/29 11:25:36 runner 3 connected 2026/01/29 11:25:39 runner 7 connected 2026/01/29 11:25:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:25:48 runner 1 connected 2026/01/29 11:25:48 runner 2 connected 2026/01/29 11:25:51 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:25:53 runner 0 connected 2026/01/29 11:25:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 
2026/01/29 11:26:02 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:26:04 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:26:12 runner 8 connected 2026/01/29 11:26:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:26:30 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:26:33 runner 2 connected 2026/01/29 11:26:35 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:26:41 runner 4 connected 2026/01/29 11:26:46 runner 5 connected 2026/01/29 11:26:54 runner 1 connected 2026/01/29 11:26:57 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:26:58 runner 7 connected 2026/01/29 11:27:03 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:16 runner 6 connected 2026/01/29 11:27:20 runner 3 connected 2026/01/29 11:27:23 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:27:25 runner 2 connected 2026/01/29 11:27:33 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:27:34 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:44 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:27:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:46 runner 1 connected 2026/01/29 11:27:48 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:52 runner 8 connected 2026/01/29 11:27:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:27:59 runner 2 connected 2026/01/29 
11:28:05 runner 0 connected 2026/01/29 11:28:11 runner 0 connected 2026/01/29 11:28:14 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:28:23 runner 1 connected 2026/01/29 11:28:23 runner 4 connected 2026/01/29 11:28:31 runner 5 connected 2026/01/29 11:28:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:28:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:28:34 runner 2 connected 2026/01/29 11:28:34 runner 3 connected 2026/01/29 11:28:38 runner 7 connected 2026/01/29 11:28:42 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:28:48 runner 6 connected 2026/01/29 11:28:51 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:28:53 patched crashed: possible deadlock in ext4_writepages [need repro = true] 2026/01/29 11:28:53 scheduled a reproduction of 'possible deadlock in ext4_writepages' 2026/01/29 11:28:55 patched crashed: WARNING in hci_conn_timeout [need repro = false] 2026/01/29 11:28:56 patched crashed: WARNING in hci_conn_timeout [need repro = false] 2026/01/29 11:28:57 patched crashed: WARNING in hci_conn_timeout [need repro = false] 2026/01/29 11:29:00 patched crashed: WARNING in hci_conn_timeout [need repro = false] 2026/01/29 11:29:02 runner 1 connected 2026/01/29 11:29:06 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:29:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:29:21 runner 0 connected 2026/01/29 11:29:22 runner 8 connected 2026/01/29 11:29:30 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:29:31 runner 0 connected 2026/01/29 11:29:41 runner 1 connected 2026/01/29 11:29:44 runner 4 connected 2026/01/29 11:29:45 runner 5 connected 2026/01/29 11:29:46 runner 2 connected 2026/01/29 11:29:46 runner 3 connected 2026/01/29 11:29:49 runner 7 connected 2026/01/29 11:29:52 patched crashed: kernel 
BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:29:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:29:55 runner 2 connected 2026/01/29 11:29:56 STAT { "buffer too small": 0, "candidate triage jobs": 2, "candidates": 38104, "comps overflows": 0, "corpus": 44688, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 23434, "coverage": 304042, "distributor delayed": 66345, "distributor undelayed": 66345, "distributor violated": 1117, "exec candidate": 45167, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 16, "exec seeds": 0, "exec smash": 0, "exec total [base]": 94826, "exec total [new]": 224036, "exec triage": 137649, "executor restarts [base]": 389, "executor restarts [new]": 1219, "fault jobs": 0, "fuzzer jobs": 2, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 306019, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45167, "no exec duration": 49982000000, "no exec requests": 407, "pending": 24, "prog exec time": 164, "reproducing": 0, "rpc recv": 17278125096, "rpc sent": 1656988440, "signal": 300455, "smash jobs": 0, "triage jobs": 0, "vm output": 37106058, "vm restarts [base]": 66, "vm restarts [new]": 256 } 2026/01/29 11:30:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:30:05 base crash: possible deadlock in ext4_writepages 2026/01/29 11:30:08 runner 6 connected 2026/01/29 11:30:14 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:30:15 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:30:19 runner 1 connected 2026/01/29 11:30:30 patched crashed: possible deadlock in ocfs2_init_acl [need repro = 
false] 2026/01/29 11:30:41 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2026/01/29 11:30:42 runner 0 connected 2026/01/29 11:30:43 runner 8 connected 2026/01/29 11:30:43 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:30:53 runner 4 connected 2026/01/29 11:30:54 runner 1 connected 2026/01/29 11:30:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:31:04 runner 7 connected 2026/01/29 11:31:04 runner 0 connected 2026/01/29 11:31:20 runner 3 connected 2026/01/29 11:31:32 runner 2 connected 2026/01/29 11:31:33 runner 2 connected 2026/01/29 11:31:37 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false] 2026/01/29 11:31:38 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:31:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:31:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:31:46 runner 6 connected 2026/01/29 11:31:51 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:31:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:32:00 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:32:16 base crash: possible deadlock in ext4_writepages 2026/01/29 11:32:30 runner 0 connected 2026/01/29 11:32:33 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:32:34 runner 5 connected 2026/01/29 11:32:36 runner 7 connected 2026/01/29 11:32:39 runner 1 connected 2026/01/29 11:32:41 runner 4 connected 2026/01/29 11:32:46 runner 2 connected 2026/01/29 11:32:49 runner 1 connected 2026/01/29 11:33:00 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:33:05 runner 0 connected 2026/01/29 11:33:26 base crash: WARNING in hci_conn_timeout 2026/01/29 11:33:29 patched crashed: kernel BUG in 
hpage_collapse_scan_file [need repro = false] 2026/01/29 11:33:31 runner 3 connected 2026/01/29 11:33:49 runner 8 connected 2026/01/29 11:33:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:33:56 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:34:08 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:34:11 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:34:15 runner 1 connected 2026/01/29 11:34:19 runner 6 connected 2026/01/29 11:34:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:34:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:34:46 runner 3 connected 2026/01/29 11:34:54 runner 2 connected 2026/01/29 11:34:56 STAT { "buffer too small": 0, "candidate triage jobs": 24, "candidates": 37335, "comps overflows": 0, "corpus": 45413, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 24850, "coverage": 305649, "distributor delayed": 67839, "distributor undelayed": 67831, "distributor violated": 1142, "exec candidate": 45936, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 17, "exec seeds": 0, "exec smash": 0, "exec total [base]": 98597, "exec total [new]": 235058, "exec triage": 139957, "executor restarts [base]": 417, "executor restarts [new]": 1297, "fault jobs": 0, "fuzzer jobs": 24, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 307703, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 45936, "no exec duration": 50616000000, "no exec requests": 410, "pending": 24, "prog exec time": 399, "reproducing": 0, "rpc recv": 18312606120, "rpc sent": 1768039560, 
"signal": 302062, "smash jobs": 0, "triage jobs": 0, "vm output": 39181903, "vm restarts [base]": 73, "vm restarts [new]": 275 } 2026/01/29 11:35:06 runner 0 connected 2026/01/29 11:35:08 runner 0 connected 2026/01/29 11:35:15 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:35:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:35:25 runner 2 connected 2026/01/29 11:35:32 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:35:40 runner 8 connected 2026/01/29 11:35:48 patched crashed: INFO: task hung in corrupted [need repro = true] 2026/01/29 11:35:48 scheduled a reproduction of 'INFO: task hung in corrupted' 2026/01/29 11:35:52 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2026/01/29 11:36:04 patched crashed: kernel BUG in jfs_evict_inode [need repro = false] 2026/01/29 11:36:12 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:36:12 runner 2 connected 2026/01/29 11:36:12 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:36:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:36:14 patched crashed: INFO: task hung in reg_process_self_managed_hints [need repro = true] 2026/01/29 11:36:14 scheduled a reproduction of 'INFO: task hung in reg_process_self_managed_hints' 2026/01/29 11:36:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:36:19 patched crashed: INFO: task hung in reg_process_self_managed_hints [need repro = true] 2026/01/29 11:36:19 scheduled a reproduction of 'INFO: task hung in reg_process_self_managed_hints' 2026/01/29 11:36:20 runner 3 connected 2026/01/29 11:36:29 runner 0 connected 2026/01/29 11:36:38 runner 4 connected 2026/01/29 11:36:38 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:36:41 runner 0 connected 2026/01/29 11:36:44 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = 
false] 2026/01/29 11:36:51 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:36:54 runner 8 connected 2026/01/29 11:37:00 runner 1 connected 2026/01/29 11:37:01 runner 6 connected 2026/01/29 11:37:03 runner 5 connected 2026/01/29 11:37:03 runner 1 connected 2026/01/29 11:37:07 runner 2 connected 2026/01/29 11:37:09 runner 7 connected 2026/01/29 11:37:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:37:29 runner 2 connected 2026/01/29 11:37:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:37:33 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:37:38 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:37:40 runner 3 connected 2026/01/29 11:37:40 runner 0 connected 2026/01/29 11:37:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:37:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:38:09 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:38:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:38:13 runner 1 connected 2026/01/29 11:38:20 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:38:22 runner 5 connected 2026/01/29 11:38:25 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:38:27 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:38:29 runner 4 connected 2026/01/29 11:38:31 runner 6 connected 2026/01/29 11:38:33 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:38:33 runner 8 connected 2026/01/29 11:38:34 runner 2 connected 2026/01/29 11:38:47 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:38:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 
11:38:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:38:59 runner 3 connected 2026/01/29 11:38:59 runner 2 connected 2026/01/29 11:39:07 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:39:09 runner 0 connected 2026/01/29 11:39:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:39:14 runner 7 connected 2026/01/29 11:39:17 runner 1 connected 2026/01/29 11:39:18 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:39:23 runner 0 connected 2026/01/29 11:39:35 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:39:37 runner 1 connected 2026/01/29 11:39:45 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:39:46 runner 6 connected 2026/01/29 11:39:48 runner 8 connected 2026/01/29 11:39:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:39:56 STAT { "buffer too small": 0, "candidate triage jobs": 10, "candidates": 36923, "comps overflows": 0, "corpus": 45812, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 25950, "coverage": 306362, "distributor delayed": 68779, "distributor undelayed": 68777, "distributor violated": 1142, "exec candidate": 46348, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 18, "exec seeds": 0, "exec smash": 0, "exec total [base]": 100972, "exec total [new]": 243388, "exec triage": 141268, "executor restarts [base]": 449, "executor restarts [new]": 1386, "fault jobs": 0, "fuzzer jobs": 10, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 308557, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 46348, "no exec duration": 50637000000, 
"no exec requests": 411, "pending": 27, "prog exec time": 221, "reproducing": 0, "rpc recv": 19543563240, "rpc sent": 1873502984, "signal": 302779, "smash jobs": 0, "triage jobs": 0, "vm output": 40807174, "vm restarts [base]": 82, "vm restarts [new]": 300 } 2026/01/29 11:39:58 runner 5 connected 2026/01/29 11:39:59 runner 2 connected 2026/01/29 11:40:08 runner 2 connected 2026/01/29 11:40:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:40:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:40:23 runner 0 connected 2026/01/29 11:40:26 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:40:35 runner 1 connected 2026/01/29 11:40:42 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:40:43 runner 7 connected 2026/01/29 11:40:57 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:41:02 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:41:05 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:41:05 runner 3 connected 2026/01/29 11:41:06 runner 0 connected 2026/01/29 11:41:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:41:15 runner 6 connected 2026/01/29 11:41:25 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:41:31 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:41:32 runner 2 connected 2026/01/29 11:41:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:41:46 runner 4 connected 2026/01/29 11:41:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:41:51 runner 0 connected 2026/01/29 11:41:55 runner 8 connected 2026/01/29 11:41:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:42:04 runner 2 connected 2026/01/29 
11:42:15 runner 7 connected 2026/01/29 11:42:27 runner 3 connected 2026/01/29 11:42:28 runner 1 connected 2026/01/29 11:42:31 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:42:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:42:40 runner 5 connected 2026/01/29 11:42:46 runner 1 connected 2026/01/29 11:42:49 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:42:49 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:43:06 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:43:07 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:43:14 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:43:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:43:17 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:43:20 runner 2 connected 2026/01/29 11:43:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:43:32 runner 8 connected 2026/01/29 11:43:39 runner 7 connected 2026/01/29 11:43:39 runner 1 connected 2026/01/29 11:43:55 runner 0 connected 2026/01/29 11:43:55 runner 5 connected 2026/01/29 11:43:59 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:44:05 runner 4 connected 2026/01/29 11:44:06 runner 2 connected 2026/01/29 11:44:06 runner 1 connected 2026/01/29 11:44:12 runner 6 connected 2026/01/29 11:44:14 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:44:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:44:29 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:44:49 runner 1 connected 2026/01/29 11:44:53 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:44:54 patched crashed: kernel BUG in 
hpage_collapse_scan_file [need repro = false] 2026/01/29 11:44:56 STAT { "buffer too small": 0, "candidate triage jobs": 8, "candidates": 36232, "comps overflows": 0, "corpus": 46461, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 27999, "coverage": 307766, "distributor delayed": 69886, "distributor undelayed": 69885, "distributor violated": 1146, "exec candidate": 47039, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 103984, "exec total [new]": 258192, "exec triage": 143421, "executor restarts [base]": 476, "executor restarts [new]": 1476, "fault jobs": 0, "fuzzer jobs": 8, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 4, "hints jobs": 0, "max signal": 310018, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47039, "no exec duration": 51126000000, "no exec requests": 416, "pending": 27, "prog exec time": 293, "reproducing": 0, "rpc recv": 20661098076, "rpc sent": 1994769376, "signal": 304172, "smash jobs": 0, "triage jobs": 0, "vm output": 43274906, "vm restarts [base]": 92, "vm restarts [new]": 320 } 2026/01/29 11:44:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:04 runner 0 connected 2026/01/29 11:45:06 runner 7 connected 2026/01/29 11:45:09 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:45:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:18 runner 1 connected 2026/01/29 11:45:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:25 patched crashed: kernel BUG in 
hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:38 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:45:43 runner 0 connected 2026/01/29 11:45:43 runner 2 connected 2026/01/29 11:45:46 runner 4 connected 2026/01/29 11:45:51 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:46:00 runner 1 connected 2026/01/29 11:46:03 runner 8 connected 2026/01/29 11:46:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:46:05 runner 5 connected 2026/01/29 11:46:11 runner 6 connected 2026/01/29 11:46:12 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:46:14 runner 3 connected 2026/01/29 11:46:18 runner 7 connected 2026/01/29 11:46:28 runner 1 connected 2026/01/29 11:46:34 patched crashed: WARNING in iomap_zero_range [need repro = false] 2026/01/29 11:46:36 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:46:40 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:46:42 runner 0 connected 2026/01/29 11:46:44 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:46:54 runner 0 connected 2026/01/29 11:47:01 runner 2 connected 2026/01/29 11:47:06 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:47:25 runner 8 connected 2026/01/29 11:47:27 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:47:31 runner 5 connected 2026/01/29 11:47:33 runner 3 connected 2026/01/29 11:47:37 runner 7 connected 2026/01/29 11:47:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:47:45 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:47:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:48:03 runner 0 connected 
2026/01/29 11:48:08 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2026/01/29 11:48:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:48:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:48:25 runner 2 connected
2026/01/29 11:48:30 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:48:34 runner 2 connected
2026/01/29 11:48:34 runner 1 connected
2026/01/29 11:48:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:48:44 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:48:49 runner 4 connected
2026/01/29 11:48:58 runner 0 connected
2026/01/29 11:48:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:00 runner 7 connected
2026/01/29 11:49:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:13 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:19 runner 8 connected
2026/01/29 11:49:21 runner 6 connected
2026/01/29 11:49:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:32 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:49:33 runner 3 connected
2026/01/29 11:49:33 runner 1 connected
2026/01/29 11:49:36 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:40 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:49 runner 2 connected
2026/01/29 11:49:50 runner 1 connected
2026/01/29 11:49:50 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 11:49:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 11:49:56 timed out waiting for corpus triage
2026/01/29 11:49:56 starting bug reproductions 2026/01/29 11:49:56 starting bug reproductions (max 6 VMs, 4 repros) 2026/01/29 11:49:56 reproduction of "kernel BUG in hpage_collapse_scan_file" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in hpage_collapse_scan_file" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in hpage_collapse_scan_file" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in hpage_collapse_scan_file" aborted: it's no longer needed 2026/01/29 11:49:56 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 35902, "comps overflows": 0, "corpus": 46757, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 29414, "coverage": 308378, "distributor delayed": 70437, "distributor undelayed": 70437, "distributor violated": 1147, "exec candidate": 47369, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 108900, "exec total [new]": 267914, "exec triage": 144454, "executor restarts [base]": 507, "executor restarts [new]": 1563, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 310705, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47359, "no exec duration": 51126000000, "no exec requests": 416, "pending": 22, "prog exec time": 256, "reproducing": 0, "rpc recv": 21880206756, "rpc sent": 2113652488, "signal": 304783, "smash jobs": 0, "triage jobs": 0, "vm output": 45016398, "vm restarts [base]": 99, "vm restarts [new]": 346 } 2026/01/29 11:49:56 reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2026/01/29 11:49:56 
reproduction of "possible deadlock in ocfs2_reserve_suballoc_bits" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "possible deadlock in ocfs2_init_acl" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in jfs_evict_inode" aborted: it's no longer needed 2026/01/29 11:49:56 start reproducing 'KASAN: slab-use-after-free Read in jfs_lazycommit' 2026/01/29 11:49:56 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "possible deadlock in ocfs2_try_remove_refcount_tree" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2026/01/29 11:49:56 start reproducing 'INFO: task hung in switchdev_deferred_process_work' 2026/01/29 11:49:56 reproduction of "kernel BUG in txUnlock" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "INFO: task hung in evict" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "WARNING in iomap_zero_range" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "WARNING in iomap_zero_range" aborted: it's no longer needed 2026/01/29 11:49:56 reproduction of "possible deadlock in ext4_writepages" aborted: it's no longer needed 2026/01/29 11:49:56 start reproducing 'possible deadlock in ocfs2_del_inode_from_orphan' 2026/01/29 11:49:56 start reproducing 'INFO: task hung in corrupted' 2026/01/29 11:49:59 patched crashed: 
kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:50:14 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:50:22 runner 2 connected 2026/01/29 11:50:28 runner 7 connected 2026/01/29 11:50:28 runner 6 connected 2026/01/29 11:50:47 runner 0 connected 2026/01/29 11:50:48 runner 8 connected 2026/01/29 11:50:49 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:50:49 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:51:02 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:51:03 runner 1 connected 2026/01/29 11:51:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:51:38 runner 6 connected 2026/01/29 11:51:38 runner 7 connected 2026/01/29 11:51:52 runner 2 connected 2026/01/29 11:52:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:52:06 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:52:07 runner 8 connected 2026/01/29 11:52:17 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:52:26 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:52:27 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:52:28 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:52:50 runner 7 connected 2026/01/29 11:52:56 runner 0 connected 2026/01/29 11:53:05 runner 2 connected 2026/01/29 11:53:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:53:16 runner 8 connected 2026/01/29 11:53:17 runner 1 connected 2026/01/29 11:53:23 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:53:24 runner 6 connected 2026/01/29 11:53:29 base crash: possible deadlock in ocfs2_init_acl 2026/01/29 11:53:55 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:54:04 patched crashed: kernel 
BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:54:05 runner 7 connected 2026/01/29 11:54:12 runner 0 connected 2026/01/29 11:54:17 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:54:17 runner 2 connected 2026/01/29 11:54:37 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:54:51 runner 8 connected 2026/01/29 11:54:52 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:54:54 runner 6 connected 2026/01/29 11:54:56 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 35849, "comps overflows": 0, "corpus": 46763, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 29644, "coverage": 308383, "distributor delayed": 70460, "distributor undelayed": 70456, "distributor violated": 1149, "exec candidate": 47422, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 111815, "exec total [new]": 269590, "exec triage": 144484, "executor restarts [base]": 540, "executor restarts [new]": 1590, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 310720, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47369, "no exec duration": 51126000000, "no exec requests": 416, "pending": 1, "prog exec time": 303, "reproducing": 4, "rpc recv": 22559251760, "rpc sent": 2152726464, "signal": 304788, "smash jobs": 0, "triage jobs": 0, "vm output": 46759327, "vm restarts [base]": 108, "vm restarts [new]": 358 } 2026/01/29 11:55:06 runner 1 connected 2026/01/29 11:55:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:55:26 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = 
false] 2026/01/29 11:55:26 runner 2 connected 2026/01/29 11:55:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:55:39 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:55:41 runner 0 connected 2026/01/29 11:56:04 runner 7 connected 2026/01/29 11:56:16 runner 6 connected 2026/01/29 11:56:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:56:28 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:56:29 runner 8 connected 2026/01/29 11:56:34 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:56:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:56:36 runner 1 connected 2026/01/29 11:56:48 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:57:17 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:57:18 runner 0 connected 2026/01/29 11:57:21 runner 7 connected 2026/01/29 11:57:24 runner 2 connected 2026/01/29 11:57:25 runner 6 connected 2026/01/29 11:57:37 runner 8 connected 2026/01/29 11:57:44 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:57:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:57:52 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:58:08 runner 1 connected 2026/01/29 11:58:29 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:58:30 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:58:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:58:40 runner 7 connected 2026/01/29 11:58:41 runner 0 connected 2026/01/29 11:58:42 runner 2 connected 2026/01/29 11:59:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:59:17 runner 1 connected 2026/01/29 11:59:22 runner 6 connected 2026/01/29 11:59:26 runner 8 connected 2026/01/29 
11:59:38 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 11:59:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 11:59:56 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35812, "comps overflows": 0, "corpus": 46771, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 29975, "coverage": 308397, "distributor delayed": 70483, "distributor undelayed": 70479, "distributor violated": 1149, "exec candidate": 47459, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 19, "exec seeds": 0, "exec smash": 0, "exec total [base]": 114743, "exec total [new]": 271738, "exec triage": 144529, "executor restarts [base]": 579, "executor restarts [new]": 1624, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 310737, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47382, "no exec duration": 51126000000, "no exec requests": 416, "pending": 1, "prog exec time": 244, "reproducing": 4, "rpc recv": 23296992956, "rpc sent": 2196997288, "signal": 304802, "smash jobs": 0, "triage jobs": 0, "vm output": 51103229, "vm restarts [base]": 118, "vm restarts [new]": 367 } 2026/01/29 11:59:58 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:00:02 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:00:05 runner 7 connected 2026/01/29 12:00:06 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:00:35 runner 0 connected 2026/01/29 12:00:41 runner 8 connected 2026/01/29 12:00:48 runner 2 connected 2026/01/29 12:00:52 runner 6 connected 2026/01/29 12:00:55 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:00:55 
runner 1 connected 2026/01/29 12:01:09 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:01:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:01:26 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:01:44 runner 0 connected 2026/01/29 12:01:59 runner 2 connected 2026/01/29 12:02:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:02:12 runner 7 connected 2026/01/29 12:02:16 runner 1 connected 2026/01/29 12:02:19 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:02:31 repro finished 'possible deadlock in ocfs2_del_inode_from_orphan', repro=false crepro=false desc='' hub=false from_dashboard=false 2026/01/29 12:02:31 start reproducing 'INFO: task hung in reg_process_self_managed_hints' 2026/01/29 12:02:31 failed repro for "possible deadlock in ocfs2_del_inode_from_orphan", err=%!s() 2026/01/29 12:02:31 "possible deadlock in ocfs2_del_inode_from_orphan": saved crash log into 1769688151.crash.log 2026/01/29 12:02:31 "possible deadlock in ocfs2_del_inode_from_orphan": saved repro log into 1769688151.repro.log 2026/01/29 12:03:07 runner 6 connected 2026/01/29 12:03:09 runner 2 connected 2026/01/29 12:03:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:03:20 patched crashed: KASAN: slab-use-after-free Read in jfs_syncpt [need repro = true] 2026/01/29 12:03:20 scheduled a reproduction of 'KASAN: slab-use-after-free Read in jfs_syncpt' 2026/01/29 12:04:01 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:04:06 runner 8 connected 2026/01/29 12:04:11 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:04:18 runner 7 connected 2026/01/29 12:04:19 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:04:41 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:04:51 runner 2 connected 2026/01/29 12:04:51 
base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:04:56 STAT { "buffer too small": 0, "candidate triage jobs": 20, "candidates": 35722, "comps overflows": 0, "corpus": 46818, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 30680, "coverage": 308519, "distributor delayed": 70603, "distributor undelayed": 70583, "distributor violated": 1161, "exec candidate": 47549, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 20, "exec seeds": 0, "exec smash": 0, "exec total [base]": 118672, "exec total [new]": 276584, "exec triage": 144714, "executor restarts [base]": 601, "executor restarts [new]": 1652, "fault jobs": 0, "fuzzer jobs": 20, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 1, "hints jobs": 0, "max signal": 310910, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47450, "no exec duration": 51126000000, "no exec requests": 416, "pending": 1, "prog exec time": 355, "reproducing": 4, "rpc recv": 23832366248, "rpc sent": 2243963304, "signal": 304907, "smash jobs": 0, "triage jobs": 0, "vm output": 53839125, "vm restarts [base]": 126, "vm restarts [new]": 374 } 2026/01/29 12:05:09 runner 0 connected 2026/01/29 12:05:09 runner 6 connected 2026/01/29 12:05:24 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:05:32 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:05:39 runner 7 connected 2026/01/29 12:05:42 runner 1 connected 2026/01/29 12:05:47 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:06:09 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:06:15 runner 8 connected 2026/01/29 12:06:22 runner 0 connected 2026/01/29 12:06:37 runner 6 connected 2026/01/29 12:07:04 
patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:07:04 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:07:06 runner 2 connected 2026/01/29 12:07:07 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:07:12 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:07:54 runner 6 connected 2026/01/29 12:07:54 runner 7 connected 2026/01/29 12:07:58 runner 0 connected 2026/01/29 12:08:01 runner 1 connected 2026/01/29 12:08:15 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:08:23 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:08:26 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:08:27 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:08:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:08:54 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:09:04 runner 2 connected 2026/01/29 12:09:12 runner 0 connected 2026/01/29 12:09:16 runner 7 connected 2026/01/29 12:09:16 runner 8 connected 2026/01/29 12:09:17 runner 6 connected 2026/01/29 12:09:35 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:09:43 runner 1 connected 2026/01/29 12:09:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 35661, "comps overflows": 0, "corpus": 46851, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 31273, "coverage": 308579, "distributor delayed": 70649, "distributor undelayed": 70649, "distributor violated": 1161, "exec candidate": 47610, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 122193, "exec total [new]": 280696, "exec triage": 144859, "executor restarts [base]": 636, "executor restarts [new]": 1687, "fault jobs": 0, "fuzzer 
jobs": 0, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 3, "hints jobs": 0, "max signal": 310975, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47474, "no exec duration": 51126000000, "no exec requests": 416, "pending": 1, "prog exec time": 232, "reproducing": 4, "rpc recv": 24521655760, "rpc sent": 2290127192, "signal": 304961, "smash jobs": 0, "triage jobs": 0, "vm output": 55347144, "vm restarts [base]": 135, "vm restarts [new]": 383 } 2026/01/29 12:09:56 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:10:12 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:10:15 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:10:25 runner 2 connected 2026/01/29 12:10:27 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:10:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:10:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:10:45 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:10:46 runner 0 connected 2026/01/29 12:11:01 runner 1 connected 2026/01/29 12:11:04 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:11:05 runner 6 connected 2026/01/29 12:11:20 runner 8 connected 2026/01/29 12:11:31 runner 7 connected 2026/01/29 12:11:32 patched 
crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:11:41 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:11:43 runner 2 connected 2026/01/29 12:11:51 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:11:56 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:12:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:12:20 runner 6 connected 2026/01/29 12:12:32 runner 8 connected 2026/01/29 12:12:40 runner 0 connected 2026/01/29 12:12:46 runner 1 connected 2026/01/29 12:12:51 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:12:52 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:13:01 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:13:08 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:13:10 runner 7 connected 2026/01/29 12:13:26 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:13:41 runner 6 connected 2026/01/29 12:13:42 runner 8 connected 2026/01/29 12:13:46 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:13:52 runner 2 connected 2026/01/29 12:13:58 runner 0 connected 2026/01/29 12:14:15 runner 1 connected 2026/01/29 12:14:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:14:18 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:14:24 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:14:35 runner 7 connected 2026/01/29 12:14:45 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:14:51 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:14:56 STAT { "buffer too small": 0, "candidate triage jobs": 5, "candidates": 35596, "comps overflows": 0, "corpus": 46857, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 31618, 
"coverage": 308590, "distributor delayed": 70672, "distributor undelayed": 70672, "distributor violated": 1161, "exec candidate": 47675, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 125313, "exec total [new]": 282836, "exec triage": 144900, "executor restarts [base]": 669, "executor restarts [new]": 1714, "fault jobs": 0, "fuzzer jobs": 5, "fuzzing VMs [base]": 0, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311023, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 0, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47488, "no exec duration": 51207000000, "no exec requests": 417, "pending": 1, "prog exec time": 523, "reproducing": 4, "rpc recv": 25163255348, "rpc sent": 2331143944, "signal": 304975, "smash jobs": 0, "triage jobs": 0, "vm output": 56856939, "vm restarts [base]": 144, "vm restarts [new]": 392 } 2026/01/29 12:14:56 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:15:05 runner 6 connected 2026/01/29 12:15:06 runner 8 connected 2026/01/29 12:15:12 runner 0 connected 2026/01/29 12:15:33 runner 2 connected 2026/01/29 12:15:41 runner 1 connected 2026/01/29 12:15:42 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:15:45 runner 7 connected 2026/01/29 12:16:06 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:16:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:16:18 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:16:31 runner 0 connected 2026/01/29 12:16:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:16:44 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize 
report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:16:55 runner 2 connected 2026/01/29 12:16:59 runner 6 connected 2026/01/29 12:17:01 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:17:14 runner 1 connected 2026/01/29 12:17:20 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:17:20 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:17:27 runner 7 connected 2026/01/29 12:17:38 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:17:51 runner 8 connected 2026/01/29 12:17:59 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:18:09 runner 2 connected 2026/01/29 12:18:10 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:18:10 runner 0 connected 2026/01/29 12:18:46 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:18:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:18:55 runner 6 connected 2026/01/29 12:19:03 base crash: kernel BUG in hpage_collapse_scan_file 2026/01/29 12:19:04 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:19:23 patched 
crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:19:35 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory 2026/01/29 12:19:42 runner 1 connected 2026/01/29 12:19:43 runner 7 connected 2026/01/29 12:19:53 runner 2 connected 2026/01/29 12:19:56 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 35492, "comps overflows": 0, "corpus": 46916, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 32343, "coverage": 308694, "distributor delayed": 70803, "distributor undelayed": 70797, "distributor violated": 1163, "exec candidate": 47779, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 21, "exec seeds": 0, "exec smash": 0, "exec total [base]": 128757, "exec total [new]": 287732, "exec triage": 145134, "executor restarts [base]": 699, "executor restarts [new]": 1747, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311167, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47559, "no exec duration": 51207000000, "no exec requests": 417, "pending": 1, "prog exec time": 215, "reproducing": 4, "rpc recv": 25791674164, "rpc sent": 2380140056, "signal": 305078, "smash jobs": 0, "triage jobs": 0, "vm output": 58359017, "vm restarts [base]": 154, "vm restarts [new]": 400 } 2026/01/29 12:20:19 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false] 2026/01/29 12:20:20 runner 6 connected 2026/01/29 12:20:35 base crash: kernel BUG in 
hpage_collapse_scan_file
2026/01/29 12:20:57 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:21:17 runner 7 connected
2026/01/29 12:21:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:21:21 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:21:22 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:21:24 runner 0 connected
2026/01/29 12:21:43 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:21:48 runner 8 connected
2026/01/29 12:21:51 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:22:06 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:22:11 runner 2 connected
2026/01/29 12:22:11 runner 1 connected
2026/01/29 12:22:17 runner 6 connected
2026/01/29 12:22:33 runner 0 connected
2026/01/29 12:22:42 runner 7 connected
2026/01/29 12:22:43 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:22:46 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:23:06 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:23:11 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:23:16 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:23:17 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:23:18 reproducing crash 'KASAN: slab-use-after-free Read in jfs_lazycommit': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:23:18 repro finished 'KASAN: slab-use-after-free Read in jfs_lazycommit', repro=true crepro=false desc='KASAN: slab-use-after-free Write in txEnd' hub=false from_dashboard=false
2026/01/29 12:23:18 start reproducing 'KASAN: slab-use-after-free Read in jfs_syncpt'
2026/01/29 12:23:18 found repro for "KASAN: slab-use-after-free Write in txEnd" (orig title: "KASAN: slab-use-after-free Read in jfs_lazycommit", reliability: 1), took 33.37 minutes
2026/01/29 12:23:18 "KASAN: slab-use-after-free Write in txEnd": saved crash log into 1769689398.crash.log
2026/01/29 12:23:18 "KASAN: slab-use-after-free Write in txEnd": saved repro log into 1769689398.repro.log
2026/01/29 12:23:40 runner 1 connected
2026/01/29 12:23:57 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:24:05 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:24:07 runner 2 connected
2026/01/29 12:24:07 runner 6 connected
2026/01/29 12:24:12 runner 8 connected
2026/01/29 12:24:13 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:24:54 runner 7 connected
2026/01/29 12:24:55 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:24:56 STAT { "buffer too small": 0, "candidate triage jobs": 7, "candidates": 35420, "comps overflows": 0, "corpus": 46941, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 32989, "coverage": 308761, "distributor delayed": 70873, "distributor undelayed": 70867, "distributor violated": 1163, "exec candidate": 47851, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 22, "exec seeds": 0, "exec smash": 0, "exec total [base]": 132230, "exec total [new]": 291790, "exec triage": 145278, "executor restarts [base]": 728, "executor restarts [new]": 1776, "fault jobs": 0, "fuzzer jobs": 7, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 2, "hints jobs": 0, "max signal": 311260, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 5, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47598, "no exec duration": 51207000000, "no exec requests": 417, "pending": 1, "prog exec time": 232, "reproducing": 4, "rpc recv": 26322444972, "rpc sent": 2421724056, "signal": 305145, "smash jobs": 0, "triage jobs": 0, "vm output": 60322411, "vm restarts [base]": 160, "vm restarts [new]": 408 }
2026/01/29 12:25:03 runner 1 connected
2026/01/29 12:25:07 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:25:17 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:25:21 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:25:25 attempt #0 to run "KASAN: slab-use-after-free Write in txEnd" on base: crashed with general protection fault in lmLogSync
2026/01/29 12:25:25 crashes both: KASAN: slab-use-after-free Write in txEnd / general protection fault in lmLogSync
2026/01/29 12:25:25 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:25:47 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:25:56 runner 6 connected
2026/01/29 12:26:08 runner 8 connected
2026/01/29 12:26:10 runner 7 connected
2026/01/29 12:26:15 runner 2 connected
2026/01/29 12:26:23 runner 0 connected
2026/01/29 12:26:28 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:26:36 runner 1 connected
2026/01/29 12:26:41 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:27:17 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:27:24 runner 6 connected
2026/01/29 12:27:26 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:27:39 runner 2 connected
2026/01/29 12:27:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:28:14 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:28:22 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:28:24 runner 1 connected
2026/01/29 12:28:46 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:28:48 runner 8 connected
2026/01/29 12:28:58 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:29:10 runner 0 connected
2026/01/29 12:29:19 runner 6 connected
2026/01/29 12:29:34 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:29:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:29:41 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_txnmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:29:44 runner 1 connected
2026/01/29 12:29:54 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:29:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 35325, "comps overflows": 0, "corpus": 46987, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 33791, "coverage": 308838, "distributor delayed": 70960, "distributor undelayed": 70960, "distributor violated": 1169, "exec candidate": 47946, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 23, "exec seeds": 0, "exec smash": 0, "exec total [base]": 136432, "exec total [new]": 296667, "exec triage": 145484, "executor restarts [base]": 755, "executor restarts [new]": 1805, "fault jobs": 0, "fuzzer jobs": 0, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 0, "hints jobs": 0, "max signal": 311334, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47651, "no exec duration": 51256000000, "no exec requests": 419, "pending": 1, "prog exec time": 157, "reproducing": 4, "rpc recv": 26899316268, "rpc sent": 2466869008, "signal": 305218, "smash jobs": 0, "triage jobs": 0, "vm output": 62035219, "vm restarts [base]": 168, "vm restarts [new]": 414 }
2026/01/29 12:30:08 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:30:15 reproducing crash 'KASAN: slab-use-after-free Read in jfs_syncpt': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/jfs/jfs_logmgr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 12:30:15 repro finished 'KASAN: slab-use-after-free Read in jfs_syncpt', repro=true crepro=false desc='KASAN: slab-use-after-free Read in jfs_syncpt' hub=false from_dashboard=false
2026/01/29 12:30:15 found repro for "KASAN: slab-use-after-free Read in jfs_syncpt" (orig title: "-SAME-", reliability: 1), took 6.73 minutes
2026/01/29 12:30:15 "KASAN: slab-use-after-free Read in jfs_syncpt": saved crash log into 1769689815.crash.log
2026/01/29 12:30:15 "KASAN: slab-use-after-free Read in jfs_syncpt": saved repro log into 1769689815.repro.log
2026/01/29 12:30:17 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:30:32 runner 8 connected
2026/01/29 12:30:32 runner 7 connected
2026/01/29 12:30:33 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:30:42 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 12:30:42 failed repro for "INFO: task hung in corrupted", err=%!s()
2026/01/29 12:30:42 "INFO: task hung in corrupted": saved crash log into 1769689842.crash.log
2026/01/29 12:30:42 "INFO: task hung in corrupted": saved repro log into 1769689842.repro.log
2026/01/29 12:30:44 runner 6 connected
2026/01/29 12:30:58 runner 1 connected
2026/01/29 12:31:04 runner 0 connected
2026/01/29 12:31:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:31:09 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:31:10 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:31:23 runner 2 connected
2026/01/29 12:31:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:31:31 attempt #0 to run "KASAN: slab-use-after-free Read in jfs_syncpt" on base: crashed with KASAN: slab-use-after-free Read in jfs_syncpt
2026/01/29 12:31:31 crashes both: KASAN: slab-use-after-free Read in jfs_syncpt / KASAN: slab-use-after-free Read in jfs_syncpt
2026/01/29 12:31:31 runner 2 connected
2026/01/29 12:31:51 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:31:59 runner 8 connected
2026/01/29 12:31:59 runner 7 connected
2026/01/29 12:31:59 runner 6 connected
2026/01/29 12:32:02 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:32:13 runner 0 connected
2026/01/29 12:32:19 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:32:19 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:32:21 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:32:22 runner 0 connected
2026/01/29 12:32:30 runner 1 connected
2026/01/29 12:32:37 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:32:41 runner 2 connected
2026/01/29 12:32:52 runner 1 connected
2026/01/29 12:33:15 runner 7 connected
2026/01/29 12:33:15 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:33:16 runner 8 connected
2026/01/29 12:33:18 patched crashed: WARNING in iomap_zero_range [need repro = false]
2026/01/29 12:33:19 runner 2 connected
2026/01/29 12:33:20 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:33:26 runner 6 connected
2026/01/29 12:33:32 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:33:34 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:33:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:33:42 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:34:05 runner 0 connected
2026/01/29 12:34:07 runner 1 connected
2026/01/29 12:34:16 runner 0 connected
2026/01/29 12:34:25 runner 2 connected
2026/01/29 12:34:29 runner 1 connected
2026/01/29 12:34:31 runner 7 connected
2026/01/29 12:34:31 runner 2 connected
2026/01/29 12:34:56 STAT { "buffer too small": 0, "candidate triage jobs": 4, "candidates": 35206, "comps overflows": 0, "corpus": 47033, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 34637, "coverage": 308950, "distributor delayed": 71069, "distributor undelayed": 71069, "distributor violated": 1173, "exec candidate": 48065, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 23, "exec seeds": 0, "exec smash": 0, "exec total [base]": 139993, "exec total [new]": 302273, "exec triage": 145688, "executor restarts [base]": 783, "executor restarts [new]": 1861, "fault jobs": 0, "fuzzer jobs": 4, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 0, "max signal": 311470, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47712, "no exec duration": 51256000000, "no exec requests": 419, "pending": 1, "prog exec time": 261, "reproducing": 2, "rpc recv": 27812084804, "rpc sent": 2542963568, "signal": 305328, "smash jobs": 0, "triage jobs": 0, "vm output": 63091923, "vm restarts [base]": 176, "vm restarts [new]": 432 }
2026/01/29 12:35:03 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:35:23 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:35:32 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:35:35 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:35:42 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:35:48 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:35:51 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:36:00 runner 8 connected
2026/01/29 12:36:12 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:36:12 runner 6 connected
2026/01/29 12:36:22 runner 2 connected
2026/01/29 12:36:31 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:36:32 runner 0 connected
2026/01/29 12:36:33 runner 7 connected
2026/01/29 12:36:36 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:36:36 runner 2 connected
2026/01/29 12:36:41 runner 0 connected
2026/01/29 12:36:47 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:37:10 runner 1 connected
2026/01/29 12:37:11 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:37:22 runner 1 connected
2026/01/29 12:37:33 runner 8 connected
2026/01/29 12:37:41 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:37:45 runner 6 connected
2026/01/29 12:38:07 runner 2 connected
2026/01/29 12:38:38 runner 0 connected
2026/01/29 12:39:07 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:39:56 STAT { "buffer too small": 0, "candidate triage jobs": 1, "candidates": 21762, "comps overflows": 0, "corpus": 47082, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 36849, "coverage": 309056, "distributor delayed": 71213, "distributor undelayed": 71213, "distributor violated": 1178, "exec candidate": 61509, "exec collide": 0, "exec fuzz": 0, "exec gen": 0, "exec hints": 0, "exec inject": 0, "exec minimize": 0, "exec retries": 24, "exec seeds": 0, "exec smash": 0, "exec total [base]": 145702, "exec total [new]": 318473, "exec triage": 145998, "executor restarts [base]": 807, "executor restarts [new]": 1906, "fault jobs": 0, "fuzzer jobs": 1, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 6, "hints jobs": 0, "max signal": 311644, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 8, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47790, "no exec duration": 52407000000, "no exec requests": 423, "pending": 1, "prog exec time": 218, "reproducing": 2, "rpc recv": 28358633416, "rpc sent": 2631164584, "signal": 305425, "smash jobs": 0, "triage jobs": 0, "vm output": 64862282, "vm restarts [base]": 181, "vm restarts [new]": 440 }
2026/01/29 12:40:05 runner 1 connected
2026/01/29 12:40:50 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:41:47 runner 2 connected
2026/01/29 12:41:55 repro finished 'INFO: task hung in reg_process_self_managed_hints', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 12:41:55 failed repro for "INFO: task hung in reg_process_self_managed_hints", err=%!s()
2026/01/29 12:41:55 "INFO: task hung in reg_process_self_managed_hints": saved crash log into 1769690515.crash.log
2026/01/29 12:41:55 "INFO: task hung in reg_process_self_managed_hints": saved repro log into 1769690515.repro.log
2026/01/29 12:41:55 start reproducing 'INFO: task hung in reg_process_self_managed_hints'
2026/01/29 12:42:56 triaged 90.6% of the corpus
2026/01/29 12:44:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 0, "corpus": 47180, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 40576, "coverage": 309416, "distributor delayed": 71563, "distributor undelayed": 71562, "distributor violated": 1178, "exec candidate": 83271, "exec collide": 252, "exec fuzz": 453, "exec gen": 21, "exec hints": 18, "exec inject": 0, "exec minimize": 74, "exec retries": 26, "exec seeds": 11, "exec smash": 26, "exec total [base]": 156029, "exec total [new]": 341847, "exec triage": 146754, "executor restarts [base]": 826, "executor restarts [new]": 1937, "fault jobs": 0, "fuzzer jobs": 18, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 6, "hints jobs": 4, "max signal": 312211, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 61, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 47980, "no exec duration": 52716000000, "no exec requests": 428, "pending": 0, "prog exec time": 355, "reproducing": 2, "rpc recv": 28653820304, "rpc sent": 2744649168, "signal": 305697, "smash jobs": 4, "triage jobs": 10, "vm output": 66974042, "vm restarts [base]": 183, "vm restarts [new]": 440 }
2026/01/29 12:48:11 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:48:48 base crash: possible deadlock in ocfs2_try_remove_refcount_tree
2026/01/29 12:49:09 runner 1 connected
2026/01/29 12:49:22 patched crashed: possible deadlock in ocfs2_try_remove_refcount_tree [need repro = false]
2026/01/29 12:49:33 repro finished 'INFO: task hung in switchdev_deferred_process_work', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 12:49:33 failed repro for "INFO: task hung in switchdev_deferred_process_work", err=%!s()
2026/01/29 12:49:33 "INFO: task hung in switchdev_deferred_process_work": saved crash log into 1769690973.crash.log
2026/01/29 12:49:33 "INFO: task hung in switchdev_deferred_process_work": saved repro log into 1769690973.repro.log
2026/01/29 12:49:41 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2026/01/29 12:49:46 runner 0 connected
2026/01/29 12:49:55 patched crashed: possible deadlock in ocfs2_setattr [need repro = true]
2026/01/29 12:49:55 scheduled a reproduction of 'possible deadlock in ocfs2_setattr'
2026/01/29 12:49:55 start reproducing 'possible deadlock in ocfs2_setattr'
2026/01/29 12:49:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 33, "corpus": 47258, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 42809, "coverage": 309631, "distributor delayed": 71869, "distributor undelayed": 71868, "distributor violated": 1178, "exec candidate": 83271, "exec collide": 1822, "exec fuzz": 3429, "exec gen": 168, "exec hints": 1758, "exec inject": 0, "exec minimize": 2034, "exec retries": 26, "exec seeds": 240, "exec smash": 1969, "exec total [base]": 161992, "exec total [new]": 353029, "exec triage": 147376, "executor restarts [base]": 847, "executor restarts [new]": 1970, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 2, "hints jobs": 18, "max signal": 312754, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1047, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48183, "no exec duration": 58500000000, "no exec requests": 443, "pending": 0, "prog exec time": 441, "reproducing": 2, "rpc recv": 29010354240, "rpc sent": 2981004120, "signal": 305865, "smash jobs": 13, "triage jobs": 9, "vm output": 70985021, "vm restarts [base]": 185, "vm restarts [new]": 440 }
2026/01/29 12:50:07 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2026/01/29 12:50:19 runner 2 connected
2026/01/29 12:50:29 runner 3 connected
2026/01/29 12:50:39 runner 6 connected
2026/01/29 12:50:53 runner 8 connected
2026/01/29 12:51:05 runner 0 connected
2026/01/29 12:51:43 base crash: WARNING in dbAdjTree
2026/01/29 12:52:41 runner 0 connected
2026/01/29 12:53:18 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2026/01/29 12:53:52 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2026/01/29 12:54:16 runner 7 connected
2026/01/29 12:54:50 runner 1 connected
2026/01/29 12:54:50 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2026/01/29 12:54:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 89, "corpus": 47316, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 44327, "coverage": 309749, "distributor delayed": 72050, "distributor undelayed": 72050, "distributor violated": 1178, "exec candidate": 83271, "exec collide": 2984, "exec fuzz": 5658, "exec gen": 297, "exec hints": 3829, "exec inject": 0, "exec minimize": 3521, "exec retries": 27, "exec seeds": 401, "exec smash": 3256, "exec total [base]": 166475, "exec total [new]": 361942, "exec triage": 147750, "executor restarts [base]": 870, "executor restarts [new]": 2011, "fault jobs": 0, "fuzzer jobs": 57, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 5, "hints jobs": 28, "max signal": 312993, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 1841, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48306, "no exec duration": 62305000000, "no exec requests": 450, "pending": 0, "prog exec time": 489, "reproducing": 2, "rpc recv": 29523920844, "rpc sent": 3168717888, "signal": 305971, "smash jobs": 20, "triage jobs": 9, "vm output": 76186233, "vm restarts [base]": 187, "vm restarts [new]": 446 }
2026/01/29 12:55:48 runner 2 connected
2026/01/29 12:59:10 patched crashed: possible deadlock in ext4_writepages [need repro = false]
2026/01/29 12:59:43 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 12:59:45 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 12:59:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 103, "corpus": 47370, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 45746, "coverage": 309888, "distributor delayed": 72241, "distributor undelayed": 72225, "distributor violated": 1184, "exec candidate": 83271, "exec collide": 4385, "exec fuzz": 8311, "exec gen": 439, "exec hints": 6516, "exec inject": 0, "exec minimize": 4756, "exec retries": 28, "exec seeds": 564, "exec smash": 4601, "exec total [base]": 171346, "exec total [new]": 371886, "exec triage": 148074, "executor restarts [base]": 890, "executor restarts [new]": 2039, "fault jobs": 0, "fuzzer jobs": 62, "fuzzing VMs [base]": 2, "fuzzing VMs [new]": 4, "hints jobs": 24, "max signal": 313295, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 2487, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48423, "no exec duration": 68667000000, "no exec requests": 461, "pending": 0, "prog exec time": 348, "reproducing": 2, "rpc recv": 29840833512, "rpc sent": 3350876456, "signal": 306107, "smash jobs": 17, "triage jobs": 21, "vm output": 79845005, "vm restarts [base]": 188, "vm restarts [new]": 446 }
2026/01/29 13:00:08 runner 1 connected
2026/01/29 13:00:32 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2026/01/29 13:00:39 runner 6 connected
2026/01/29 13:00:43 runner 2 connected
2026/01/29 13:01:15 patched crashed: WARNING in dbAdjTree [need repro = false]
2026/01/29 13:01:15 patched crashed: INFO: task hung in corrupted [need repro = true]
2026/01/29 13:01:15 scheduled a reproduction of 'INFO: task hung in corrupted'
2026/01/29 13:01:15 start reproducing 'INFO: task hung in corrupted'
2026/01/29 13:01:28 runner 8 connected
2026/01/29 13:01:42 base crash: WARNING in hci_conn_timeout
2026/01/29 13:01:47 patched crashed: INFO: task hung in corrupted [need repro = true]
2026/01/29 13:01:47 scheduled a reproduction of 'INFO: task hung in corrupted'
2026/01/29 13:02:12 runner 3 connected
2026/01/29 13:02:13 runner 6 connected
2026/01/29 13:02:21 repro finished 'possible deadlock in ocfs2_setattr', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 13:02:21 failed repro for "possible deadlock in ocfs2_setattr", err=%!s()
2026/01/29 13:02:21 "possible deadlock in ocfs2_setattr": saved crash log into 1769691741.crash.log
2026/01/29 13:02:21 "possible deadlock in ocfs2_setattr": saved repro log into 1769691741.repro.log
2026/01/29 13:02:39 runner 1 connected
2026/01/29 13:02:45 runner 7 connected
2026/01/29 13:03:13 base crash: possible deadlock in ext4_destroy_inline_data
2026/01/29 13:03:19 runner 0 connected
2026/01/29 13:03:28 patched crashed: kernel BUG in jfs_evict_inode [need repro = false]
2026/01/29 13:04:12 runner 2 connected
2026/01/29 13:04:18 base crash: possible deadlock in ocfs2_init_acl
2026/01/29 13:04:26 runner 8 connected
2026/01/29 13:04:38 patched crashed: kernel BUG in txUnlock [need repro = false]
2026/01/29 13:04:48 base crash: kernel BUG in jfs_evict_inode
2026/01/29 13:04:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 118, "corpus": 47399, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 46710, "coverage": 309938, "distributor delayed": 72390, "distributor undelayed": 72390, "distributor violated": 1185, "exec candidate": 83271, "exec collide": 5185, "exec fuzz": 9845, "exec gen": 515, "exec hints": 7976, "exec inject": 0, "exec minimize": 5658, "exec retries": 30, "exec seeds": 653, "exec smash": 5468, "exec total [base]": 175677, "exec total [new]": 377868, "exec triage": 148318, "executor restarts [base]": 903, "executor restarts [new]": 2089, "fault jobs": 0, "fuzzer jobs": 40, "fuzzing VMs [base]": 1, "fuzzing VMs [new]": 5, "hints jobs": 18, "max signal": 313486, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3024, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48500, "no exec duration": 71924000000, "no exec requests": 469, "pending": 1, "prog exec time": 526, "reproducing": 2, "rpc recv": 30411750692, "rpc sent": 3511864432, "signal": 306154, "smash jobs": 9, "triage jobs": 13, "vm output": 82644408, "vm restarts [base]": 191, "vm restarts [new]": 454 }
2026/01/29 13:05:16 runner 0 connected
2026/01/29 13:05:19 base crash: kernel BUG in txUnlock
2026/01/29 13:05:36 runner 6 connected
2026/01/29 13:05:45 runner 1 connected
2026/01/29 13:06:14 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2026/01/29 13:06:16 runner 2 connected
2026/01/29 13:06:26 patched crashed: possible deadlock in ocfs2_reserve_suballoc_bits [need repro = false]
2026/01/29 13:07:10 runner 8 connected
2026/01/29 13:07:24 runner 0 connected
2026/01/29 13:07:55 base crash: WARNING in cm109_input_open/usb_submit_urb
2026/01/29 13:08:39 base crash: possible deadlock in ocfs2_reserve_suballoc_bits
2026/01/29 13:08:51 runner 1 connected
2026/01/29 13:09:43 runner 0 connected
2026/01/29 13:09:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 163, "corpus": 47461, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 48249, "coverage": 310123, "distributor delayed": 72581, "distributor undelayed": 72581, "distributor violated": 1186, "exec candidate": 83271, "exec collide": 6275, "exec fuzz": 11827, "exec gen": 621, "exec hints": 9660, "exec inject": 0, "exec minimize": 7158, "exec retries": 30, "exec seeds": 813, "exec smash": 6802, "exec total [base]": 178532, "exec total [new]": 386106, "exec triage": 148697, "executor restarts [base]": 932, "executor restarts [new]": 2135, "fault jobs": 0, "fuzzer jobs": 68, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 31, "max signal": 313994, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 3840, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48634, "no exec duration": 77023000000, "no exec requests": 477, "pending": 1, "prog exec time": 616, "reproducing": 2, "rpc recv": 30886337844, "rpc sent": 3691417944, "signal": 306326, "smash jobs": 28, "triage jobs": 9, "vm output": 87273159, "vm restarts [base]": 196, "vm restarts [new]": 457 }
2026/01/29 13:10:06 patched crashed: possible deadlock in ocfs2_init_acl [need repro = false]
2026/01/29 13:11:02 runner 3 connected
2026/01/29 13:11:07 patched crashed: WARNING in hci_conn_timeout [need repro = false]
2026/01/29 13:11:38 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 13:11:39 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 13:11:50 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 13:12:05 runner 7 connected
2026/01/29 13:12:35 runner 3 connected
2026/01/29 13:12:36 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 13:12:36 runner 8 connected
2026/01/29 13:12:38 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 13:12:45 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 13:12:47 runner 6 connected
2026/01/29 13:13:22 patched crashed: KASAN: out-of-bounds Read in ext4_xattr_set_entry [need repro = true]
2026/01/29 13:13:22 scheduled a reproduction of 'KASAN: out-of-bounds Read in ext4_xattr_set_entry'
2026/01/29 13:13:22 start reproducing 'KASAN: out-of-bounds Read in ext4_xattr_set_entry'
2026/01/29 13:13:32 runner 1 connected
2026/01/29 13:13:43 runner 0 connected
2026/01/29 13:13:59 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 13:14:20 runner 2 connected
2026/01/29 13:14:56 STAT { "buffer too small": 0, "candidate triage jobs": 0, "candidates": 0, "comps overflows": 216, "corpus": 47490, "corpus [files]": 0, "corpus [symbols]": 0, "cover overflows": 49325, "coverage": 310267, "distributor delayed": 72716, "distributor undelayed": 72716, "distributor violated": 1186, "exec candidate": 83271, "exec collide": 6908, "exec fuzz": 13006, "exec gen": 694, "exec hints": 10635, "exec inject": 0, "exec minimize": 7867, "exec retries": 31, "exec seeds": 916, "exec smash": 7602, "exec total [base]": 181762, "exec total [new]": 390792, "exec triage": 148906, "executor restarts [base]": 960, "executor restarts [new]": 2192, "fault jobs": 0, "fuzzer jobs": 43, "fuzzing VMs [base]": 3, "fuzzing VMs [new]": 5, "hints jobs": 26, "max signal": 314205, "minimize: array": 0, "minimize: buffer": 0, "minimize: call": 4252, "minimize: filename": 0, "minimize: integer": 0, "minimize: pointer": 0, "minimize: props": 0, "minimize: resource": 0, "modules [base]": 1, "modules [new]": 1, "new inputs": 48702, "no exec duration": 80023000000, "no exec requests": 480, "pending": 1, "prog exec time": 832, "reproducing": 3, "rpc recv": 31383719696, "rpc sent": 3875486656, "signal": 306423, "smash jobs": 11, "triage jobs": 6, "vm output": 90670376, "vm restarts [base]": 198, "vm restarts [new]": 463 }
2026/01/29 13:16:17 base crash: KASAN: out-of-bounds Read in ext4_xattr_set_entry
2026/01/29 13:16:35 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 13:16:58 patched crashed: kernel BUG in hpage_collapse_scan_file [need repro = false]
2026/01/29 13:17:15 runner 1 connected
2026/01/29 13:17:21 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 13:17:32 repro finished 'INFO: task hung in reg_process_self_managed_hints', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 13:17:32 failed repro for "INFO: task hung in reg_process_self_managed_hints", err=%!s()
2026/01/29 13:17:32 "INFO: task hung in reg_process_self_managed_hints": saved crash log into 1769692652.crash.log
2026/01/29 13:17:32 "INFO: task hung in reg_process_self_managed_hints": saved repro log into 1769692652.repro.log
2026/01/29 13:17:56 runner 6 connected
2026/01/29 13:18:00 base crash: kernel BUG in hpage_collapse_scan_file
2026/01/29 13:18:18 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 13:18:49 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 13:18:58 runner 1 connected
2026/01/29 13:19:16 reproducing crash 'KASAN: out-of-bounds Read in ext4_xattr_set_entry': failed to symbolize report: failed to start scripts/get_maintainer.pl [scripts/get_maintainer.pl --git-min-percent=15 -f fs/ext4/xattr.c]: fork/exec scripts/get_maintainer.pl: no such file or directory
2026/01/29 13:19:46 base crash: INFO: task hung in corrupted
2026/01/29 13:19:52 bug reporting terminated
2026/01/29 13:19:52 status reporting terminated
2026/01/29 13:19:52 base: rpc server terminated
2026/01/29 13:19:52 new: rpc server terminated
2026/01/29 13:20:03 repro finished 'KASAN: out-of-bounds Read in ext4_xattr_set_entry', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 13:20:35 base: pool terminated
2026/01/29 13:20:35 base: kernel context loop terminated
2026/01/29 13:22:36 repro finished 'INFO: task hung in corrupted', repro=false crepro=false desc='' hub=false from_dashboard=false
2026/01/29 13:22:36 repro loop terminated
2026/01/29 13:22:36 new: pool terminated
2026/01/29 13:22:36 new: kernel context loop terminated
2026/01/29 13:22:36 diff fuzzing terminated
2026/01/29 13:22:36 fuzzing is finished
2026/01/29 13:22:36 status at the end:
Title On-Base On-Patched
INFO: task hung in __iterate_supers 1 crashes
INFO: task hung in corrupted 1 crashes 3 crashes
INFO: task hung in evict 1 crashes 1 crashes
INFO: task hung in reg_process_self_managed_hints 2 crashes
INFO: task hung in switchdev_deferred_process_work 1 crashes
INFO: trying to register non-static key in ocfs2_dlm_shutdown 1 crashes
KASAN: out-of-bounds Read in ext4_xattr_set_entry 1 crashes 1 crashes
KASAN: slab-use-after-free Read in jfs_lazycommit 1 crashes
KASAN: slab-use-after-free Read in jfs_syncpt 1 crashes 1 crashes [reproduced]
KASAN: slab-use-after-free Write in txEnd [reproduced]
WARNING in cm109_input_open/usb_submit_urb 1 crashes
WARNING in dbAdjTree 1 crashes 1 crashes
WARNING in hci_conn_timeout 3 crashes 5 crashes
WARNING in iomap_zero_range 1 crashes 4 crashes
general protection fault in lmLogSync 1 crashes
kernel BUG in hpage_collapse_scan_file 165 crashes 390 crashes
kernel BUG in jfs_evict_inode 6 crashes 13 crashes
kernel BUG in txUnlock 2 crashes 6 crashes
possible deadlock in ext4_destroy_inline_data 2 crashes
possible deadlock in ext4_writepages 2 crashes 2 crashes
possible deadlock in ocfs2_del_inode_from_orphan 1 crashes
possible deadlock in ocfs2_init_acl 3 crashes 11 crashes
possible deadlock in ocfs2_reserve_suballoc_bits 5 crashes 7 crashes
possible deadlock in ocfs2_setattr 1 crashes
possible deadlock in ocfs2_try_remove_refcount_tree 2 crashes 4 crashes