From: Hui Zhu

swap_cluster_alloc_table() assumes that the caller holds the following
locks:

  ci->lock
  percpu_swap_cluster.lock
  si->global_cluster_lock (required for non-SWP_SOLIDSTATE devices)

There are five call paths leading to swap_cluster_alloc_table():

swap_alloc_hibernation_slot->cluster_alloc_swap_entry
  ->alloc_swap_scan_list->isolate_lock_cluster->swap_cluster_alloc_table

swap_alloc_slow->cluster_alloc_swap_entry->alloc_swap_scan_list
  ->isolate_lock_cluster->swap_cluster_alloc_table

swap_alloc_hibernation_slot->cluster_alloc_swap_entry
  ->swap_reclaim_full_clusters->isolate_lock_cluster
  ->swap_cluster_alloc_table

swap_alloc_slow->cluster_alloc_swap_entry->swap_reclaim_full_clusters
  ->isolate_lock_cluster->swap_cluster_alloc_table

swap_reclaim_work->swap_reclaim_full_clusters->isolate_lock_cluster
  ->swap_cluster_alloc_table

The first four paths correctly acquire the necessary locks before
calling swap_cluster_alloc_table().  swap_reclaim_work() does not need
to do so, however, because swap_reclaim_full_clusters() only isolates
clusters from si->full_clusters, and the swap tables of full clusters
must already be allocated.  Consequently, isolate_lock_cluster(), when
called from swap_reclaim_full_clusters(), will never reach
swap_cluster_alloc_table().

Add a VM_WARN_ON to warn if a full cluster is ever handled by the
swap-table allocation path in isolate_lock_cluster().

Signed-off-by: Hui Zhu
---
 mm/swapfile.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 94af29d1de88..3fc2eb30c187 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -596,6 +596,9 @@ static struct swap_cluster_info *isolate_lock_cluster(
 	spin_unlock(&si->lock);
 
 	if (found && !cluster_table_is_alloced(found)) {
+		/* The swap table of a full cluster must already be allocated. */
+		VM_WARN_ON(found->flags == CLUSTER_FLAG_FULL);
+
 		/* Only an empty free cluster's swap table can be freed. */
 		VM_WARN_ON_ONCE(list != &si->free_clusters);
 		VM_WARN_ON_ONCE(!cluster_is_empty(found));
-- 
2.43.0