The comment for sheaf_capacity says that sheaves do not enforce NUMA
placement, but that has not been true since commit 4ec1a08d2031 ("slab:
allow NUMA restricted allocations to use percpu sheaves").

Let's update the comment.

Signed-off-by: Kuniyuki Iwashima
---
 include/linux/slab.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 15a60b501b95..7477109eb315 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -359,9 +359,8 @@ struct kmem_cache_args {
  * may replace it with an empty sheaf, unless it's over capacity. In
  * that case a sheaf is bulk freed to slab pages.
  *
- * The sheaves do not enforce NUMA placement of objects, so allocations
- * via kmem_cache_alloc_node() with a node specified other than
- * NUMA_NO_NODE will bypass them.
+ * The sheaves try to enforce NUMA placement of objects, but an
+ * allocation may fall back to the normal slab operation.
  *
  * Bulk allocation and free operations also try to use the cpu sheaves
  * and barn, but fallback to using slab pages directly.
-- 
2.53.0.473.g4a7958ca14-goog