In close_range(), the kernel currently performs a linear scan over the
[fd, max_fd] range, resulting in O(N) complexity where N is the range
size. For processes with sparse fd tables this is inefficient, since
many unallocated slots are checked.

Optimize __range_close() by using find_next_bit() on the open_fds
bitmap to skip the holes. This shifts the complexity from O(range size)
to O(open fds in the range), a significant win for large-range close
operations on sparse file descriptor tables.

Signed-off-by: Qiliang Yuan
---
 fs/file.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index 0a4f3bdb2dec..c7c3ee03f8df 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -777,13 +777,17 @@ static inline void __range_close(struct files_struct *files, unsigned int fd,
 				 unsigned int max_fd)
 {
 	struct file *file;
+	struct fdtable *fdt;
 	unsigned n;
 
 	spin_lock(&files->file_lock);
-	n = last_fd(files_fdtable(files));
+	fdt = files_fdtable(files);
+	n = last_fd(fdt);
 	max_fd = min(max_fd, n);
 
-	for (; fd <= max_fd; fd++) {
+	for (fd = find_next_bit(fdt->open_fds, max_fd + 1, fd);
+	     fd <= max_fd;
+	     fd = find_next_bit(fdt->open_fds, max_fd + 1, fd + 1)) {
 		file = file_close_fd_locked(files, fd);
 		if (file) {
 			spin_unlock(&files->file_lock);
--
2.51.0
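
For illustration only, not part of the patch: below is a minimal standalone
userspace sketch of the same bitmap-skipping loop shape. find_next_bit_sim(),
set_bit_sim() and close_fd_sim() are made-up stand-ins for the kernel's
find_next_bit(), __set_open_fd() and file_close_fd_locked(); the real
find_next_bit() scans word-at-a-time rather than bit-at-a-time. The sketch
only shows how the iteration visits allocated fds and skips holes.

/*
 * Userspace sketch of the bitmap-skipping iteration in the patched
 * __range_close(). Not kernel code; helpers are simplified stand-ins.
 */
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define NFDS		64

/* Return the index of the next set bit in [start, size), or size if none. */
static unsigned int find_next_bit_sim(const unsigned long *map,
				      unsigned int size, unsigned int start)
{
	for (unsigned int i = start; i < size; i++)
		if (map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
			return i;
	return size;
}

/* Mark fd i as open in the toy bitmap. */
static void set_bit_sim(unsigned long *map, unsigned int i)
{
	map[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
}

/* Stand-in for actually closing the file behind fd. */
static void close_fd_sim(unsigned int fd)
{
	printf("closing fd %u\n", fd);
}

int main(void)
{
	unsigned long open_fds[(NFDS + BITS_PER_LONG - 1) / BITS_PER_LONG] = { 0 };
	unsigned int fd = 3, max_fd = 60;

	/* Sparse table: only fds 4, 20 and 53 are open. */
	set_bit_sim(open_fds, 4);
	set_bit_sim(open_fds, 20);
	set_bit_sim(open_fds, 53);

	/* Same loop shape as the patched __range_close(). */
	for (fd = find_next_bit_sim(open_fds, max_fd + 1, fd);
	     fd <= max_fd;
	     fd = find_next_bit_sim(open_fds, max_fd + 1, fd + 1))
		close_fd_sim(fd);

	return 0;
}

With the sparse table above, the loop body runs three times instead of
inspecting all 58 slots in [3, 60].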