When Transparent Huge Pages (THP) are enabled in "always" mode,
file-backed read-only mappings can be backed by PMD-sized huge pages
if they meet the alignment and size requirements. For ELF executables
loaded by the kernel ELF binary loader, PT_LOAD segments are normally
aligned according to p_align, which is often only page-sized. As a
result, large read-only segments that are otherwise eligible may fail
to be mapped using PMD-sized THP.

A segment is considered eligible if:

 * THP is in "always" mode,
 * it is not writable,
 * both p_vaddr and p_offset are PMD-aligned,
 * its file size is at least PMD_SIZE, and
 * its existing p_align is smaller than PMD_SIZE.

To avoid excessive address space padding on systems with very large
PMD_SIZE values, this optimization is applied only when
PMD_SIZE <= 32MB, since requiring larger alignments would be
unreasonable, especially on 32-bit systems with a much more limited
virtual address space.

This increases the likelihood that large text segments of ELF
executables are backed by PMD-sized THP, reducing TLB pressure and
improving performance for large binaries.

This only affects ELF executables loaded directly by the kernel binary
loader. Shared libraries loaded by user space (e.g. via the dynamic
linker) are not affected.

Signed-off-by: WANG Rui
---
Changes since [v1]:

 * Dropped the Kconfig option CONFIG_ELF_RO_LOAD_THP_ALIGNMENT.
 * Moved the alignment logic into a helper align_to_pmd() for clarity.
 * Improved the comment explaining why we skip the optimization when
   PMD_SIZE > 32MB.
[v1]: https://lore.kernel.org/linux-fsdevel/20260302155046.286650-1-r@hev.cc
---
 fs/binfmt_elf.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb857faaf0d6..39bad27d8490 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -489,6 +490,30 @@ static int elf_read(struct file *file, void *buf, size_t len, loff_t pos)
 	return 0;
 }
 
+static inline bool align_to_pmd(const struct elf_phdr *cmd)
+{
+	/*
+	 * Avoid excessive virtual address space padding when PMD_SIZE is very
+	 * large (e.g. some 64K base-page configurations).
+	 */
+	if (PMD_SIZE > SZ_32M)
+		return false;
+
+	if (!hugepage_global_always())
+		return false;
+
+	if (!IS_ALIGNED(cmd->p_vaddr | cmd->p_offset, PMD_SIZE))
+		return false;
+
+	if (cmd->p_filesz < PMD_SIZE)
+		return false;
+
+	if (cmd->p_flags & PF_W)
+		return false;
+
+	return true;
+}
+
 static unsigned long maximum_alignment(struct elf_phdr *cmds, int nr)
 {
 	unsigned long alignment = 0;
@@ -501,6 +526,10 @@ static unsigned long maximum_alignment(struct elf_phdr *cmds, int nr)
 			/* skip non-power of two alignments as invalid */
 			if (!is_power_of_2(p_align))
 				continue;
+
+			if (align_to_pmd(&cmds[i]) && p_align < PMD_SIZE)
+				p_align = PMD_SIZE;
+
 			alignment = max(alignment, p_align);
 		}
 	}
-- 
2.53.0