path: root/tools/testing/selftests/mm
Age  Commit message  Author  Files  Lines
2025-09-28selftests/mm: add fork inheritance test for ksm_merging_pages counterDonet Tom1-1/+42
Add a new selftest to verify that the `ksm_merging_pages` counter in `mm_struct` is not inherited by a child process after fork. This helps ensure correctness of KSM accounting across process creation. Link: https://lkml.kernel.org/r/e7bb17d374133bd31a3e423aa9e46e1122e74971.1758648700.git.donettom@linux.ibm.com Signed-off-by: Donet Tom <donettom@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Aboorva Devarajan <aboorvad@linux.ibm.com> Cc: Chengming Zhou <chengming.zhou@linux.dev> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Wei Yang <richard.weiyang@gmail.com> Cc: xu xin <xu.xin16@zte.com.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
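For illustration, a minimal sketch of such a check, assuming the per-process /proc/<pid>/ksm_merging_pages interface; the helper below is hypothetical and the real selftest's structure may differ:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Read this process's ksm_merging_pages counter from procfs. */
  static long read_ksm_merging_pages(void)
  {
      long val = -1;
      FILE *f = fopen("/proc/self/ksm_merging_pages", "r");

      if (!f)
          return -1;
      if (fscanf(f, "%ld", &val) != 1)
          val = -1;
      fclose(f);
      return val;
  }

  int main(void)
  {
      int status;
      pid_t pid;

      /* ... merge some pages via KSM in the parent first ... */
      pid = fork();
      if (pid == 0)
          /* Child: the counter must start at 0, not inherit the parent's. */
          exit(read_ksm_merging_pages() == 0 ? 0 : 1);
      waitpid(pid, &status, 0);
      printf("counter not inherited across fork: %s\n",
             WEXITSTATUS(status) == 0 ? "ok" : "fail");
      return WEXITSTATUS(status);
  }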
2025-09-23selftests/mm: skip soft-dirty tests when CONFIG_MEM_SOFT_DIRTY is disabledLance Yang4-20/+24
The madv_populate and soft-dirty kselftests currently fail on systems where CONFIG_MEM_SOFT_DIRTY is disabled. Introduce a new helper softdirty_supported() into vm_util.c/h to ensure tests are properly skipped when the feature is not enabled. Link: https://lkml.kernel.org/r/20250917133137.62802-1-lance.yang@linux.dev Fixes: 9f3265db6ae8 ("selftests: vm: add test for Soft-Dirty PTE bit") Signed-off-by: Lance Yang <lance.yang@linux.dev> Acked-by: David Hildenbrand <david@redhat.com> Suggested-by: David Hildenbrand <david@redhat.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Gabriel Krisman Bertazi <krisman@collabora.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
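A hedged sketch of how such a probe can work, checking whether the soft-dirty bit (bit 55 of a /proc/self/pagemap entry) is set after writing to a fresh mapping; the actual softdirty_supported() in vm_util.c may detect support differently:

  #include <fcntl.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define PM_SOFT_DIRTY (1ULL << 55)

  /* Probe soft-dirty support: a written page should have bit 55 set. */
  static bool softdirty_supported(void)
  {
      size_t pagesize = getpagesize();
      uint64_t entry = 0;
      bool supported = false;
      char *page;
      int fd;

      page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (page == MAP_FAILED)
          return false;
      memset(page, 1, pagesize);          /* dirty the page */

      fd = open("/proc/self/pagemap", O_RDONLY);
      if (fd >= 0) {
          off_t off = ((uintptr_t)page / pagesize) * sizeof(entry);

          if (pread(fd, &entry, sizeof(entry), off) == sizeof(entry))
              supported = entry & PM_SOFT_DIRTY;
          close(fd);
      }
      munmap(page, pagesize);
      return supported;
  }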
2025-09-21selftests/mm: protection_keys: fix dead codeMuhammad Usama Anjum1-3/+1
The while loop doesn't execute and following warning gets generated: protection_keys.c:561:15: warning: code will never be executed [-Wunreachable-code] int rpkey = alloc_random_pkey(); Let's enable the while loop such that it gets executed nr_iterations times. Simplify the code a bit as well. Link: https://lkml.kernel.org/r/20250912123025.1271051-3-usama.anjum@collabora.com Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Lance Yang <lance.yang@linux.dev> Cc: Leon Romanovsky <leon@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
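The shape of the fix, as a sketch; nr_iterations, alloc_random_pkey() and sys_pkey_free() stand in for the test's existing helpers:

  /* Previously the body below was unreachable; now it runs nr_iterations
   * times. alloc_random_pkey()/sys_pkey_free() are the test's helpers. */
  static void exercise_random_pkeys(int nr_iterations)
  {
      int i;

      for (i = 0; i < nr_iterations; i++) {
          int rpkey = alloc_random_pkey();

          /* ... exercise the randomly chosen protection key ... */
          sys_pkey_free(rpkey);
      }
  }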
2025-09-21selftests/mm: add -Wunreachable-code and fix warningsMuhammad Usama Anjum4-5/+5
Patch series "selftests/mm: Add -Wunreachable-code and fix warnings". Add -Wunreachable-code to selftests and remove dead code from generated warnings. This patch (of 2): Enable -Wunreachable-code flag to catch dead code and fix them. 1. Remove the dead code and write a comment instead: hmm-tests.c:2033:3: warning: code will never be executed [-Wunreachable-code] perror("Should not reach this\n"); ^~~~~~ 2. ksft_exit_fail_msg() calls exit(). So cleanup isn't done. Replace it with ksft_print_msg(). split_huge_page_test.c:301:3: warning: code will never be executed [-Wunreachable-code] goto cleanup; ^~~~~~~~~~~~ 3. Remove duplicate inline. pkey_sighandler_tests.c:44:15: warning: duplicate 'inline' declaration specifier [-Wduplicate-decl-specifier] static inline __always_inline Link: https://lkml.kernel.org/r/20250912123025.1271051-1-usama.anjum@collabora.com Link: https://lkml.kernel.org/r/20250912123025.1271051-2-usama.anjum@collabora.com Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com> Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Lance Yang <lance.yang@linux.dev> Cc: Leon Romanovsky <leon@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21selftests/mm: centralize the __always_unused macroMuhammad Usama Anjum1-1/+1
This macro gets used in different tests. Add it to kselftest.h, which is a central location whose header the tests already include, then use this new macro. Link: https://lkml.kernel.org/r/20250912125102.1309796-1-usama.anjum@collabora.com Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Antonio Quartulli <antonio@openvpn.net> Cc: David S. Miller <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: "Sabrina Dubroca" <sd@queasysnail.net> Cc: Shuah Khan <shuah@kernel.org> Cc: Simon Horman <horms@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
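For illustration, the usual form of such a macro (a sketch; the exact definition and guard in kselftest.h may differ):

  /* Mark a variable or parameter as intentionally unused. */
  #ifndef __always_unused
  #define __always_unused __attribute__((__unused__))
  #endif

  /* Example: a handler that ignores its argument. */
  static void sig_handler(int sig __always_unused)
  {
  }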
2025-09-21selftests/mm: gup_tests: option to GUP all pages in a single callDavid Hildenbrand2-3/+7
We recently missed detecting an issue during early testing because the default (!all) tests would not trigger it and even when running "all" tests it only would happen sometimes because of races. So let's allow for an easy way to specify "GUP all pages in a single call", extend the test matrix and extend our default (!all) tests. By GUP'ing all pages in a single call, with the default size of 128MiB we'll cover multiple leaf page tables / PMDs on architectures with sane THP sizes. Link: https://lkml.kernel.org/r/20250910093051.1693097-1-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Peter Xu <peterx@redhat.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Mike Rapoport <rppt@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21selftests/mm: remove PROT_EXEC req from file-collapse testsZach O'Keefe1-1/+1
As of v6.8 commit 7fbb5e188248 ("mm: remove VM_EXEC requirement for THP eligibility") thp collapse no longer requires file-backed mappings be created with PROT_EXEC. Remove the overly-strict dependency from thp collapse tests so we test the least-strict requirement for success. Link: https://lkml.kernel.org/r/20250909190534.512801-1-zokeefe@google.com Signed-off-by: Zach O'Keefe <zokeefe@google.com> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21selftests/mm: fix va_high_addr_switch.sh failure on x86_64Chunyu Hu1-2/+2
The test will fail as below on x86_64 with cpu la57 support (it will skip if there is no la57 support). Note, the test requires nr_hugepages to be set first. # running bash ./va_high_addr_switch.sh # ------------------------------------- # mmap(addr_switch_hint - pagesize, pagesize): 0x7f55b60fa000 - OK # mmap(addr_switch_hint - pagesize, (2 * pagesize)): 0x7f55b60f9000 - OK # mmap(addr_switch_hint, pagesize): 0x800000000000 - OK # mmap(addr_switch_hint, 2 * pagesize, MAP_FIXED): 0x800000000000 - OK # mmap(NULL): 0x7f55b60f9000 - OK # mmap(low_addr): 0x40000000 - OK # mmap(high_addr): 0x1000000000000 - OK # mmap(high_addr) again: 0xffff55b6136000 - OK # mmap(high_addr, MAP_FIXED): 0x1000000000000 - OK # mmap(-1): 0xffff55b6134000 - OK # mmap(-1) again: 0xffff55b6132000 - OK # mmap(addr_switch_hint - pagesize, pagesize): 0x7f55b60fa000 - OK # mmap(addr_switch_hint - pagesize, 2 * pagesize): 0x7f55b60f9000 - OK # mmap(addr_switch_hint - pagesize/2 , 2 * pagesize): 0x7f55b60f7000 - OK # mmap(addr_switch_hint, pagesize): 0x800000000000 - OK # mmap(addr_switch_hint, 2 * pagesize, MAP_FIXED): 0x800000000000 - OK # mmap(NULL, MAP_HUGETLB): 0x7f55b5c00000 - OK # mmap(low_addr, MAP_HUGETLB): 0x40000000 - OK # mmap(high_addr, MAP_HUGETLB): 0x1000000000000 - OK # mmap(high_addr, MAP_HUGETLB) again: 0xffff55b5e00000 - OK # mmap(high_addr, MAP_FIXED | MAP_HUGETLB): 0x1000000000000 - OK # mmap(-1, MAP_HUGETLB): 0x7f55b5c00000 - OK # mmap(-1, MAP_HUGETLB) again: 0x7f55b5a00000 - OK # mmap(addr_switch_hint - pagesize, 2*hugepagesize, MAP_HUGETLB): 0x800000000000 - FAILED # mmap(addr_switch_hint , 2*hugepagesize, MAP_FIXED | MAP_HUGETLB): 0x800000000000 - OK # [FAIL] addr_switch_hint is defined as DEFAULT_MAP_WINDOW in the failed test (for x86_64, DEFAULT_MAP_WINDOW is defined as (1UL<<47) - pagesize in 64 bit). Before commit cc92882ee218 ("mm: drop hugetlb_get_unmapped_area{_*} functions"), for x86_64 hugetlb_get_unmapped_area() was handled in arch code arch/x86/mm/hugetlbpage.c and addr was checked with map_address_hint_valid() after being aligned with 'addr &= huge_page_mask(h)', which rounds down. The addr would fail the check because it is within the DEFAULT_MAP_WINDOW but (addr + len) is above the DEFAULT_MAP_WINDOW, so the code would go through hugetlb_get_unmapped_area_topdown() to find an area within the DEFAULT_MAP_WINDOW. After commit cc92882ee218 ("mm: drop hugetlb_get_unmapped_area{_*} functions"), the addr hint for hugetlb_get_unmapped_area() is rounded up and aligned to the hugepage size with ALIGN() for all arches. After the alignment, the addr is above the DEFAULT_MAP_WINDOW, and the map_address_hint_valid() check passes because both the aligned addr (addr0) and (addr + len) are above the DEFAULT_MAP_WINDOW, and the aligned hint address (0x800000000000) is returned as a suitable gap is found there, in arch_get_unmapped_area_topdown(). To still cover the case where addr is within the DEFAULT_MAP_WINDOW and addr + len is above the DEFAULT_MAP_WINDOW, change to choose the last hugepage-aligned address within the DEFAULT_MAP_WINDOW as the hint addr, so that addr + len (2 hugepages) is one hugepage above the DEFAULT_MAP_WINDOW. An aligned address won't be affected by the kernel's page round-up or round-down, so it's deterministic.
Link: https://lkml.kernel.org/r/20250912013711.3002969-4-chuhu@redhat.com Fixes: cc92882ee218 ("mm: drop hugetlb_get_unmapped_area{_*} functions") Signed-off-by: Chunyu Hu <chuhu@redhat.com> Suggested-by: David Hildenbrand <david@redhat.com> Acked-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
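A sketch of the new hint choice described in the commit above; window (the test's DEFAULT_MAP_WINDOW) and hugepagesize stand in for the test's own macros and variables:

  #include <sys/mman.h>

  /* Map two hugepages starting at the last hugepage-aligned address below
   * the window. An aligned hint is unaffected by the kernel's round-up or
   * round-down, so the range deterministically straddles the window
   * boundary; the test then checks where the mapping actually landed. */
  static void *map_across_window(unsigned long window, unsigned long hugepagesize)
  {
      unsigned long hint = (window / hugepagesize - 1) * hugepagesize;

      return mmap((void *)hint, 2 * hugepagesize, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
  }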
2025-09-21selftests/mm: alloc hugepages in va_high_addr_switch testChunyu Hu1-0/+37
Allocate hugepages in the test internally, so we don't fully rely on run_vmtests.sh. If run_vmtests.sh has already set things up and enough free hugepages are available to run the test, leave it as it is; otherwise set up the hugepages in the test. Save the original nr_hugepages value and restore it after the test finishes, leaving a stable test environment. Link: https://lkml.kernel.org/r/20250912013711.3002969-3-chuhu@redhat.com Signed-off-by: Chunyu Hu <chuhu@redhat.com> Cc: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21selftests/mm: fix hugepages cleanup too earlyChunyu Hu1-2/+7
Patch series "Fix va_high_addr_switch.sh test failure", v3. These three patches fix the va_high_addr_switch.sh test failure on x86_64. Patch 1 fixes the hugepage setup issue that nr_hugepages is reset too early in run_vmtests.sh and break the later va_high_addr_switch testing. Patch 2 adds hugepage setup in va_high_addr_switch test, so that it can still work if vm_runtests.sh changes the hugepage setup someday. Patch 3 fixes the test failure caused by the hint addr align method change in hugetlb_get_unmapped_area(). This patch (of 3): The nr_hugepgs variable is used to keep the original nr_hugepages at the hugepage setup step at test beginning. After userfaultfd test, a cleaup is executed, both /sys/kernel/mm/hugepages/hugepages-*/nr_hugepages and /proc/sys//vm/nr_hugepages are reset to 'original' value before userfaultfd test starts. Issue here is the value used to restore /proc/sys/vm/nr_hugepages is nr_hugepgs which is the initial value before the vm_runtests.sh runs, not the value before userfaultfd test starts. 'va_high_addr_swith.sh' tests runs after that will possibly see no hugepages available for test, and got EINVAL when mmap(HUGETLB), making the result invalid. And before pkey tests, nr_hugepgs is changed to be used as a temp variable to save nr_hugepages before pkey test, and restore it after pkey tests finish. The original nr_hugepages value is not tracked anymore, so no way to restore it after all tests finish. Add a new variable orig_nr_hugepgs to save the original nr_hugepages, and and restore it to nr_hugepages after all tests finish. And change to use the nr_hugepgs variable to save the /proc/sys/vm/nr_hugeages after hugepage setup, it's also the value before userfaultfd test starts, and the correct value to be restored after userfaultfd finishes. The va_high_addr_switch.sh broken will be resolved. Link: https://lkml.kernel.org/r/20250912013711.3002969-1-chuhu@redhat.com Link: https://lkml.kernel.org/r/20250912013711.3002969-2-chuhu@redhat.com Signed-off-by: Chunyu Hu <chuhu@redhat.com> Acked-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21selftests/mm: split_huge_page_test: cleanups for split_pte_mapped_thp testDavid Hildenbrand1-49/+74
There is room for improvement, so let's clean up a bit: (1) Define "4" as a constant. (2) SKIP if we fail to allocate all THPs (e.g., fragmented) and add recovery code for all other failure cases: no need to exit the test. (3) Rename "len" to thp_area_size, and "one_page" to "thp_area". (4) Allocate a new area "page_area" into which we will mremap the pages; add "page_area_size". Now we can easily merge the two mremap instances into a single one. (5) Iterate THPs instead of bytes when checking for missed THPs after mremap. (6) Rename "pte_mapped2" to "tmp", used to verify mremap(MAP_FIXED) result. (7) Split the corruption test from the failed-split test, so we can just iterate bytes vs. thps naturally. (8) Extend comments and clarify why we are using mremap in the first place. Link: https://lkml.kernel.org/r/20250903070253.34556-3-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21selftests/mm: split_huge_page_test: fix occasional is_backed_by_folio() wrong resultsDavid Hildenbrand1-8/+7
Patch series "selftests/mm: split_huge_page_test: split_pte_mapped_thp improvements", v2. One fix for occasional failures I found while testing and a bunch of cleanups that should make that test easier to digest. This patch (of 2): When checking for actual tail or head pages of a folio, we must make sure that the KPF_COMPOUND_HEAD/KPF_COMPOUND_TAIL flag is paired with KPF_THP. For example, if we have another large folio after our large folio in physical memory, our "pfn_flags & (KPF_THP | KPF_COMPOUND_TAIL)" would trigger even though it's actually a head page of the next folio. If is_backed_by_folio() returns a wrong result, split_pte_mapped_thp() can fail with "Some THPs are missing during mremap". Fix it by checking for head/tail pages of folios properly. Add folio_tail_flags/folio_head_flags to improve readability and use these masks also when just testing for any compound page. Link: https://lkml.kernel.org/r/20250903070253.34556-1-david@redhat.com Link: https://lkml.kernel.org/r/20250903070253.34556-2-david@redhat.com Fixes: 169b456b0162 ("selftests/mm: reimplement is_backed_by_thp() with more precise check") Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
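A sketch of the stricter check described above; the KPF_* bit numbers follow Documentation/admin-guide/mm/pagemap.rst, and folio_head_flags/folio_tail_flags mirror the masks the patch introduces:

  #include <stdbool.h>
  #include <stdint.h>

  #define KPF_COMPOUND_HEAD (1ULL << 15)
  #define KPF_COMPOUND_TAIL (1ULL << 16)
  #define KPF_THP           (1ULL << 22)

  static const uint64_t folio_head_flags = KPF_THP | KPF_COMPOUND_HEAD;
  static const uint64_t folio_tail_flags = KPF_THP | KPF_COMPOUND_TAIL;

  /* A pfn is a THP head/tail only if *both* bits of the mask are set;
   * checking a single bit can match the head page of the *next* folio. */
  static bool is_thp_head(uint64_t pfn_flags)
  {
      return (pfn_flags & folio_head_flags) == folio_head_flags;
  }

  static bool is_thp_tail(uint64_t pfn_flags)
  {
      return (pfn_flags & folio_tail_flags) == folio_tail_flags;
  }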
2025-09-21selftests/mm/uffd: refactor non-composite global vars into structUjwal Kundur5-541/+615
Refactor macros and non-composite global variable definitions into a struct that is defined at the start of a test and is passed around instead of relying on global vars. Link: https://lkml.kernel.org/r/20250829155600.2000-1-ujwal.kundur@gmail.com Signed-off-by: Ujwal Kundur <ujwal.kundur@gmail.com> Acked-by: Peter Xu <peterx@redhat.com> Reviewed-by: Brendan Jackman <jackmanb@google.com> Cc: David Hildenbrand <david@redhat.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm/uffd-stress: stricten constraint on free hugepages needed before the testDev Jain1-6/+11
The test requires at least 2 * (bytes/page_size) hugetlb memory, since we require an identical number of hugepages for the src and dst locations. Fix this. Along with the above, as explained in patch "selftests/mm/uffd-stress: Make test operate on less hugetlb memory", the racy nature of the test requires that we have some extra number of hugepages left beyond what is required. Therefore, stricten this constraint. Link: https://lkml.kernel.org/r/20250909061531.57272-3-dev.jain@arm.com Fixes: 5a6aa60d1823 ("selftests/mm: skip uffd hugetlb tests with insufficient hugepages") Signed-off-by: Dev Jain <dev.jain@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm/uffd-stress: make test operate on less hugetlb memoryDev Jain1-3/+7
Patch series "selftests/mm: uffd-stress fixes", v2. This patchset ensures that the number of hugepages is correctly set in the system so that the uffd-stress test does not fail due to the racy nature of the test. Patch 1 changes the hugepage constraint in the run_vmtests.sh script, whereas patch 2 changes the constraint in the test itself. This patch (of 2): We observed uffd-stress selftest failure on arm64 and intermittent failures on x86 too: running ./uffd-stress hugetlb-private 128 32 bounces: 17, mode: rnd read, ERROR: UFFDIO_COPY error: -12 (errno=12, @uffd-common.c:617) [FAIL] not ok 18 uffd-stress hugetlb-private 128 32 # exit=1 For this particular case, the number of free hugepages from run_vmtests.sh will be 128, and the test will allocate 64 hugepages in the source location. The stress() function will start spawning threads which will operate on the destination location, triggering uffd-operations like UFFDIO_COPY from src to dst, which means that we will require 64 more hugepages for the dst location. Let us observe the locking_thread() function. It will lock the mutex kept at dst, triggering uffd-copy. Suppose that 127 (64 for src and 63 for dst) hugepages have been reserved. In case of BOUNCE_RANDOM, it may happen that two threads trying to lock the mutex at dst, try to do so at the same hugepage number. If one thread succeeds in reserving the last hugepage, then the other thread may fail in alloc_hugetlb_folio(), returning -ENOMEM. I can confirm that this is indeed the case by this hacky patch: :--- a/mm/hugetlb.c ; +++ b/mm/hugetlb.c ; @@ -6929,6 +6929,11 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte, ; ; folio = alloc_hugetlb_folio(dst_vma, dst_addr, false); ; if (IS_ERR(folio)) { ; + pte_t *actual_pte = hugetlb_walk(dst_vma, dst_addr, PMD_SIZE); ; + if (actual_pte) { ; + ret = -EEXIST; ; + goto out; ; + } ; ret = -ENOMEM; ; goto out; ; } This code path gets triggered indicating that the PMD at which one thread is trying to map a hugepage, gets filled by a racing thread. Therefore, instead of using freepgs to compute the amount of memory, use freepgs - (min(32, nr_cpus) - 1), so that the test still has some extra hugepages to use. The adjustment is a function of min(32, nr_cpus) - the value of nr_parallel in the test - because in the worst case, nr_parallel number of threads will try to map a hugepage on the same PMD, one will win the allocation race, and the other nr_parallel - 1 threads will fail, so we need extra nr_parallel - 1 hugepages to satisfy this request. Note that, in case the adjusted value underflows, there is a check for the number of free hugepages in the test itself, which will fail: get_free_hugepages() < bytes / page_size A negative value will be passed on to bytes which is of type size_t, thus the RHS will become a large value and the check will fail, so we are safe. Link: https://lkml.kernel.org/r/20250909061531.57272-1-dev.jain@arm.com Link: https://lkml.kernel.org/r/20250909061531.57272-2-dev.jain@arm.com Signed-off-by: Dev Jain <dev.jain@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: use calloc instead of malloc in pagemap_ioctl.cI Viswanath1-12/+12
As per Documentation/process/deprecated.rst, dynamic size calculations should not be performed in memory allocator arguments due to possible overflows. Replace malloc with calloc to avoid open-ended arithmetic and prevent possible overflows. Link: https://lkml.kernel.org/r/20250825170643.63174-1-viswanathiyyappan@gmail.com Signed-off-by: I Viswanath <viswanathiyyappan@gmail.com> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed by: Donet Tom <donettom@linux.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
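The pattern being replaced, for illustration (alloc_vec and vec_len are illustrative names, not the test's actual ones):

  #include <stdint.h>
  #include <stdlib.h>

  /* Illustrative allocation helper; vec_len is the element count. */
  static uint64_t *alloc_vec(size_t vec_len)
  {
      /* Before: malloc(vec_len * sizeof(uint64_t)) - the multiplication in
       * the allocator argument can overflow silently.
       * After: calloc() performs the count * size multiplication with
       * overflow checking and also zero-initializes the buffer. */
      return calloc(vec_len, sizeof(uint64_t));
  }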
2025-09-13selftests: centralise maybe-unused definition in kselftest.hBala-Vignesh-Reddy1-3/+0
Several selftests subdirectories duplicated the __maybe_unused define, leading to redundant code. Move it to the kselftest.h header and remove the other definitions. This addresses the duplication noted in the proc-pid-vm warning fix. Link: https://lkml.kernel.org/r/20250821101159.2238-1-reddybalavignesh9979@gmail.com Signed-off-by: Bala-Vignesh-Reddy <reddybalavignesh9979@gmail.com> Suggested-by: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/lkml/20250820143954.33d95635e504e94df01930d0@linux-foundation.org/ Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Acked-by: SeongJae Park <sj@kernel.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Acked-by: Mickaël Salaün <mic@digikod.net> [landlock] Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13kselftest: mm: fix typos in test_vmalloc.shally heev1-3/+3
Fix simple typos in function name and console message. Link: https://lkml.kernel.org/r/20250823170208.184149-1-allyheev@gmail.com Signed-off-by: ally heev <allyheev@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: test that rmap behaves as expectedWei Yang4-0/+441
As David suggested, currently we don't have a high-level test case to verify the behavior of rmap. This patch introduces verification of rmap via migration. The general idea is that if we migrate a page shared between processes, the change should be reflected in all related processes; otherwise, we have a problem in rmap. Currently it covers the following four scenarios: * anonymous page * shmem page * pagecache page * ksm page Link: https://lkml.kernel.org/r/20250819080047.10063-3-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Suggested-by: David Hildenbrand <david@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Rik van Riel <riel@surriel.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Harry Yoo <harry.yoo@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: put general ksm operation into vm_utilWei Yang3-118/+154
Patch series "test that rmap behaves as expected", v4. As David suggested, currently we don't have a high level test case to verify the behavior of rmap. This patch set introduce the verification on rmap by migration. Patch 1 is a preparation to move ksm related operations into vm_util. Patch 2 is the new test case for rmap. Currently it covers following four scenarios: * anonymous page * shmem page * pagecache page * ksm page This patch (of 2): There are some general ksm operations could be used by other related test cases. Put them into vm_util for common use. This is a preparation patch for later use. Link: https://lkml.kernel.org/r/20250819080047.10063-1-richard.weiyang@gmail.com Link: https://lkml.kernel.org/r/20250819080047.10063-2-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Suggested-by: David Hildenbrand <david@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Rik van Riel <riel@surriel.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Harry Yoo <harry.yoo@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: check after-split folio orders in split_huge_page_testZi Yan1-24/+64
Instead of just checking the existence of PMD folios before and after folio split tests, use check_folio_orders() to check after-split folio orders. The split ranges in split_thp_in_pagecache_to_order_at() are changed to [addr, addr + pagesize) for every pmd_pagesize. It prevents folios within the range being split multiple times due to debugfs split function always perform splits with a pagesize step for a given range. The following tests are not changed: 1. split_pte_mapped_thp: the test already uses kpageflags to check; 2. split_file_backed_thp: no vaddr available. Link: https://lkml.kernel.org/r/20250818184622.1521620-6-ziy@nvidia.com Signed-off-by: Zi Yan <ziy@nvidia.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Donet Tom <donettom@linux.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: wang lian <lianux.mm@gmail.com> Cc: Wei Yang <richard.weiyang@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: add check_after_split_folio_orders() helperZi Yan1-0/+152
The helper gathers a folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios. The helper will be used the upcoming commit. Link: https://lkml.kernel.org/r/20250818184622.1521620-5-ziy@nvidia.com Signed-off-by: Zi Yan <ziy@nvidia.com> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Donet Tom <donettom@linux.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: wang lian <lianux.mm@gmail.com> Cc: Wei Yang <richard.weiyang@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: reimplement is_backed_by_thp() with more precise checkZi Yan3-24/+82
and rename it to is_backed_by_folio(). is_backed_by_folio() checks if the given vaddr is backed by a folio with a given order. It does so by: 1. getting the pfn of the vaddr; 2. checking kpageflags of the pfn; if order is greater than 0: 3. checking kpageflags of the head pfn; 4. checking kpageflags of all tail pfns. pmd_order is added to split_huge_page_test.c and replaces max_order. [ziy@nvidia.com: reduce code duplication, per David] Link: https://lkml.kernel.org/r/F54782D6-65A3-4D35-AE03-8ADE636EE258@nvidia.com Link: https://lkml.kernel.org/r/20250818184622.1521620-4-ziy@nvidia.com Signed-off-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: wang lian <lianux.mm@gmail.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Donet Tom <donettom@linux.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
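A condensed sketch of the pagemap/kpageflags lookups such a check builds on (pagemap entry: bit 63 = present, bits 0-54 = pfn); the real is_backed_by_folio() additionally verifies the head pfn and all tail pfns when order > 0:

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  #define PM_PRESENT  (1ULL << 63)
  #define PM_PFN_MASK ((1ULL << 55) - 1)

  /* Translate a virtual address to a pfn via /proc/self/pagemap. */
  static uint64_t vaddr_to_pfn(void *vaddr)
  {
      uint64_t entry = 0;
      size_t pagesize = getpagesize();
      int fd = open("/proc/self/pagemap", O_RDONLY);

      if (fd < 0)
          return 0;
      pread(fd, &entry, sizeof(entry),
            ((uintptr_t)vaddr / pagesize) * sizeof(entry));
      close(fd);
      return (entry & PM_PRESENT) ? (entry & PM_PFN_MASK) : 0;
  }

  /* Fetch the kernel page flags for a pfn from /proc/kpageflags. */
  static uint64_t pfn_flags(uint64_t pfn)
  {
      uint64_t flags = 0;
      int fd = open("/proc/kpageflags", O_RDONLY);

      if (fd < 0)
          return 0;
      pread(fd, &flags, sizeof(flags), pfn * sizeof(flags));
      close(fd);
      return flags;
  }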
2025-09-13selftests/mm: mark all functions static in split_huge_page_test.cZi Yan1-11/+11
All functions are only used within the file. Link: https://lkml.kernel.org/r/20250818184622.1521620-3-ziy@nvidia.com Signed-off-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: wang lian <lianux.mm@gmail.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Donet Tom <donettom@linux.ibm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests: prctl: introduce tests for disabling THPs except for madviseUsama Arif1-0/+113
The test will set the global system THP setting to never, madvise or always depending on the fixture variant and the 2M setting to inherit before it starts (and reset to original at teardown). The fixture setup will also test if PR_SET_THP_DISABLE prctl call can be made with PR_THP_DISABLE_EXCEPT_ADVISED and skip if it fails. This tests if the process can: - successfully get the policy to disable THPs expect for madvise. - get hugepages only on MADV_HUGE and MADV_COLLAPSE if the global policy is madvise/always and only with MADV_COLLAPSE if the global policy is never. - successfully reset the policy of the process. - after reset, only get hugepages with: - MADV_COLLAPSE when policy is set to never. - MADV_HUGE and MADV_COLLAPSE when policy is set to madvise. - always when policy is set to "always". - never get a THP with MADV_NOHUGEPAGE. - repeat the above tests in a forked process to make sure the policy is carried across forks. Test results: ./prctl_thp_disable TAP version 13 1..12 ok 1 prctl_thp_disable_completely.never.nofork ok 2 prctl_thp_disable_completely.never.fork ok 3 prctl_thp_disable_completely.madvise.nofork ok 4 prctl_thp_disable_completely.madvise.fork ok 5 prctl_thp_disable_completely.always.nofork ok 6 prctl_thp_disable_completely.always.fork ok 7 prctl_thp_disable_except_madvise.never.nofork ok 8 prctl_thp_disable_except_madvise.never.fork ok 9 prctl_thp_disable_except_madvise.madvise.nofork ok 10 prctl_thp_disable_except_madvise.madvise.fork ok 11 prctl_thp_disable_except_madvise.always.nofork ok 12 prctl_thp_disable_except_madvise.always.fork [usamaarif642@gmail.com: return after executing test in child process] Link: https://lkml.kernel.org/r/3dca2de4-9a6a-4efe-a86c-83f9509831fc@gmail.com Link: https://lkml.kernel.org/r/20250815135549.130506-8-usamaarif642@gmail.com Signed-off-by: Usama Arif <usamaarif642@gmail.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jann Horn <jannh@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yafang <laoar.shao@gmail.com> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests: prctl: introduce tests for disabling THPs completelyUsama Arif5-1/+189
The test will set the global system THP setting to never, madvise or always depending on the fixture variant and the 2M setting to inherit before it starts (and reset to original at teardown). The fixture setup will also test if PR_SET_THP_DISABLE prctl call can be made to disable all THPs and skip if it fails. This tests if the process can: - successfully get the policy to disable THPs completely. - never get a hugepage when the THPs are completely disabled with the prctl, including with MADV_HUGE and MADV_COLLAPSE. - successfully reset the policy of the process. - after reset, only get hugepages with: - MADV_COLLAPSE when policy is set to never. - MADV_HUGE and MADV_COLLAPSE when policy is set to madvise. - always when policy is set to "always". - never get a THP with MADV_NOHUGEPAGE. - repeat the above tests in a forked process to make sure the policy is carried across forks. [usamaarif642@gmail.com: return after executing test in child process] Link: https://lkml.kernel.org/r/2d0ea708-ecba-4021-b6ca-e93f1413d60a@gmail.com [usamaarif642@gmail.com: include linux/mman.h for prctl_thp_disable] Link: https://lkml.kernel.org/r/20250910204609.1720498-1-usamaarif642@gmail.com Link: https://lore.kernel.org/all/c8249725-e91d-4c51-b9bb-40305e61e20d@sirena.org.uk/ Link: https://lkml.kernel.org/r/20250815135549.130506-7-usamaarif642@gmail.com Signed-off-by: Usama Arif <usamaarif642@gmail.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jann Horn <jannh@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yafang <laoar.shao@gmail.com> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
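A minimal sketch of the prctl() calls the fixture relies on for the "disable completely" case; PR_SET_THP_DISABLE/PR_GET_THP_DISABLE are long-standing prctls, while the PR_THP_DISABLE_EXCEPT_ADVISED flag used by the companion test comes from this series and may not exist in older headers:

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
      /* Disable THP completely for this process (inherited across fork). */
      if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0)) {
          perror("PR_SET_THP_DISABLE");
          return 1;        /* the fixture would SKIP here */
      }
      printf("THP disabled: %d\n", prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0));

      /* ... MADV_HUGEPAGE / MADV_COLLAPSE checks would go here ... */

      /* Reset the policy so later checks see the default behaviour again. */
      prctl(PR_SET_THP_DISABLE, 0, 0, 0, 0);
      return 0;
  }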
2025-09-13selftest/mm: extract sz2ord function into vm_util.hUsama Arif4-9/+9
The function already has 2 uses and will have a 3rd one in prctl selftests. The pagesize argument is added into the function, as it's not a global variable anymore. No functional change intended with this patch. Link: https://lkml.kernel.org/r/20250815135549.130506-6-usamaarif642@gmail.com Suggested-by: David Hildenbrand <david@redhat.com> Signed-off-by: Usama Arif <usamaarif642@gmail.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jann Horn <jannh@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yafang <laoar.shao@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
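A plausible form of the extracted helper, with pagesize now passed in; a sketch, the vm_util.h version may differ in detail:

  #include <stddef.h>

  /* Convert a size to a page order, e.g. 2M is order 9 with 4K base pages
   * and order 5 with 64K base pages. Assumes sz is a power-of-two multiple
   * of pagesize. */
  static inline unsigned int sz2ord(size_t sz, size_t pagesize)
  {
      return __builtin_ctzll(sz / pagesize);
  }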
2025-09-13selftests/mm: skip hugepage-mremap test if userfaultfd unavailableAboorva Devarajan1-3/+13
Gracefully skip test if userfaultfd is not supported (ENOSYS) or not permitted (EPERM), instead of failing. This avoids misleading failures with clear skip messages. -------------- Before Patch -------------- ~ running ./hugepage-mremap ... ~ Bail out! userfaultfd: Function not implemented ~ Planned tests != run tests (1 != 0) ~ Totals: pass:0 fail:0 xfail:0 xpass:0 skip:0 error:0 ~ [FAIL] not ok 4 hugepage-mremap # exit=1 -------------- After Patch -------------- ~ running ./hugepage-mremap ... ~ ok 2 # SKIP userfaultfd is not supported/not enabled. ~ 1 skipped test(s) detected. ~ Totals: pass:0 fail:0 xfail:0 xpass:0 skip:1 error:0 ~ [SKIP] ok 4 hugepage-mremap # SKIP Link: https://lkml.kernel.org/r/20250816040113.760010-8-aboorvad@linux.ibm.com Co-developed-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
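The skip pattern, roughly, using the kselftest helpers (a sketch, not the exact hunk):

  #include <errno.h>
  #include <fcntl.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include "../kselftest.h"

  static int open_uffd_or_skip(void)
  {
      int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

      if (uffd < 0 && (errno == ENOSYS || errno == EPERM))
          ksft_exit_skip("userfaultfd is not supported/not enabled.\n");
      if (uffd < 0)
          ksft_exit_fail_msg("userfaultfd failed: %s\n", strerror(errno));
      return uffd;
  }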
2025-09-13selftests/mm: skip thuge-gen test if system is not setup properlyAboorva Devarajan1-4/+7
Make thuge-gen skip instead of fail when it can't run due to system settings. If shmmax is too small or no 1G huge pages are available, the test now prints a warning and is marked as skipped. ------------------- Before Patch: ------------------- ~ running ./thuge-gen ~ Bail out! Please do echo 262144 > /proc/sys/kernel/shmmax ~ Totals: pass:0 fail:0 xfail:0 xpass:0 skip:0 error:0 ~ [FAIL] not ok 28 thuge-gen ~ exit=1 ------------------- After Patch: ------------------- ~ running ./thuge-gen ~ ~ WARNING: shmmax is too small to run this test. ~ ~ Please run the following command to increase shmmax: ~ ~ echo 262144 > /proc/sys/kernel/shmmax ~ 1..0 ~ SKIP Test skipped due to insufficient shmmax value. ~ [SKIP] ok 29 thuge-gen ~ SKIP Link: https://lkml.kernel.org/r/20250816040113.760010-7-aboorvad@linux.ibm.com Co-developed-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
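The shape of the check, as a sketch; SHMMAX_REQUIRED is an illustrative constant, the test derives the real value from its huge page sizes:

  #include <stdio.h>
  #include "../kselftest.h"

  #define SHMMAX_REQUIRED 262144UL        /* illustrative threshold */

  static void check_shmmax_or_skip(void)
  {
      unsigned long shmmax = 0;
      FILE *f = fopen("/proc/sys/kernel/shmmax", "r");

      if (f) {
          if (fscanf(f, "%lu", &shmmax) != 1)
              shmmax = 0;
          fclose(f);
      }
      if (shmmax < SHMMAX_REQUIRED) {
          ksft_print_msg("WARNING: shmmax is too small to run this test.\n");
          ksft_print_msg("Please run: echo %lu > /proc/sys/kernel/shmmax\n",
                         SHMMAX_REQUIRED);
          ksft_exit_skip("Test skipped due to insufficient shmmax value.\n");
      }
  }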
2025-09-13selftests/mm: fix child process exit codes in ksm_functional_testsAboorva Devarajan1-7/+9
In ksm_functional_tests, test_child_ksm() returned negative values to indicate errors. However, when passed to exit(), these were interpreted as large unsigned values (e.g, -2 became 254), leading to incorrect handling in the parent process. As a result, some tests appeared to be skipped or silently failed. This patch changes test_child_ksm() to return positive error codes (1, 2, 3) and updates test_child_ksm_err() to interpret them correctly. Additionally, test_prctl_fork_exec() now uses exit(4) after a failed execv() to clearly signal exec failures. This ensures the parent accurately detects and reports child process failures. -------------- Before patch: -------------- - [RUN] test_unmerge ok 1 Pages were unmerged ... - [RUN] test_prctl_fork - No pages got merged - [RUN] test_prctl_fork_exec ok 7 PR_SET_MEMORY_MERGE value is inherited ... Bail out! 1 out of 8 tests failed - Planned tests != run tests (9 != 8) - Totals: pass:7 fail:1 xfail:0 xpass:0 skip:0 error:0 -------------- After patch: -------------- - [RUN] test_unmerge ok 1 Pages were unmerged ... - [RUN] test_prctl_fork - No pages got merged not ok 7 Merge in child failed - [RUN] test_prctl_fork_exec ok 8 PR_SET_MEMORY_MERGE value is inherited ... Bail out! 2 out of 9 tests failed - Totals: pass:7 fail:2 xfail:0 xpass:0 skip:0 error:0 Link: https://lkml.kernel.org/r/20250816040113.760010-6-aboorvad@linux.ibm.com Fixes: 6c47de3be3a0 ("selftest/mm: ksm_functional_tests: extend test case for ksm fork/exec") Co-developed-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
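The parent/child convention after the fix, sketched with illustrative names and codes:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Child: return small positive codes. exit() keeps only 8 bits, so a
   * negative return such as -2 would show up as 254 in the parent. */
  static int test_child_ksm(void)
  {
      /* ... 0 on success, 1/2/3 for the distinct failure modes ... */
      return 0;
  }

  static void run_child_and_report(void)
  {
      pid_t pid = fork();
      int status;

      if (pid == 0)
          exit(test_child_ksm());
      waitpid(pid, &status, 0);
      /* WEXITSTATUS() now maps 1:1 onto the child's error codes; 4 is
       * reserved for a failed execv() in the fork/exec variant. */
      printf("child result: %d\n",
             WIFEXITED(status) ? WEXITSTATUS(status) : -1);
  }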
2025-09-13mm/selftests: fix split_huge_page_test failure on systems with 64KB page sizeDonet Tom4-18/+19
The split_huge_page_test fails on systems with a 64KB base page size. This is because the order of a 2MB huge page is different: On 64KB systems, the order is 5. On 4KB systems, it's 9. The test currently assumes a maximum huge page order of 9, which is only valid for 4KB base page systems. On systems with 64KB pages, attempting to split huge pages beyond their actual order (5) causes the test to fail. In this patch, we calculate the huge page order based on the system's base page size. With this change, the tests now run successfully on both 64KB and 4KB page size systems. Link: https://lkml.kernel.org/r/20250816040113.760010-5-aboorvad@linux.ibm.com Fixes: fa6c02315f74 ("mm: huge_memory: a new debugfs interface for splitting THP tests") Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftest/mm: fix ksm_funtional_test failuresDonet Tom1-1/+11
This patch fixes 2 issues. 1) After fork() in test_prctl_fork, the child process uses the file descriptors from the parent process to read ksm_stat and ksm_merging_pages. This results in incorrect values being read (the parent process's ksm_stat and ksm_merging_pages will be read in the child), causing the test to fail. This patch calls init_global_file_handles() in the child process to ensure that the current process's file descriptors are used to read ksm_stat and ksm_merging_pages. 2) All tests currently call ksm_merge to trigger page merging. To ensure the system remains in a consistent state for subsequent tests, it is better to call ksm_unmerge during the test cleanup phase. In the test_prctl_fork test, after a fork(), reading ksm_merging_pages in the child process returns a non-zero value because a previous test performed a merge, and the child's memory state is inherited from the parent. Although the child process calls ksm_unmerge, the ksm_merging_pages counter in the parent is reset to zero, while the child's counter remains unchanged. This discrepancy causes the test to fail. To avoid this issue, each test should call ksm_unmerge during cleanup to ensure the counter is reset and the system is in a clean state for subsequent tests. The execv() argument is an array of pointers to null-terminated strings that must be terminated by a NULL pointer; in this patch we also added the terminating NULL to the execv() argument. Link: https://lkml.kernel.org/r/20250816040113.760010-4-aboorvad@linux.ibm.com Fixes: 6c47de3be3a0 ("selftest/mm: ksm_functional_tests: extend test case for ksm fork/exec") Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: add support to test 4PB VA on PPC64Donet Tom1-0/+11
PowerPC64 supports a 4PB virtual address space, but this test was previously limited to 512TB. This patch extends the coverage up to the full 4PB VA range on PowerPC64. Memory from 0 to 128TB is allocated without an address hint, while allocations from 128TB to 4PB use a hint address. Link: https://lkml.kernel.org/r/20250816040113.760010-3-aboorvad@linux.ibm.com Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13mm/selftests: fix incorrect pointer being passed to mark_range()Donet Tom1-1/+1
Patch series "selftests/mm: Fix false positives and skip unsupported tests", v4. This patch series addresses false positives in the generic mm selftests and skips tests that cannot run correctly due to missing features or system limitations. This patch (of 7): In main(), the high address is stored in hptr, but for mark_range(), the address passed is ptr, not hptr. Fixed this by changing ptr[i] to hptr[i] in mark_range() function call. Link: https://lkml.kernel.org/r/20250816040113.760010-1-aboorvad@linux.ibm.com Link: https://lkml.kernel.org/r/20250816040113.760010-2-aboorvad@linux.ibm.com Fixes: b2a79f62133a ("selftests/mm: virtual_address_range: unmap chunks after validation") Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: fix spelling mistake "mrmeap" -> "mremap"Colin Ian King1-3/+3
There are spelling mistakes in perror messages. Fix these. Link: https://lkml.kernel.org/r/20250813081333.1978096-1-colin.i.king@gmail.com Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: do check_huge_anon() with a number been passed inWei Yang1-1/+1
Currently it hard codes the number of hugepages to check for check_huge_anon(), but it would be more reasonable to do the check based on a number passed in. Pass in the hugepage number and do the check based on it. Link: https://lkml.kernel.org/r/20250809194209.30484-1-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Reviewed by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com> Reviewed-by: wang lian <lianux.mm@gmail.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13selftests/mm: use __auto_type in swap() macroPranav Tyagi1-1/+1
Replace typeof() with __auto_type in the swap() macro in uffd-stress.c. __auto_type was introduced in GCC 4.9 and reduces the compile time for all compilers. No functional changes intended. Link: https://lkml.kernel.org/r/20250730142301.6754-1-pranav.tyagi03@gmail.com Signed-off-by: Pranav Tyagi <pranav.tyagi03@gmail.com> Reviewed-by: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Peter Xu <peterx@redhat.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
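The resulting macro, roughly:

  /* __auto_type (GCC >= 4.9, also supported by clang) deduces the type of
   * the temporary, which the commit notes reduces compile time compared to
   * spelling it with typeof(). */
  #define swap(a, b)                      \
      do {                                \
          __auto_type __tmp = (a);        \
          (a) = (b);                      \
          (b) = __tmp;                    \
      } while (0)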
2025-09-13selftests/mm: pass filename as input param to VM_PFNMAP testsSudarsan Mahendran3-16/+47
Enable these tests to be run on other pfnmap'ed memory like NVIDIA's EGM. Add '--' as a separator to pass in file path. This allows passing of cmd line arguments to kselftest_harness. Use '/dev/mem' as default filename. Existing test passes: pfnmap TAP version 13 1..6 # Starting 6 tests from 1 test cases. # PASSED: 6 / 6 tests passed. # Totals: pass:6 fail:0 xfail:0 xpass:0 skip:0 error:0 Pass params to kselftest_harness: pfnmap -r pfnmap:mremap_fixed TAP version 13 1..1 # Starting 1 tests from 1 test cases. # RUN pfnmap.mremap_fixed ... # OK pfnmap.mremap_fixed ok 1 pfnmap.mremap_fixed # PASSED: 1 / 1 tests passed. # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0 Pass non-existent file name as input: pfnmap -- /dev/blah TAP version 13 1..6 # Starting 6 tests from 1 test cases. # RUN pfnmap.madvise_disallowed ... # SKIP Cannot open '/dev/blah' Pass non pfnmap'ed file as input: pfnmap -r pfnmap.madvise_disallowed -- randfile.txt TAP version 13 1..1 # Starting 1 tests from 1 test cases. # RUN pfnmap.madvise_disallowed ... # SKIP Invalid file: 'randfile.txt'. Not pfnmap'ed Link: https://lkml.kernel.org/r/20250805013629.47629-1-sudarsanm@google.com Signed-off-by: Sudarsan Mahendran <sudarsanm@google.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-08-27selftests/mm: fix FORCE_READ to read input value correctlyZi Yan7-9/+14
FORCE_READ() converts the input value x to its pointer type and then reads from address x. This is wrong. If x were a non-pointer, it would be caught easily. But all FORCE_READ() callers are trying to read from a pointer and FORCE_READ() basically reads a pointer to a pointer instead of the original typed pointer. Almost no access violation was found, except the one from split_huge_page_test. Fix it by implementing a simplified READ_ONCE() instead. Link: https://lkml.kernel.org/r/20250805175140.241656-1-ziy@nvidia.com Fixes: 3f6bfd4789a0 ("selftests/mm: reuse FORCE_READ to replace "asm volatile("" : "+r" (XXX));"") Signed-off-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: wang lian <lianux.mm@gmail.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Christian Brauner <brauner@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Kairui Song <ryncsn@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
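For context, a sketch of the before/after shape of the macro; the old form is reconstructed from the description above and the new one follows the simplified-READ_ONCE() idea, so the exact replacement may differ:

  /* Broken: casts the value x to a pointer of its own type and dereferences
   * it, i.e. it reads a pointer-to-pointer from address x. */
  #define FORCE_READ_OLD(x) (*(volatile typeof(x) *)x)

  /* Fixed idea: take the address of x and read it back through a
   * volatile-qualified pointer, preserving the original type. */
  #define FORCE_READ(x) (*(const volatile typeof(x) *)&(x))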
2025-08-19selftests/mm: add test for invalid multi VMA operationsLorenzo Stoakes1-3/+261
We can use UFFD to easily assert invalid multi VMA moves, so do so, asserting expected behaviour when VMAs invalid for a multi VMA operation are encountered. We assert both that such operations are not permitted, and that we do not even attempt to move the first VMA under these circumstances. We also assert that we can still move a single VMA regardless. We then assert that a partial failure can occur if the invalid VMA appears later in the range of multiple VMAs, both at the very next VMA, and also at the end of the range. As part of this change, we are using the is_range_valid() helper more aggressively. Therefore, fix a bug where stale buffered data would hang around on success, causing subsequent calls to is_range_valid() to potentially give invalid results. We simply have to fflush() the stream on success to resolve this issue. Link: https://lkml.kernel.org/r/c4fb86dd5ba37610583ad5fc0e0c2306ddf318b9.1754218667.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-08-05Merge tag 'mm-stable-2025-08-03-12-35' of ↵Linus Torvalds4-0/+351
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull more MM updates from Andrew Morton: "Significant patch series in this pull request: - "mseal cleanups" (Lorenzo Stoakes) Some mseal cleaning with no intended functional change. - "Optimizations for khugepaged" (David Hildenbrand) Improve khugepaged throughput by batching PTE operations for large folios. This gain is mainly for arm64. - "x86: enable EXECMEM_ROX_CACHE for ftrace and kprobes" (Mike Rapoport) A bugfix, additional debug code and cleanups to the execmem code. - "mm/shmem, swap: bugfix and improvement of mTHP swap in" (Kairui Song) Bugfixes, cleanups and performance improvements to the mTHP swapin code" * tag 'mm-stable-2025-08-03-12-35' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (38 commits) mm: mempool: fix crash in mempool_free() for zero-minimum pools mm: correct type for vmalloc vm_flags fields mm/shmem, swap: fix major fault counting mm/shmem, swap: rework swap entry and index calculation for large swapin mm/shmem, swap: simplify swapin path and result handling mm/shmem, swap: never use swap cache and readahead for SWP_SYNCHRONOUS_IO mm/shmem, swap: tidy up swap entry splitting mm/shmem, swap: tidy up THP swapin checks mm/shmem, swap: avoid redundant Xarray lookup during swapin x86/ftrace: enable EXECMEM_ROX_CACHE for ftrace allocations x86/kprobes: enable EXECMEM_ROX_CACHE for kprobes allocations execmem: drop writable parameter from execmem_fill_trapping_insns() execmem: add fallback for failures in vmalloc(VM_ALLOW_HUGE_VMAP) execmem: move execmem_force_rw() and execmem_restore_rox() before use execmem: rework execmem_cache_free() execmem: introduce execmem_alloc_rw() execmem: drop unused execmem_update_copy() mm: fix a UAF when vma->mm is freed after vma->vm_refcnt got dropped mm/rmap: add anon_vma lifetime debug check mm: remove mm/io-mapping.c ...
2025-08-02selftests/mm: add process_madvise() testswang lian4-0/+351
Add tests for process_madvise(), focusing on verifying behavior under various conditions including valid usage and error cases. [lianux.mm@gmail.com: v7] Link: https://lkml.kernel.org/r/20250729113109.12272-1-lianux.mm@gmail.com Link: https://lkml.kernel.org/r/20250721114614.40996-1-lianux.mm@gmail.com Signed-off-by: wang lian <lianux.mm@gmail.com> Suggested-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Suggested-by: David Hildenbrand <david@redhat.com> Suggested-by: Zi Yan <ziy@nvidia.com> Suggested-by: Mark Brown <broonie@kernel.org> Acked-by: SeongJae Park <sj@kernel.org> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Cc: Christian Brauner <brauner@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Kairui Song <ryncsn@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
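For reference, a minimal sketch of exercising process_madvise() on the caller's own memory (an illustration, not the selftest itself; assumes a libc and kernel new enough to expose SYS_pidfd_open and SYS_process_madvise):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 4 * 1024 * 1024;
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            int pidfd = syscall(SYS_pidfd_open, getpid(), 0);
            struct iovec iov = { .iov_base = buf, .iov_len = len };
            ssize_t ret;

            if (buf == MAP_FAILED || pidfd < 0)
                    return 1;
            memset(buf, 1, len);

            /* Valid usage: reclaim hint over the whole range. */
            ret = syscall(SYS_process_madvise, pidfd, &iov, 1, MADV_PAGEOUT, 0);
            printf("MADV_PAGEOUT -> %zd\n", ret);

            /* Error case: process_madvise() rejects advice values such as
             * MADV_DONTNEED, so this should fail with EINVAL. */
            ret = syscall(SYS_process_madvise, pidfd, &iov, 1, MADV_DONTNEED, 0);
            printf("MADV_DONTNEED -> %zd (%s)\n", ret, strerror(errno));
            return 0;
    }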
2025-07-31Merge tag 'mm-stable-2025-07-30-15-25' of ↵Linus Torvalds21-129/+1304
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull MM updates from Andrew Morton: "As usual, many cleanups. The below blurbiage describes 42 patchsets. 21 of those are partially or fully cleanup work. "cleans up", "cleanup", "maintainability", "rationalizes", etc. I never knew the MM code was so dirty. "mm: ksm: prevent KSM from breaking merging of new VMAs" (Lorenzo Stoakes) addresses an issue with KSM's PR_SET_MEMORY_MERGE mode: newly mapped VMAs were not eligible for merging with existing adjacent VMAs. "mm/damon: introduce DAMON_STAT for simple and practical access monitoring" (SeongJae Park) adds a new kernel module which simplifies the setup and usage of DAMON in production environments. "stop passing a writeback_control to swap/shmem writeout" (Christoph Hellwig) is a cleanup to the writeback code which removes a couple of pointers from struct writeback_control. "drivers/base/node.c: optimization and cleanups" (Donet Tom) contains largely uncorrelated cleanups to the NUMA node setup and management code. "mm: userfaultfd: assorted fixes and cleanups" (Tal Zussman) does some maintenance work on the userfaultfd code. "Readahead tweaks for larger folios" (Ryan Roberts) implements some tuneups for pagecache readahead when it is reading into order>0 folios. "selftests/mm: Tweaks to the cow test" (Mark Brown) provides some cleanups and consistency improvements to the selftests code. "Optimize mremap() for large folios" (Dev Jain) does that. A 37% reduction in execution time was measured in a memset+mremap+munmap microbenchmark. "Remove zero_user()" (Matthew Wilcox) expunges zero_user() in favor of the more modern memzero_page(). "mm/huge_memory: vmf_insert_folio_*() and vmf_insert_pfn_pud() fixes" (David Hildenbrand) addresses some warts which David noticed in the huge page code. These were not known to be causing any issues at this time. "mm/damon: use alloc_migrate_target() for DAMOS_MIGRATE_{HOT,COLD}" (SeongJae Park) provides some cleanup and consolidation work in DAMON. "use vm_flags_t consistently" (Lorenzo Stoakes) uses vm_flags_t in places where we were inappropriately using other types. "mm/memfd: Reserve hugetlb folios before allocation" (Vivek Kasireddy) increases the reliability of large page allocation in the memfd code. "mm: Remove pXX_devmap page table bit and pfn_t type" (Alistair Popple) removes several now-unneeded PFN_* flags. "mm/damon: decouple sysfs from core" (SeongJae Park) implements some cleanup and maintainability work in the DAMON sysfs layer. "madvise cleanup" (Lorenzo Stoakes) does quite a lot of cleanup/maintenance work in the madvise() code. "madvise anon_name cleanups" (Vlastimil Babka) provides additional cleanups on top of Lorenzo's effort. "Implement numa node notifier" (Oscar Salvador) creates a standalone notifier for NUMA node memory state changes. Previously these were lumped under the more general memory on/offline notifier. "Make MIGRATE_ISOLATE a standalone bit" (Zi Yan) cleans up the pageblock isolation code and fixes a potential issue which doesn't seem to cause any problems in practice. "selftests/damon: add python and drgn based DAMON sysfs functionality tests" (SeongJae Park) adds additional drgn- and python-based DAMON selftests which are more comprehensive than the existing selftest suite. "Misc rework on hugetlb faulting path" (Oscar Salvador) fixes a rather obscure deadlock in the hugetlb fault code and follows that fix with a series of cleanups. 
"cma: factor out allocation logic from __cma_declare_contiguous_nid" (Mike Rapoport) rationalizes and cleans up the highmem-specific code in the CMA allocator. "mm/migration: rework movable_ops page migration (part 1)" (David Hildenbrand) provides cleanups and future-preparedness to the migration code. "mm/damon: add trace events for auto-tuned monitoring intervals and DAMOS quota" (SeongJae Park) adds some tracepoints to some DAMON auto-tuning code. "mm/damon: fix misc bugs in DAMON modules" (SeongJae Park) does that. "mm/damon: misc cleanups" (SeongJae Park) also does what it claims. "mm: folio_pte_batch() improvements" (David Hildenbrand) cleans up the large folio PTE batching code. "mm/damon/vaddr: Allow interleaving in migrate_{hot,cold} actions" (SeongJae Park) facilitates dynamic alteration of DAMON's inter-node allocation policy. "Remove unmap_and_put_page()" (Vishal Moola) provides a couple of page->folio conversions. "mm: per-node proactive reclaim" (Davidlohr Bueso) implements a per-node control of proactive reclaim - beyond the current memcg-based implementation. "mm/damon: remove damon_callback" (SeongJae Park) replaces the damon_callback interface with a more general and powerful damon_call()+damos_walk() interface. "mm/mremap: permit mremap() move of multiple VMAs" (Lorenzo Stoakes) implements a number of mremap cleanups (of course) in preparation for adding new mremap() functionality: newly permit the remapping of multiple VMAs when the user is specifying MREMAP_FIXED. It still excludes some specialized situations where this cannot be performed reliably. "drop hugetlb_free_pgd_range()" (Anthony Yznaga) switches some sparc hugetlb code over to the generic version and removes the thus-unneeded hugetlb_free_pgd_range(). "mm/damon/sysfs: support periodic and automated stats update" (SeongJae Park) augments the present userspace-requested update of DAMON sysfs monitoring files. Automatic update is now provided, along with a tunable to control the update interval. "Some randome fixes and cleanups to swapfile" (Kemeng Shi) does what is claims. "mm: introduce snapshot_page" (Luiz Capitulino and David Hildenbrand) provides (and uses) a means by which debug-style functions can grab a copy of a pageframe and inspect it locklessly without tripping over the races inherent in operating on the live pageframe directly. "use per-vma locks for /proc/pid/maps reads" (Suren Baghdasaryan) addresses the large contention issues which can be triggered by reads from that procfs file. Latencies are reduced by more than half in some situations. The series also introduces several new selftests for the /proc/pid/maps interface. "__folio_split() clean up" (Zi Yan) cleans up __folio_split()! "Optimize mprotect() for large folios" (Dev Jain) provides some quite large (>3x) speedups to mprotect() when dealing with large folios. "selftests/mm: reuse FORCE_READ to replace "asm volatile("" : "+r" (XXX));" and some cleanup" (wang lian) does some cleanup work in the selftests code. "tools/testing: expand mremap testing" (Lorenzo Stoakes) extends the mremap() selftest in several ways, including adding more checking of Lorenzo's recently added "permit mremap() move of multiple VMAs" feature. "selftests/damon/sysfs.py: test all parameters" (SeongJae Park) extends the DAMON sysfs interface selftest so that it tests all possible user-requested parameters. 
* tag 'mm-stable-2025-07-30-15-25' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (370 commits) MAINTAINERS: add missing headers to mempory policy & migration section MAINTAINERS: add missing file to cgroup section MAINTAINERS: add MM MISC section, add missing files to MISC and CORE MAINTAINERS: add missing zsmalloc file MAINTAINERS: add missing files to page alloc section MAINTAINERS: add missing shrinker files MAINTAINERS: move memremap.[ch] to hotplug section MAINTAINERS: add missing mm_slot.h file THP section MAINTAINERS: add missing interval_tree.c to memory mapping section MAINTAINERS: add missing percpu-internal.h file to per-cpu section mm/page_alloc: remove trace_mm_alloc_contig_migrate_range_info() selftests/damon: introduce _common.sh to host shared function selftests/damon/sysfs.py: test runtime reduction of DAMON parameters selftests/damon/sysfs.py: test non-default parameters runtime commit selftests/damon/sysfs.py: generalize DAMON context commit assertion selftests/damon/sysfs.py: generalize monitoring attributes commit assertion selftests/damon/sysfs.py: generalize DAMOS schemes commit assertion selftests/damon/sysfs.py: test DAMOS filters commitment selftests/damon/sysfs.py: generalize DAMOS scheme commit assertion selftests/damon/sysfs.py: test DAMOS destinations commitment ...
2025-07-24tools/testing/selftests: explicitly test split multi VMA mremap moveLorenzo Stoakes1-1/+126
Check that moving a range of VMAs where we are offset into the first and last VMAs works correctly. This results in those VMAs being split at the points at which we are offset into them. We explicitly test both the ordinary MREMAP_FIXED multi VMA move case and the MREMAP_DONTUNMAP multi VMA move case. Link: https://lkml.kernel.org/r/b04920bb6c09dc86c207c251eab8ec670fbbcaef.1753119043.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
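A rough sketch of the split scenario (illustrative, not the selftest code; assumes a kernel that permits multi-VMA MREMAP_FIXED moves):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t pg = getpagesize();
            char *src = mmap(NULL, 8 * pg, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            char *dst = mmap(NULL, 8 * pg, PROT_NONE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            void *moved;

            if (src == MAP_FAILED || dst == MAP_FAILED)
                    return 1;
            /* Different protections keep the two halves as separate VMAs. */
            mprotect(src + 4 * pg, 4 * pg, PROT_READ);

            /* Move a window starting one page into the first VMA and ending
             * one page short of the end of the second: both source VMAs are
             * split at those offsets before the move. */
            moved = mremap(src + pg, 6 * pg, 6 * pg,
                           MREMAP_MAYMOVE | MREMAP_FIXED, dst);
            printf("move -> %s\n", moved == MAP_FAILED ? "failed" : "ok");
            return 0;
    }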
2025-07-24tools/testing/selftests: test MREMAP_DONTUNMAP on multiple VMA moveLorenzo Stoakes1-10/+19
We support MREMAP_MAYMOVE | MREMAP_FIXED | MREMAP_DONTUNMAP for moving multiple VMAs via mremap(), so assert that the tests pass both with and without MREMAP_DONTUNMAP set. Additionally, set success = false when mremap() fails. Such a failure cannot realistically happen, so this in no way impacted the test outcome, but it is incorrect to indicate a test pass when something has failed. Link: https://lkml.kernel.org/r/d7359941981e4e44c774753b3e364d1c54928e6a.1753119043.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
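Relative to the multi-VMA move sketch shown under the preceding entry, the only change needed to exercise this mode is the flag set; a hedged fragment (MREMAP_DONTUNMAP additionally requires a private anonymous mapping and old_len == new_len, and leaves the source VMAs mapped after the move):

    /* Same src/dst setup as the sketch above; just add MREMAP_DONTUNMAP
     * so the source VMAs stay mapped once their pages have moved. */
    moved = mremap(src + pg, 6 * pg, 6 * pg,
                   MREMAP_MAYMOVE | MREMAP_FIXED | MREMAP_DONTUNMAP, dst);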
2025-07-24tools/testing/selftests: add mremap() shrink test for multiple VMAsLorenzo Stoakes1-1/+82
Patch series "tools/testing: expand mremap testing". Expand our mremap() testing to further assert that behaviour is as expected. There is a poorly documented mremap() feature whereby it is possible to mremap() multiple VMAs (even with gaps) when shrinking, as long as the resultant shrunk range spans only a single VMA. So we start by asserting this behaviour functions correctly both with an in-place shrink and a shrink/move. Next, we further test the newly introduced ability to mremap() multiple VMAs when performing a MAP_FIXED move (that is without the size being changed), firstly by asserting that MREMAP_DONTUNMAP has no bearing on this behaviour. Finally, we explicitly test that such moves, when splitting source VMAs, function correctly. This patch (of 3): There is an apparently little-known feature of mremap() whereby, in stark contrast to other modes (other than the recently introduced capacity to move multiple VMAs), the input source range span multiple VMAs with gaps between. This is, when shrinking a VMA, whether moving it or not, and the shrink would reduce the range to a single VMA - this is permitted, as the shrink is actioned by an unmap. This patch adds tests to assert that this behaves as expected. Link: https://lkml.kernel.org/r/cover.1753119043.git.lorenzo.stoakes@oracle.com Link: https://lkml.kernel.org/r/f08122893a26092a2bec6e69443e87f468ffdbed.1753119043.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-07-24selftests/mm: guard-regions: Use SKIP() instead of ksft_exit_skip()wang lian1-1/+1
Use SKIP() so that only the current test is skipped on a permission failure, instead of terminating the entire test binary. Link: https://lkml.kernel.org/r/20250717131857.59909-3-lianux.mm@gmail.com Signed-off-by: wang lian <lianux.mm@gmail.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Christian Brauner <brauner@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Kairui Song <ryncsn@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
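For reference, a hedged sketch of the pattern in kselftest_harness.h terms (the permission check shown is hypothetical, not the guard-regions code):

    /* Inside FIXTURE_SETUP() or TEST_F(): skip only this test when the
     * required privilege is missing, instead of exiting the whole binary
     * as ksft_exit_skip() would. */
    if (!have_required_permissions())       /* hypothetical helper */
            SKIP(return, "insufficient permissions, skipping test");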
2025-07-24selftests/mm: reuse FORCE_READ to replace "asm volatile("" : "+r" (XXX));"wang lian7-39/+31
Patch series "selftests/mm: reuse FORCE_READ to replace "asm volatile("" : "+r" (XXX));" and some cleanup", v2. This series introduces a common FORCE_READ() macro to replace the cryptic asm volatile("" : "+r" (variable)); construct used in several mm selftests. This improves code readability and maintainability by removing duplicated, hard-to-understand code. This patch (of 2): Several mm selftests use the `asm volatile("" : "+r" (variable));` construct to force a read of a variable, preventing the compiler from optimizing away the memory access. This idiom is cryptic and duplicated across multiple test files. Following a suggestion from David[1], this patch refactors this common pattern into a FORCE_READ() macro Link: https://lkml.kernel.org/r/20250717131857.59909-1-lianux.mm@gmail.com Link: https://lkml.kernel.org/r/20250717131857.59909-2-lianux.mm@gmail.com Link: https://lore.kernel.org/lkml/4a3e0759-caa1-4cfa-bc3f-402593f1eee3@redhat.com/ [1] Signed-off-by: wang lian <lianux.mm@gmail.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Christian Brauner <brauner@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Kairui Song <ryncsn@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-07-24tools/testing/selftests: extend mremap_test to test multi-VMA mremapLorenzo Stoakes1-1/+145
Now that we have added the ability to move multiple VMAs at once, assert that this functions correctly, both overwriting VMAs and moving backwards and forwards with merge and VMA invalidation. Additionally assert that page tables are correctly propagated by setting random data and reading it back. Link: https://lkml.kernel.org/r/139074a24a011ca4ed52498a7fa2080024b43917.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
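A hedged sketch of the data-propagation idea (illustrative, not the selftest; the MADV_DONTFORK call is an assumption used here simply to keep two adjacent anonymous VMAs from merging):

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t pg = getpagesize(), len = 8 * pg, i;
            char *src = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            char *dst = mmap(NULL, len, PROT_NONE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            char *moved;

            if (src == MAP_FAILED || dst == MAP_FAILED)
                    return 1;
            /* Differing VM flags keep the two halves as separate VMAs
             * (assumption: MADV_DONTFORK is enough to prevent merging). */
            if (madvise(src + len / 2, len / 2, MADV_DONTFORK))
                    return 1;

            /* Deterministic pseudo-random fill of the whole source range. */
            srand(0x1234);
            for (i = 0; i < len; i++)
                    src[i] = (char)rand();

            moved = mremap(src, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
            if (moved == MAP_FAILED)
                    return 1;

            /* Regenerate the sequence and confirm the contents moved with
             * the mapping, i.e. the page tables were carried across. */
            srand(0x1234);
            for (i = 0; i < len; i++)
                    if (moved[i] != (char)rand())
                            return 1;
            return 0;
    }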
2025-07-19selftests/mm: fix split_huge_page_test for folio_split() testsZi Yan1-1/+2
PID_FMT does not have an offset field, so folio_split() tests are not performed. Add PID_FMT_OFFSET with an offset field and use it to perform folio_split() tests. Link: https://lkml.kernel.org/r/20250709012800.3225727-1-ziy@nvidia.com Fixes: 80a5c494c89f ("selftests/mm: add tests for folio_split(), buddy allocator like split") Signed-off-by: Zi Yan <ziy@nvidia.com> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Donet Tom <donettom@linux.ibm.com> Tested-by: Donet Tom <donettom@linux.ibm.com> Cc: Barry Song <baohua@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
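A hedged sketch of the gist (format strings are illustrative approximations of the selftest macros, assuming the split_huge_pages debugfs file accepts an optional trailing in-folio offset):

    /* The test writes "<pid>,<vaddr_start>,<vaddr_end>,<new_order>[,<offset>]"
     * to /sys/kernel/debug/split_huge_pages; without the trailing field the
     * kernel never sees an offset, so folio_split() is not exercised. */
    #define PID_FMT_SKETCH          "%d,0x%lx,0x%lx,%d"
    #define PID_FMT_OFFSET_SKETCH   PID_FMT_SKETCH ",%d"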